CN111050070B - Video shooting method and device, electronic equipment and medium


Info

Publication number
CN111050070B
Authority
CN
China
Prior art keywords
video
input
areas
regions
shooting
Prior art date
Legal status
Active
Application number
CN201911317151.9A
Other languages
Chinese (zh)
Other versions
CN111050070A (en)
Inventor
付晋城
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN201911317151.9A
Publication of CN111050070A
Application granted
Publication of CN111050070B
Legal status: Active

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/62 Control of parameters via user interfaces
    • H04N23/63 Control of cameras or camera modules by using electronic viewfinders
    • H04N23/631 Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H04N23/632 Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters
    • H04N23/80 Camera processing pipelines; Components thereof
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265 Mixing

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

The embodiment of the invention discloses a video shooting method, a video shooting device, an electronic device and a medium. The video shooting method is applied to a first electronic device and includes the following steps: displaying a shooting preview interface, where the shooting preview interface includes M areas; receiving first inputs to N areas among the M areas respectively, where M is a positive integer greater than or equal to 2 and N is a positive integer less than or equal to M; and in response to the first inputs, shooting a video associated with each of the N areas to synthesize a target video, where, when the target video is in a playing state, the videos associated with the M areas are played synchronously. The embodiment of the invention avoids the situation in which the target video has to be obtained by selecting videos from the photo album after the videos are shot, and thus improves the efficiency of obtaining the target video.

Description

Video shooting method and device, electronic equipment and medium
Technical Field
The embodiment of the invention relates to the field of electronic equipment, in particular to a video shooting method and device, electronic equipment and a medium.
Background
With the rapid development of electronic devices, more and more users are beginning to use electronic devices to shoot videos that record the moments of their lives. Nowadays, users no longer merely want to record video; they also want the videos to look cooler and more interesting. To this end, users synthesize a plurality of videos into a multi-grid video. A multi-grid video is a video in which a plurality of areas are displayed during playback and a video is played in each area, so that the plurality of videos are played in the plurality of areas simultaneously.
At present, the scheme for acquiring a multi-grid video is as follows: the user first shoots a plurality of videos; after the shooting is finished, the user opens video editing software, selects the shot videos on the interface of the video editing software, and synthesizes the shot videos into a multi-grid video.
Therefore, in the process of acquiring the multi-grid video, the user needs to select a plurality of videos in the album. Since videos and pictures are mixed together when the album is displayed, it is troublesome for the user to select the videos in the album. Especially when the number of videos and pictures in the album is large, considerable effort is needed to select the plurality of videos, and the process of acquiring the multi-grid video is troublesome.
Disclosure of Invention
The embodiment of the invention provides a video shooting method, which aims to solve the problem that the process of acquiring a multi-grid video is troublesome.
In order to solve the technical problem, the invention is realized as follows:
in a first aspect, an embodiment of the present invention provides a video shooting method applied to a first electronic device, where the method includes:
displaying a shooting preview interface, wherein the shooting preview interface comprises M areas;
respectively receiving first inputs to N regions in the M regions, wherein M is a positive integer greater than or equal to 2, and N is a positive integer less than or equal to M;
and responding to the first input, respectively shooting videos associated with each of the N areas to synthesize a target video, wherein when the target video is in a playing state, the videos associated with each of the M areas are synchronously played.
In a second aspect, an embodiment of the present invention provides a video shooting apparatus applied to a first electronic device, where the apparatus includes:
the preview interface display module is used for displaying a shooting preview interface, wherein the shooting preview interface comprises M areas;
a first input receiving module, configured to receive first inputs to N regions of the M regions, respectively, where M is a positive integer greater than or equal to 2, and N is a positive integer less than or equal to M;
and the first input response module is used for responding to the first input, respectively shooting videos related to each of the N areas so as to synthesize a target video, wherein when the target video is in a playing state, the videos related to each of the M areas are synchronously played.
In a third aspect, an embodiment of the present invention provides an electronic device, which includes a processor, a memory, and a computer program stored in the memory and executable on the processor, where the computer program, when executed by the processor, implements the steps of the video shooting method.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of the video shooting method.
In the embodiment of the invention, the videos respectively associated with N areas among the M areas are shot so that the videos respectively associated with the M areas can be synthesized into the target video, and the target video may be a multi-grid video. Therefore, the embodiment of the invention obtains the multi-grid video right after the videos are shot, which avoids the situation in which, after shooting, videos have to be selected from the photo album in order to obtain the multi-grid video. Since no video needs to be selected, the efficiency of acquiring the multi-grid video is improved.
Drawings
The present invention will be better understood from the following description of specific embodiments thereof taken in conjunction with the accompanying drawings, in which like or similar reference characters designate like or similar features.
FIG. 1 is a flow chart illustrating a video capture method according to an embodiment of the present invention;
FIG. 2 is a diagram illustrating a preview interface for opening a grid capture mode according to an embodiment of the present invention;
FIG. 3 illustrates a schematic view of a capture preview interface displaying a video associated with a region according to one embodiment of the present invention;
FIG. 4 is a diagram illustrating a capture preview interface accessed by opening a video file according to one embodiment of the present invention;
FIG. 5 is a diagram illustrating a capture preview interface including a music selection control according to one embodiment of the present invention;
FIG. 6 is a diagram illustrating an interface including a list of music according to one embodiment of the present invention;
FIG. 7 illustrates an interface diagram including controls for turning on the original sound and the accompaniment according to one embodiment of the present invention;
FIG. 8 is a schematic interface diagram of moving a video according to an embodiment of the present invention;
FIG. 9 is an interface diagram illustrating the results of the interchange of videos of two regions according to one embodiment of the present invention;
FIG. 10 is a schematic diagram of an interface for selecting a region of a video to be captured according to an embodiment of the present invention;
FIG. 11 is a schematic diagram illustrating an interface including a grid number setting control according to an embodiment of the invention;
FIG. 12 is a schematic diagram of an interface including a video duration setting control according to an embodiment of the present invention;
FIG. 13 is a schematic structural diagram of a video shooting apparatus according to an embodiment of the present invention;
FIG. 14 shows a hardware structure diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Fig. 1 is a flowchart illustrating a video capture method according to an embodiment of the present invention. The video shooting method is applied to the first electronic equipment, and as shown in fig. 1, the video shooting method comprises the following steps:
step 101, displaying a shooting preview interface, wherein the shooting preview interface comprises M areas.
Alternatively, in step 101, in the case where the target photographing mode is turned on, a photographing preview interface is displayed.
For example, the target shooting mode is a grid shooting mode. Referring to fig. 2, the camera has a plurality of shooting modes, namely a beauty mode, a photographing mode, a grid shooting mode, and a video mode. When the user opens the camera, the first electronic device enters the photographing mode by default. When the user performs a left-slide operation, the first electronic device receives the left-slide input and, in response to the input, starts the grid shooting mode. Four areas are displayed on the shooting preview interface in the grid shooting mode.
Step 102, respectively receiving first inputs to N regions of M regions, where M is a positive integer greater than or equal to 2, and N is a positive integer less than or equal to M.
And 103, responding to the first input, respectively shooting videos associated with each of the N areas to synthesize a target video, wherein when the target video is in a playing state, the videos associated with each of the M areas are synchronously played.
In step 103, in the case that each of the M regions is associated with a video, the target video is synthesized according to the videos respectively associated with the M regions. The target video may be a multi-grid video.
For example, with continued reference to fig. 2, when the grid shooting mode is turned on, the first electronic device by default displays the preview picture captured by the camera in the area 201 at the upper left corner. The first electronic device receives an input on the capture control 202 and, in response to the input, shoots the video associated with the area 201 at the upper left corner. After the first electronic device finishes shooting the video associated with the area 201 at the upper left corner, the preview picture captured by the camera is displayed in the area 203 at the upper right corner. The above steps are repeated to shoot the video associated with the area 203 at the upper right corner, and so on, until the videos respectively associated with the four areas have been shot. The first electronic device then synthesizes the videos respectively associated with the four areas into a four-grid video.
In the embodiment of the invention, the videos respectively associated with N areas among the M areas are shot so that the videos respectively associated with the M areas can be synthesized into the target video, and the target video may be a multi-grid video. Therefore, the embodiment of the invention obtains the multi-grid video right after the videos are shot, which avoids the situation in which, after shooting, videos have to be selected from the photo album in order to obtain the multi-grid video. Since no video needs to be selected, the efficiency of acquiring the multi-grid video is improved.
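Purely as an illustration (it is not part of the claimed solution and is not tied to any real camera API), the following Kotlin sketch models the bookkeeping implied by steps 101 to 103: M areas, a video becoming associated with an area after a first input, and synthesis once every area has a video. The names GridCaptureSession and CapturedClip are assumptions of this sketch, and the actual recording and compositing are left as stubs.

```kotlin
// Minimal sketch of the grid-capture bookkeeping described in steps 101-103.
// GridCaptureSession and CapturedClip are illustrative names, not the patent's
// terminology; camera recording and video compositing are deliberately stubbed out.

data class CapturedClip(val regionIndex: Int, val filePath: String)

class GridCaptureSession(private val regionCount: Int) {          // the M areas
    private val clips = arrayOfNulls<CapturedClip>(regionCount)

    // Called when a first input selects area `index` and recording finishes.
    fun recordRegion(index: Int, filePath: String) {
        require(index in 0 until regionCount) { "no such area: $index" }
        clips[index] = CapturedClip(index, filePath)               // associate the video with the area
    }

    fun allRegionsFilled(): Boolean = clips.all { it != null }

    // Once every area has a clip, the target (multi-grid) video can be synthesized,
    // e.g. by tiling the clips so they play back synchronously.
    fun synthesizeTargetVideo(): List<CapturedClip> {
        check(allRegionsFilled()) { "every area needs an associated video first" }
        return clips.filterNotNull()                               // hand off to a compositor
    }
}

fun main() {
    val session = GridCaptureSession(regionCount = 4)
    listOf("a.mp4", "b.mp4", "c.mp4", "d.mp4").forEachIndexed { i, path ->
        session.recordRegion(i, path)
    }
    println(session.synthesizeTargetVideo())
}
```

In this model, synthesizeTargetVideo only hands the ordered clips to a compositor; how the tiling and the synchronous playback are implemented is left open, as in the description above.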
Optionally, the N regions comprise a first region; the video shooting method further comprises:
after the video associated with the first area is shot, saving the video associated with the first area in the case that the shooting preview interface is exited;
in the case that the shooting preview interface is displayed again, displaying information of the video associated with the first area in the first area among the M areas displayed on the shooting preview interface. For example, the information of the video is a to-be-played picture of the video or one frame of the video.
For example, referring to fig. 3, the first area consists of the area 201 and the area 203. The first electronic device shoots the videos respectively associated with the area 201 and the area 203, and does not shoot the videos associated with the other two areas. If the user needs to go elsewhere, the first electronic device locks the screen, closes the camera, or runs the camera in the background, and the shooting preview interface is exited. The first electronic device saves the videos respectively associated with the area 201 and the area 203. Afterwards, when the first electronic device opens the camera and enters the target shooting mode again, the shooting preview interface shown in fig. 3 is displayed again: a to-be-played picture of the video associated with the area 201 is displayed in the area 201 of the shooting preview interface, and a to-be-played picture of the video associated with the area 203 is displayed in the area 203.
In the embodiment of the invention, in the case of exiting the shooting preview interface halfway, the video associated with the first area is saved. Therefore, the next time the target shooting mode is entered, the information of the video previously shot in the target shooting mode can be displayed, and the user does not need to re-shoot the video that has already been shot. This is convenient for the user and can improve the efficiency of shooting the target video.
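As an illustrative sketch only, the saving and restoring of partially shot areas described above can be modelled as follows; the plain-text draft file and the RegionClip name are assumptions of this sketch, not the storage format an actual device would use.

```kotlin
// Sketch of persisting partially shot areas when the preview interface is exited and
// restoring them when it is shown again. A tab-separated text file stands in for
// whatever storage a real app would use (database, MediaStore, etc.).

import java.io.File

data class RegionClip(val regionIndex: Int, val videoPath: String)

fun saveDraft(draftFile: File, clips: List<RegionClip>) {
    // one "regionIndex<TAB>path" line per already-shot area
    draftFile.writeText(clips.joinToString("\n") { "${it.regionIndex}\t${it.videoPath}" })
}

fun restoreDraft(draftFile: File): List<RegionClip> =
    if (!draftFile.exists()) emptyList()
    else draftFile.readLines()
        .filter { it.isNotBlank() }
        .map { line ->
            val (index, path) = line.split('\t', limit = 2)
            RegionClip(index.toInt(), path)   // the area shows this video's info on re-entry
        }

fun main() {
    val draft = File.createTempFile("grid_draft", ".txt")
    saveDraft(draft, listOf(RegionClip(0, "region0.mp4"), RegionClip(2, "region2.mp4")))
    println(restoreDraft(draft))              // areas 0 and 2 are restored; the others stay empty
}
```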
Optionally, displaying a shooting preview interface, including:
receiving a second input to the video file under the condition that the interface comprising the video file is displayed, wherein the video file comprises videos related to a second area obtained by shooting through a second electronic device, and the second area is an area except for N areas in the M areas;
and displaying a shooting preview interface in response to the second input.
For example, referring to fig. 4, after the second electronic device finishes capturing the video associated with the area 204 and the video associated with the area 205, a video file is generated. The video file includes video associated with area 204 and area 205, respectively. The second electronic device shares the video file with the first electronic device, for example, the second electronic device sends the video file to the first electronic device, or the second electronic device uploads the video file to a server, and the first electronic device downloads the video file.
After the first electronic device acquires the video file, the video file is opened, and a shooting preview interface shown in fig. 4 is displayed. The first electronic device continues to capture the video associated with the area 201 and the area 203, respectively, on the basis of the capture preview interface shown in fig. 4. In the case that the four areas are all associated with videos, the first electronic device synthesizes the videos respectively associated with the area 204 and the area 205 shot by the second electronic device and the videos respectively associated with the area 201 and the area 203 shot by the first electronic device into a four-grid video.
The videos in the synthesized multi-grid video are not limited to being shot by two electronic devices; they may also be shot by three or more electronic devices. When each of the M areas on the shooting preview interface is associated with a video, the first electronic device that shoots the last video synthesizes the videos respectively associated with the M areas into a multi-grid video.
In the embodiment of the invention, after the second electronic device finishes shooting the video associated with the second area, a video file is generated, and the second electronic device shares the video file with the first electronic device. After the first electronic device acquires the video file, it can continue to shoot the videos of the remaining areas on the basis of the videos shot by the second electronic device. The target video is thus shot by at least two electronic devices, which meets the need for multiple users to shoot one target video together.
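By way of illustration only, the following Kotlin sketch shows one way a partially filled project received from a second electronic device could be merged with locally shot areas before synthesis; the Clip type, the project representation, and the merge rule are assumptions of the sketch, not the patent's file format.

```kotlin
// Sketch of merging a partial project from a second device with locally shot areas.
// The merge rule (local clips take precedence for areas the local device has shot)
// is an assumption made for illustration.

data class Clip(val regionIndex: Int, val videoPath: String, val shotBy: String)

fun mergeProjects(local: List<Clip>, remote: List<Clip>, regionCount: Int): List<Clip> {
    val merged = arrayOfNulls<Clip>(regionCount)
    remote.forEach { merged[it.regionIndex] = it }          // areas shot on the other device
    local.forEach { merged[it.regionIndex] = it }           // locally shot areas take precedence
    return merged.filterNotNull()
}

fun main() {
    val fromSecondDevice = listOf(
        Clip(2, "lower_left.mp4", shotBy = "device B"),
        Clip(3, "lower_right.mp4", shotBy = "device B"),
    )
    val shotLocally = listOf(
        Clip(0, "upper_left.mp4", shotBy = "device A"),
        Clip(1, "upper_right.mp4", shotBy = "device A"),
    )
    val all = mergeProjects(shotLocally, fromSecondDevice, regionCount = 4)
    if (all.size == 4) println("all areas filled, ready to synthesize: $all")
}
```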
Optionally, the video shooting method further comprises:
displaying a video editing interface under the condition that each of the M areas is associated with a video, wherein the video editing interface comprises a music selection control;
receiving a third input to the music selection control;
synthesizing a multi-grid video, comprising:
in response to a third input, synthesizing the videos respectively associated with the M regions into a target video according to the background music associated with the third input.
For example, M is 4, when the video associated with the last area of the four areas is shot, each area of the four areas is associated with a video, and the first electronic device displays a video editing interface as shown in fig. 5. A music selection control 206 is included in the video editing interface. The first electronic device receives a third input to the MUSIC selection control 206 and, in response to the third input, displays a MUSIC list as shown in fig. 6, the MUSIC list including MUSIC 001 to MUSIC 003.
In the case where the user selects MUSIC 001 (i.e., the background MUSIC associated with the third input), the first electronic device synthesizes a multi-grid video using MUSIC 001. In this case, when the multi-grid video is played, the MUSIC 001 is played instead of the original sound of the video respectively associated with the four regions.
The music list may further include an original sound, and in a case where the user selects the original sound, the first electronic device synthesizes the multi-grid video using the original sounds of the videos respectively associated with the four regions. In this case, when the multi-grid video is played, the original sounds of the videos respectively associated with the four regions are played simultaneously.
In the embodiment of the invention, the user can select background music as desired, so that background music is added to the target video and the quality of the produced target video is improved.
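For illustration only, the music-selection step can be modelled as follows; the MusicEntry and ComposePlan types are assumptions of this sketch, and the actual audio muxing is out of scope.

```kotlin
// Sketch of the music list from fig. 6: the chosen entry decides whether the target
// video is synthesized with a background track or keeps the clips' original audio.

data class MusicEntry(val title: String, val trackPath: String?)   // null path = "original sound" entry

data class ComposePlan(val clipPaths: List<String>, val backgroundTrack: String?) {
    // When no track is chosen, the clips' own audio is kept in the synthesized video.
    val usesOriginalSound: Boolean get() = backgroundTrack == null
}

fun planForSelection(clipPaths: List<String>, musicList: List<MusicEntry>, selectedIndex: Int): ComposePlan =
    ComposePlan(clipPaths, musicList[selectedIndex].trackPath)

fun main() {
    val clips = listOf("r0.mp4", "r1.mp4", "r2.mp4", "r3.mp4")
    val musicList = listOf(
        MusicEntry("Original sound", trackPath = null),
        MusicEntry("MUSIC 001", trackPath = "music_001.mp3"),
        MusicEntry("MUSIC 002", trackPath = "music_002.mp3"),
    )
    println(planForSelection(clips, musicList, selectedIndex = 1))  // MUSIC 001 becomes the background track
    println(planForSelection(clips, musicList, selectedIndex = 0))  // the clips keep their original audio
}
```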
Optionally, synthesizing the videos respectively associated with the M regions into the target video according to the background music associated with the third input, including:
receiving a fourth input to the target control in the case that the video editing interface comprises the target control; the target control indicates whether to use background music to synthesize the target video or whether to use audio of videos respectively associated with the M areas to synthesize the target video;
and responding to the fourth input, and synthesizing the videos respectively associated with the M areas into a target video according to the background music and the indication of the target control.
For example, the first electronic device displays a video editing interface. The first electronic device receives a downslide input to the video editing interface and displays the video editing interface as shown in fig. 7 in response to the downslide input. Or the first electronic device receives input of a setting control on the video editing interface and displays the video editing interface as shown in fig. 7 in response to the input of the setting control. The video editing interface includes two controls, control 207 and control 208.
The first electronic device may receive a fourth input to the control 207 (i.e., the target control), which may be an input to turn the original sound on or an input to turn the original sound off. If the fourth input is an input to turn on the original sound, the target video is synthesized using the original sounds of the videos respectively associated with the four regions. If the fourth input is an input for turning off the original sound, the target video is synthesized without using the original sounds of the videos associated with the four regions, respectively.
The first electronic device may receive a fourth input to the control 208 (i.e., the target control), which may be an input to turn on accompaniment (i.e., background music) or an input to turn off accompaniment. If the fourth input is an input to turn on the accompaniment, the target video is synthesized using the background music. If the fourth input is an input to close the accompaniment, the background music is not used to synthesize the target video.
The electronic device may also receive fourth inputs to both the control 207 and the control 208. For example, if the fourth inputs are an input to turn on the original sound and an input to turn on the accompaniment, the target video is synthesized using both the original sounds of the videos respectively associated with the four areas and the background music. In this case, when the target video is played, not only the original sounds of the videos respectively associated with the four areas but also the background music is played. Therefore, the embodiment of the invention can be applied to a scene in which a user sings.
For example, the first electronic device shoots a plurality of videos of one user singing, and synthesizes a multi-grid video using the original sounds of the plurality of videos. When the multi-grid video is played, the original sounds of the multiple videos of that one user singing are played simultaneously, so the method is suitable for scenes in which the user sings harmony with himself or herself.
For another example, the first electronic device shoots a plurality of videos of a plurality of users singing, and synthesizes a multi-grid video using the original sounds of the plurality of videos. When the multi-grid video is played, the original sounds of the plurality of users singing are played simultaneously. Although the videos of the plurality of users are shot at different times, the multi-grid video plays back as a video of the plurality of users singing in chorus. In other words, the videos of the plurality of users can be shot at different times, while the synthesized multi-grid video still has the chorus effect of the plurality of users.
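As a sketch only, the two toggles shown in fig. 7 (original sound and accompaniment) can be modelled as a small decision over which audio tracks are mixed into the synthesized target video; the AudioToggles model is an assumption of this sketch.

```kotlin
// Sketch of how the two fig. 7 toggles can drive the audio that goes into the target video.

data class AudioToggles(val originalSoundOn: Boolean, val accompanimentOn: Boolean)

fun tracksToMix(
    toggles: AudioToggles,
    clipAudioTracks: List<String>,
    backgroundTrack: String?,
): List<String> {
    val tracks = mutableListOf<String>()
    if (toggles.originalSoundOn) tracks.addAll(clipAudioTracks)            // the clips' own audio
    if (toggles.accompanimentOn && backgroundTrack != null) tracks.add(backgroundTrack)
    return tracks
}

fun main() {
    val clipsAudio = listOf("r0.aac", "r1.aac", "r2.aac", "r3.aac")
    // Both toggles on: suits the singing scenario described above (voices plus accompaniment).
    println(tracksToMix(AudioToggles(true, true), clipsAudio, "accompaniment.mp3"))
    // Only the accompaniment on: the original sounds are dropped from the target video.
    println(tracksToMix(AudioToggles(false, true), clipsAudio, "accompaniment.mp3"))
}
```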
Optionally, the video editing interface includes M regions, and information of a video associated with a region is displayed in each of the M regions; the video shooting method further comprises:
receiving a sixth input dragging information of the video associated with one of the two areas to the other area for two of the M areas;
in response to a sixth input, the videos associated with the two regions are interchanged.
For example, referring to fig. 8, for the area 203 and the area 204 among the four areas, a to-be-played picture A of the video associated with the area 203 is displayed in the area 203, and a to-be-played picture B of the video associated with the area 204 is displayed in the area 204. The first electronic device receives a sixth input in which the user's finger 209 drags the to-be-played picture B to the area 203. In response to the sixth input, the first electronic device interchanges the videos associated with the area 203 and the area 204, i.e., switches from the interface shown in fig. 8 to the interface shown in fig. 9.
In the embodiment of the invention, the user can drag a video as desired so as to interchange the videos associated with two areas; that is, the user can arrange where each video appears during playback, which makes it easy for the user to obtain a satisfactory target video.
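For illustration only, the drag-to-interchange behaviour amounts to swapping two entries in the list of area-to-video bindings; the mutable-list representation below is an assumption of this sketch.

```kotlin
// Sketch of the drag-to-swap behaviour: dragging the to-be-played picture of one
// area onto another simply exchanges the clips bound to the two areas.

fun swapRegions(regionClips: MutableList<String?>, from: Int, to: Int) {
    require(from in regionClips.indices && to in regionClips.indices) { "invalid area index" }
    val tmp = regionClips[from]
    regionClips[from] = regionClips[to]     // the video formerly in `to` now plays in `from`
    regionClips[to] = tmp                   // and vice versa
}

fun main() {
    // Four areas; as in fig. 8, one area holds clip A and a neighbouring area holds clip B.
    val clips = mutableListOf<String?>(null, "A.mp4", "B.mp4", null)
    swapRegions(clips, from = 2, to = 1)    // drag picture B onto A's area
    println(clips)                          // [null, B.mp4, A.mp4, null]
}
```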
Optionally, the capturing a video associated with each of the N regions respectively includes:
receiving a fifth input selected for a third region of the N regions;
in response to a fifth input, a preview screen is displayed in the third area, and a video associated with the third area is captured. The third region may be the same region as the first region or a different region.
For example, with continued reference to fig. 2, the shooting preview interface of the first electronic device has four areas, and in a case where none of the four areas is associated with a video, the preview image is displayed in the upper left corner area 201 by default, and the other areas are blank. The first electronic equipment receives a fifth input selected in the upper right corner area 203 of the shooting preview interface, and responds to the fifth input, the area for displaying the preview screen is switched from the upper left corner area 201 to the upper right corner area 203. That is, as shown in fig. 10, a preview screen is displayed in the upper right corner area 203 to capture a video associated with the upper right corner area 203.
In the embodiment of the invention, the user can select the area to be associated with the video to be shot as desired; that is, the user can arrange where each video appears during playback, thereby obtaining a target video that satisfies the user.
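As an illustrative sketch, the fifth-input behaviour (selecting which area receives the preview picture, and therefore the next recording) can be modelled as follows; the PreviewState class is an assumption of this sketch.

```kotlin
// Sketch of the fifth input: the user taps an empty area, the live preview moves there,
// and the next recording is associated with that area.

class PreviewState(regionCount: Int) {
    val clips = arrayOfNulls<String>(regionCount)
    var previewRegion: Int = 0                       // preview shown in the upper-left area by default
        private set

    // Fifth input: select the area that should receive the next video.
    fun selectRegion(index: Int) {
        require(clips[index] == null) { "area $index already has a video" }
        previewRegion = index                        // the preview picture switches to this area
    }

    fun finishRecording(videoPath: String) {
        clips[previewRegion] = videoPath             // the shot video is bound to that area
    }
}

fun main() {
    val state = PreviewState(regionCount = 4)
    state.selectRegion(1)                            // e.g. tap the upper-right area
    state.finishRecording("upper_right.mp4")
    println(state.clips.toList())                    // [null, upper_right.mp4, null, null]
}
```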
Optionally, after displaying the shooting preview interface, the video shooting method further includes:
receiving a seventh input to the grid number setting control under the condition that the shooting preview interface comprises the grid number setting control; in response to the seventh input, the value of M is set to the value associated with the seventh input.
For example, referring to fig. 11, in a case that the first electronic device displays a shooting preview interface, the first electronic device receives an input to the setting control or a downslide input, and displays a grid number setting control 210, where the grid number setting control 210 includes a 2 grid setting control, a 4 grid setting control, and a 9 grid setting control. The first electronic device receives a seventh input to the 2-grid setting control, and in response to the seventh input, displays the capture preview interface with 2 regions, i.e., transforms the capture preview interface with 4 regions into the capture preview interface with 2 regions. Wherein the seventh input is associated with a value of 2. In addition, the user can customize the number of grids.
In the embodiment of the invention, the user can select the number of the grids of the multi-grid video according to the requirement of the user, so that the target video meeting the requirement of the user is obtained.
Optionally, after displaying the shooting preview interface, the video shooting method further includes:
receiving an eighth input to the video duration setting control under the condition that the shooting preview interface comprises the video duration setting control; in response to an eighth input, the maximum durations of the videos respectively associated with the M regions are all set to the durations associated with the eighth input.
For example, referring to fig. 12, in a case where the first electronic device displays a shooting preview interface, the first electronic device receives an input to a setting control or receives an input to slide down, and displays a video duration setting control 211, where the video duration setting control 211 includes a 15s setting control, a 30s setting control, and a 60s setting control. And the first electronic equipment receives an eighth input of the 15s setting control, and in response to the eighth input, the maximum duration of the videos respectively associated with the four areas is set to be 15 s. Where 15s is the duration of the eighth input association. In addition, the user can customize the video time length.
In the embodiment of the invention, the user can select the duration of the multi-grid video according to the requirement of the user, so that the target video meeting the requirement of the user is obtained.
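As an illustration only, the two settings described above (the number of grids selected by the seventh input and the maximum video duration selected by the eighth input) can be modelled as follows; the GridSettings type and the preset values are assumptions of this sketch that merely mirror figs. 11 and 12.

```kotlin
// Sketch of the grid-number and duration settings from figs. 11 and 12.

data class GridSettings(val gridCount: Int = 4, val maxDurationSeconds: Int = 15)

fun applyGridCount(settings: GridSettings, requested: Int): GridSettings {
    // Preset controls offer 2, 4 and 9 grids (fig. 11), but a custom count is also allowed.
    require(requested >= 2) { "a multi-grid video needs at least 2 areas" }
    return settings.copy(gridCount = requested)          // M takes the value associated with the seventh input
}

fun applyMaxDuration(settings: GridSettings, seconds: Int): GridSettings {
    require(seconds > 0) { "duration must be positive" }
    return settings.copy(maxDurationSeconds = seconds)   // applies to the videos of all M areas
}

fun main() {
    var settings = GridSettings()
    settings = applyGridCount(settings, 2)               // seventh input: the 2-grid control
    settings = applyMaxDuration(settings, 15)            // eighth input: the 15s control
    println(settings)                                    // GridSettings(gridCount=2, maxDurationSeconds=15)
}
```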
Fig. 13 is a schematic structural diagram of a video shooting apparatus according to an embodiment of the present invention. The video shooting apparatus is applied to the first electronic device, and as shown in fig. 13, the video shooting apparatus 300 includes:
a preview interface display module 301, configured to display a shooting preview interface, where the shooting preview interface includes M regions;
a first input receiving module 302, configured to receive first inputs to N regions of the M regions, respectively, where M is a positive integer greater than or equal to 2, and N is a positive integer less than or equal to M;
a first input response module 303, configured to, in response to the first input, respectively shoot a video associated with each of the N regions to synthesize a target video, where, when the target video is in a playing state, the video associated with each of the M regions is played synchronously.
In the embodiment of the invention, the videos respectively associated with N areas among the M areas are shot so that the videos respectively associated with the M areas can be synthesized into the target video, and the target video may be a multi-grid video. Therefore, the embodiment of the invention obtains the multi-grid video right after the videos are shot, which avoids the situation in which, after shooting, videos have to be selected from the photo album in order to obtain the multi-grid video. Since no video needs to be selected, the efficiency of acquiring the multi-grid video is improved.
Optionally, the N regions comprise a first region; the video shooting apparatus 300 further includes:
the video storage module is used for storing the video associated with the first area under the condition that the shooting preview interface is quitted after the video associated with the first area is shot;
and the video information display module is used for displaying the information of the video related to the first area in the first area of the M areas displayed on the shooting preview interface under the condition that the shooting preview interface is displayed again.
Optionally, the preview interface display module 301 includes:
the second input receiving module is used for receiving second input of the video file under the condition that an interface comprising the video file is displayed, wherein the video file comprises videos related to a second area obtained by shooting through second electronic equipment, and the second area is an area except for N areas in the M areas;
and the shooting module starting module is configured to display the shooting preview interface in response to the second input.
Optionally, the video shooting apparatus 300 further includes:
the editing interface display module is used for displaying a video editing interface under the condition that each of the M areas is associated with a video, and the video editing interface comprises a music selection control;
a third input receiving module for receiving a third input to the music selection control;
the first input response module 303 includes:
and the grid video synthesis module is used for responding to a third input and synthesizing the videos respectively associated with the M areas into the target video according to the background music associated with the third input.
Optionally, the grid video composition module includes:
the fourth input receiving module is used for receiving fourth input of the target control under the condition that the video editing interface comprises the target control; the target control indicates whether to use background music to synthesize the target video or whether to use audio of videos respectively associated with the M areas to synthesize the target video;
and the fourth input response module is used for responding to the fourth input, synthesizing the videos respectively associated with the M areas into the target video according to the background music and the indication of the target control.
Optionally, the first input response module 303 includes:
the fifth input receiving module is used for receiving a fifth input selected by a third area in the N areas;
and the fifth input response module is used for responding to a fifth input, displaying a preview picture in the third area and shooting the video associated with the third area.
Optionally, the video editing interface includes M regions, and information of a video associated with a region is displayed in each of the M regions; the video shooting apparatus 300 further includes:
a sixth input receiving module, configured to receive, for two of the M regions, a sixth input dragging information of a video associated with one of the two regions to the other region;
and the sixth input response module is used for responding to the sixth input and interchanging the videos related to the two areas.
Optionally, the video shooting apparatus 300 further comprises at least one of:
the system comprises a grid number setting module, a shooting preview interface and a control module, wherein the grid number setting module is used for receiving a seventh input of a grid number setting control under the condition that the shooting preview interface comprises the grid number setting control; in response to the seventh input, setting the value of M to the value associated with the seventh input;
the video duration setting module is used for receiving an eighth input of the video duration setting control under the condition that the shooting preview interface comprises the video duration setting control; in response to an eighth input, the maximum durations of the videos respectively associated with the M regions are all set to the durations associated with the eighth input.
Fig. 14 shows a schematic hardware structure diagram of an electronic device according to an embodiment of the present invention. The electronic device 400 includes, but is not limited to: a radio frequency unit 401, a network module 402, an audio output unit 403, an input unit 404, a sensor 405, a display unit 406, a user input unit 407, an interface unit 408, a memory 409, a processor 410, and a power supply 411. Those skilled in the art will appreciate that the electronic device configuration shown in fig. 14 does not constitute a limitation of the electronic device, and the electronic device may include more or fewer components than shown, or some components may be combined, or the components may be arranged differently. In the embodiment of the present invention, the electronic device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
The display unit 406 is configured to display a shooting preview interface, where the shooting preview interface includes M regions;
a user input unit 407, configured to receive first inputs to N regions of the M regions, respectively, where M is a positive integer greater than or equal to 2, and N is a positive integer less than or equal to M;
a processor 410, configured to capture a video associated with each of the N regions in response to the first input, respectively, to synthesize a target video, where the video associated with each of the M regions is played synchronously while the target video is in a playing state.
In the embodiment of the invention, the videos respectively associated with N areas among the M areas are shot so that the videos respectively associated with the M areas can be synthesized into the target video, and the target video may be a multi-grid video. Therefore, the embodiment of the invention obtains the multi-grid video right after the videos are shot, which avoids the situation in which, after shooting, videos have to be selected from the photo album in order to obtain the multi-grid video. Since no video needs to be selected, the efficiency of acquiring the multi-grid video is improved.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 401 may be used for receiving and sending signals during a message sending and receiving process or a call process. Specifically, it receives downlink data from a base station and then delivers the downlink data to the processor 410 for processing; in addition, it transmits uplink data to the base station. Typically, the radio frequency unit 401 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. Further, the radio frequency unit 401 can also communicate with a network and other devices through a wireless communication system.
The electronic device provides wireless broadband internet access to the user via the network module 402, such as assisting the user in sending and receiving e-mails, browsing web pages, and accessing streaming media.
The audio output unit 403 may convert audio data received by the radio frequency unit 401 or the network module 402 or stored in the memory 409 into an audio signal and output as sound. Also, the audio output unit 403 may also provide audio output related to a specific function performed by the electronic apparatus 400 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 403 includes a speaker, a buzzer, a receiver, and the like.
The input unit 404 is used to receive audio or video signals. The input unit 404 may include a graphics processing unit (GPU) 4041 and a microphone 4042. The graphics processor 4041 processes image data of still pictures or video obtained by an image capture apparatus (e.g., a camera) in a video capture mode or an image capture mode. The processed image frames may be displayed on the display unit 406. The image frames processed by the graphics processor 4041 may be stored in the memory 409 (or other storage medium) or transmitted via the radio frequency unit 401 or the network module 402. The microphone 4042 may receive sound and may be capable of processing such sound into audio data. In the phone call mode, the processed audio data may be converted into a format that can be transmitted to a mobile communication base station via the radio frequency unit 401 and output.
The electronic device 400 also includes at least one sensor 405, such as light sensors, motion sensors, and other sensors. Specifically, the light sensor includes an ambient light sensor that adjusts the brightness of the display panel 4061 according to the brightness of ambient light, and a proximity sensor that turns off the display panel 4061 and/or the backlight when the electronic apparatus 400 is moved to the ear. As one type of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the posture of an electronic device (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), and vibration identification related functions (such as pedometer, tapping); the sensors 405 may also include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, etc., which will not be described in detail herein.
The display unit 406 is used to display information input by the user or information provided to the user. The display unit 406 may include a display panel 4061, and the display panel 4061 may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, or the like.
The user input unit 407 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the electronic device. Specifically, the user input unit 407 includes a touch panel 4071 and other input devices 4072. Touch panel 4071, also referred to as a touch screen, may collect touch operations by a user on or near it (e.g., operations by a user on or near touch panel 4071 using a finger, a stylus, or any suitable object or attachment). The touch panel 4071 may include two parts, a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 410, receives a command from the processor 410, and executes the command. In addition, the touch panel 4071 can be implemented by using various types such as a resistive type, a capacitive type, an infrared ray, and a surface acoustic wave. In addition to the touch panel 4071, the user input unit 407 may include other input devices 4072. Specifically, the other input devices 4072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys, switch keys, etc.), a track ball, a mouse, and a joystick, which are not described herein again.
Further, the touch panel 4071 can be overlaid on the display panel 4061, and when the touch panel 4071 detects a touch operation thereon or nearby, the touch operation is transmitted to the processor 410 to determine the type of the touch event, and then the processor 410 provides a corresponding visual output on the display panel 4061 according to the type of the touch event. Although in fig. 14, the touch panel 4071 and the display panel 4061 are two independent components to implement the input and output functions of the electronic device, in some embodiments, the touch panel 4071 and the display panel 4061 may be integrated to implement the input and output functions of the electronic device, and the implementation is not limited herein.
The interface unit 408 is an interface for connecting an external device to the electronic apparatus 400. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 408 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the electronic apparatus 400 or may be used to transmit data between the electronic apparatus 400 and an external device.
The memory 409 may be used to store software programs as well as various data. The memory 409 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. Further, the memory 409 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The processor 410 is a control center of the electronic device, connects various parts of the entire electronic device using various interfaces and lines, performs various functions of the electronic device and processes data by operating or executing software programs and/or modules stored in the memory 409 and calling data stored in the memory 409, thereby performing overall monitoring of the electronic device. Processor 410 may include one or more processing units; preferably, the processor 410 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 410.
The electronic device 400 may further include a power supply 411 (e.g., a battery) for supplying power to various components, and preferably, the power supply 411 may be logically connected to the processor 410 through a power management system, so as to implement functions of managing charging, discharging, and power consumption through the power management system.
In addition, the electronic device 400 includes some functional modules that are not shown, and are not described in detail herein.
An embodiment of the present invention further provides an electronic device, which includes a processor, a memory, and a computer program stored in the memory and capable of running on the processor, where the computer program, when executed by the processor, implements each process of the video shooting method embodiment, and can achieve the same technical effect, and details are not repeated here to avoid repetition.
The embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements each process of the video shooting method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (11)

1. A video shooting method is applied to a first electronic device and comprises the following steps:
displaying a shooting preview interface, wherein the shooting preview interface comprises M areas;
respectively receiving first inputs to N regions in the M regions, wherein M is a positive integer greater than or equal to 2, and N is a positive integer less than or equal to M;
and responding to the first input, respectively shooting videos related to each of the N areas to synthesize a target video, wherein the videos related to each of the N areas are obtained by shooting through the same camera, and when the target video is in a playing state, the videos related to each of the M areas are synchronously played.
2. The method of claim 1, wherein the N regions comprise a first region;
the method further comprises the following steps:
after the video associated with the first area is shot, saving the video associated with the first area under the condition that the shooting preview interface is quitted;
and under the condition that the shooting preview interface is displayed again, displaying information of the video related to the first area in the first area of the M areas displayed by the shooting preview interface.
3. The method of claim 1, wherein displaying the capture preview interface comprises:
receiving a second input to a video file in the case of displaying an interface including the video file, wherein the video file includes a video associated with a second area captured by a second electronic device, and the second area is an area other than the N areas from among the M areas;
and responding to the second input, and displaying the shooting preview interface.
4. The method of claim 1, further comprising:
displaying a video editing interface under the condition that each of the M regions is associated with a video, wherein the video editing interface comprises a music selection control;
receiving a third input to the music selection control;
the composite target video includes:
in response to the third input, synthesizing the videos respectively associated with the M areas into the target video according to the background music associated with the third input.
5. The method according to claim 4, wherein the synthesizing the videos respectively associated with the M regions into the target video according to the background music associated with the third input comprises:
receiving a fourth input to a target control if the video editing interface includes the target control; wherein the target control indicates whether to synthesize the target video using the background music or whether to synthesize the target video using the audio of the video respectively associated with the M regions;
and responding to the fourth input, and synthesizing the videos respectively associated with the M areas into the target video according to the background music and the indication of the target control.
6. The method of claim 1,
the respectively shooting the video associated with each of the N areas comprises:
receiving a fifth input selected for a third region of the N regions;
displaying a preview screen in the third area in response to the fifth input, and capturing video associated with the third area.
7. The method of claim 4, wherein the video editing interface comprises the M regions, and wherein information of a video associated with a region is displayed in each of the M regions;
the method further comprises the following steps:
for two of the M regions, receiving a sixth input dragging information of a video associated with one of the two regions to another region;
in response to the sixth input, interchanging the videos associated with the two regions.
8. The method of claim 1, wherein after the displaying the capture preview interface, the method further comprises at least one of:
receiving a seventh input to the grid number setting control under the condition that the shooting preview interface comprises the grid number setting control; in response to the seventh input, setting the value of M to the value associated with the seventh input;
receiving an eighth input to the video duration setting control under the condition that the shooting preview interface comprises the video duration setting control; in response to the eighth input, setting the maximum durations of the videos respectively associated with the M regions to the duration associated with the eighth input.
9. A video shooting device applied to a first electronic device, the device comprising:
the preview interface display module is used for displaying a shooting preview interface, wherein the shooting preview interface comprises M areas;
a first input receiving module, configured to receive first inputs to N regions of the M regions, respectively, where M is a positive integer greater than or equal to 2, and N is a positive integer less than or equal to M;
the first input response module is used for responding to the first input, respectively shooting videos related to each of the N areas to synthesize a target video, wherein the videos related to each of the N areas are obtained through shooting by the same camera, and when the target video is in a playing state, the videos related to each of the M areas are synchronously played.
10. An electronic device, comprising a processor, a memory and a computer program stored on the memory and executable on the processor, the computer program, when executed by the processor, implementing the steps of the video capturing method as claimed in any one of claims 1 to 8.
11. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the video capturing method according to any one of claims 1 to 8.
CN201911317151.9A 2019-12-19 2019-12-19 Video shooting method and device, electronic equipment and medium Active CN111050070B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911317151.9A CN111050070B (en) 2019-12-19 2019-12-19 Video shooting method and device, electronic equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911317151.9A CN111050070B (en) 2019-12-19 2019-12-19 Video shooting method and device, electronic equipment and medium

Publications (2)

Publication Number Publication Date
CN111050070A CN111050070A (en) 2020-04-21
CN111050070B true CN111050070B (en) 2021-11-09

Family

ID=70237735

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911317151.9A Active CN111050070B (en) 2019-12-19 2019-12-19 Video shooting method and device, electronic equipment and medium

Country Status (1)

Country Link
CN (1) CN111050070B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111464863A (en) * 2020-05-29 2020-07-28 杭州情咖网络技术有限公司 Background music synthesis method and device and electronic equipment
CN112004032B (en) 2020-09-04 2022-02-18 北京字节跳动网络技术有限公司 Video processing method, terminal device and storage medium
CN112637507B (en) * 2020-12-30 2023-03-28 维沃移动通信有限公司 Shooting method, shooting device, electronic equipment and readable storage medium
CN113079419A (en) * 2021-03-30 2021-07-06 北京字跳网络技术有限公司 Video processing method of application program and electronic equipment
CN113395588A (en) * 2021-06-23 2021-09-14 北京字跳网络技术有限公司 Video processing method, device, equipment and storage medium
CN113794831B (en) * 2021-08-13 2023-08-25 维沃移动通信(杭州)有限公司 Video shooting method, device, electronic equipment and medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2012139343A1 (en) * 2011-04-15 2012-10-18 海尔集团公司 Play control system and method
CN107872623B (en) * 2017-12-22 2019-11-26 维沃移动通信有限公司 A kind of image pickup method, mobile terminal and computer readable storage medium
CN109348155A (en) * 2018-11-08 2019-02-15 北京微播视界科技有限公司 Video recording method, device, computer equipment and storage medium
CN110086998B (en) * 2019-05-27 2021-04-06 维沃移动通信有限公司 Shooting method and terminal

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20150068838A (en) * 2013-12-12 2015-06-22 엘지전자 주식회사 Electronic device and method for controlling of the same
CN106658114A (en) * 2016-11-30 2017-05-10 乐视控股(北京)有限公司 Video playing method and device
CN107959755A (en) * 2017-11-27 2018-04-24 维沃移动通信有限公司 A kind of photographic method and mobile terminal
CN108093171A (en) * 2017-11-30 2018-05-29 努比亚技术有限公司 A kind of photographic method, terminal and computer readable storage medium
CN107995429A (en) * 2017-12-22 2018-05-04 维沃移动通信有限公司 A kind of image pickup method and mobile terminal
CN110383820A (en) * 2018-05-07 2019-10-25 深圳市大疆创新科技有限公司 Method for processing video frequency, system, the system of terminal device, movable fixture
CN110072070A (en) * 2019-03-18 2019-07-30 华为技术有限公司 A kind of multichannel kinescope method and equipment

Also Published As

Publication number Publication date
CN111050070A (en) 2020-04-21

Similar Documents

Publication Publication Date Title
CN111050070B (en) Video shooting method and device, electronic equipment and medium
CN107995429B (en) Shooting method and mobile terminal
CN108668083B (en) Photographing method and terminal
CN110740259B (en) Video processing method and electronic equipment
CN108495029B (en) Photographing method and mobile terminal
CN110365907B (en) Photographing method and device and electronic equipment
CN110007837B (en) Picture editing method and terminal
CN109361867B (en) Filter processing method and mobile terminal
CN111010610B (en) Video screenshot method and electronic equipment
US11778304B2 (en) Shooting method and terminal
CN109102555B (en) Image editing method and terminal
CN109683777B (en) Image processing method and terminal equipment
CN109474787B (en) Photographing method, terminal device and storage medium
CN111147779B (en) Video production method, electronic device, and medium
CN109246474B (en) Video file editing method and mobile terminal
CN108984143B (en) Display control method and terminal equipment
CN108924422B (en) Panoramic photographing method and mobile terminal
CN111177420A (en) Multimedia file display method, electronic equipment and medium
CN111464746B (en) Photographing method and electronic equipment
CN111182211B (en) Shooting method, image processing method and electronic equipment
CN110855921A (en) Video recording control method and electronic equipment
CN110086998B (en) Shooting method and terminal
CN110022445B (en) Content output method and terminal equipment
CN110865752A (en) Photo viewing method and electronic equipment
CN111064888A (en) Prompting method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant