CN113556482A - Video processing method, device and system based on multi-camera shooting - Google Patents

Video processing method, device and system based on multi-camera shooting Download PDF

Info

Publication number
CN113556482A
CN113556482A (application CN202010333401.4A)
Authority
CN
China
Prior art keywords
video
camera
template
shot
editing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010333401.4A
Other languages
Chinese (zh)
Inventor
卢昳
江运柱
杨怀渊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alibaba Group Holding Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd filed Critical Alibaba Group Holding Ltd
Priority to CN202010333401.4A priority Critical patent/CN113556482A/en
Publication of CN113556482A publication Critical patent/CN113556482A/en
Pending legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/04 Synchronising
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265 Mixing
    • H04N5/278 Subtitling
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/90 Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Television Signal Processing For Recording (AREA)
  • Studio Devices (AREA)

Abstract

The invention discloses a video processing method, device and system based on multi-camera shooting. The method comprises the following steps: starting the corresponding camera according to a selected display template, and displaying the video shot by each camera in its corresponding area of the display template; and, when shooting is finished, editing the shot video according to a determined editing template and outputting the edited video, where the editing templates include the display template. The method and device allow a user to conveniently obtain a composite of videos shot synchronously by multiple cameras, with the video frames of the composite synchronized and coherent.

Description

Video processing method, device and system based on multi-camera shooting
Technical Field
The invention relates to the technical field of multimedia, in particular to a video processing method, a device and a system based on multi-camera shooting.
Background
A Vlog, short for video blog or video log, is a variant of the blog that emphasizes timeliness: the Vlog author keeps a personal blog in video rather than in text or photos, and uploads it to share with online friends.
Bloggers often encounter a shooting pain point: when shooting a tourist attraction to check in, the blogger can shoot it only with a single lens. To appear in the scene himself, the blogger must shoot a separate video of himself and then merge and clip the two videos. This operation is cumbersome, the blogger's video cannot be synchronized with the attraction, and inconsistent pictures easily result.
Disclosure of Invention
In view of the above, the present invention provides a video processing method, apparatus and system based on multi-camera shooting that overcome, or at least partially solve, the above problems.
In a first aspect, an embodiment of the present invention provides a method for processing a video shot based on multiple cameras, including:
starting the corresponding camera according to the selected display template, and displaying the video shot by the camera in its corresponding area of the display template;
and, when shooting of the video is finished, editing the shot video according to a determined editing template and outputting the edited video, wherein the editing templates include the display template.
In some optional embodiments, the editing template is selected from a preset editing-template library.
In some optional embodiments, the display template can be switched, according to user selection, during video shooting.
In some optional embodiments, if the editing template is a splicing template, editing the shot videos according to the determined editing template and outputting the result specifically includes:
splicing the shot videos into one video according to the determined splicing template and outputting it.
In some optional embodiments, splicing the shot videos into one video according to the determined splicing template and outputting it includes:
aligning the video frames of the shot videos by shooting time and determining multiple groups of time-consistent video frames;
mapping the video frames of each group to the corresponding areas of the determined splicing template and splicing them into one new video frame;
and concatenating the new video frames into one video in time order and outputting it.
In some optional embodiments, concatenating the new video frames into a video in time order further comprises:
performing at least one of the following on the new video frames:
adding audio to selected new video frames;
adding text to selected new video frames;
adding pictures to selected new video frames;
deleting unwanted new video frames.
In some optional embodiments, editing the shot video according to the determined editing template and outputting the result specifically includes:
acquiring locally stored or network-side video material, and splicing the material and at least one shot video into one video according to the determined splicing template.
In some optional embodiments, if the editing template is a serial (concatenation) template, editing the shot video according to the determined editing template and outputting the result specifically includes:
determining the video frames to be retained in the shot videos, concatenating them into one video in time order, and outputting it.
In some optional embodiments, determining the video frames to be retained in the shot video specifically includes determining them by at least one of the following methods:
determining the frames to be retained according to a preset rule;
determining selected frames of a video as the frames to be retained;
determining unselected frames of a video as the frames to be retained.
In some optional embodiments, determining the video frames to be retained in the shot videos specifically includes:
determining at least one selectable video and one non-selectable video from the shot videos;
determining a selection time period from operation information of a clipping frame on at least one selectable video, determining the frames of the selectable video within the selection time period as frames to be retained, and determining the frames of the non-selectable video outside the selection time period as frames to be retained.
In some optional embodiments, after the corresponding camera is started according to the selected display template, the method further includes:
pausing shooting of the camera according to a received pause instruction;
if an instruction to continue shooting is received, continuing to display the video shot by the camera in the corresponding area of the display template;
if an exit instruction is received, exiting shooting;
and if a back-delete instruction is received, deleting the corresponding video frames of the current video according to the instruction.
In a second aspect, an embodiment of the present invention provides a video processing method based on multi-camera shooting, including:
starting the front camera and the rear camera, and displaying the front video shot by the front camera and the rear video shot by the rear camera in their corresponding areas of the selected display template;
and editing the front video and the rear video into one video according to the determined editing template and outputting it.
In some optional embodiments, if the editing template is a splicing template, editing the front video and the rear video into one video according to the determined editing template and outputting it specifically includes:
splicing the front video shot by the front camera and the rear video shot by the rear camera into one video according to the splicing template and outputting it.
In some optional embodiments, if the editing template is a serial template, editing the front video and the rear video into one video according to the determined editing template and outputting it specifically includes:
determining the video frames to be retained in the front video and the rear video and concatenating them into one video in time order.
In some optional embodiments, determining the video frames to be retained in the front video and the rear video specifically includes:
determining frames selected from the rear video as frames to be retained;
and determining the time period corresponding to the selected frames, and determining the frames of the front video outside that time period as frames to be retained.
In some optional embodiments, determining the video frames to be retained in the front video and the rear video specifically includes:
determining a selection time period from operation information of a clipping frame on the rear video, and determining the frames of the rear video within the selection time period as frames to be retained;
and determining the frames of the front video outside the selection time period as frames to be retained.
In a third aspect, an embodiment of the present invention provides a live broadcast method based on multi-camera shooting, including:
the shooting client starts the corresponding camera according to the selected display template and displays the video shot by the camera in its corresponding area of the display template;
and, at a preset interval, edits the video shot by the camera during the current interval into one video according to the display template and sends the edited video to a playing client for playing.
In some optional embodiments, if the display template is a splicing template, editing the video shot by the cameras during the current interval into one video according to the display template specifically includes:
splicing the videos shot by the cameras during the current interval into one video according to the splicing template.
In some optional embodiments, if the display template is a serial template, editing the video shot by the cameras during the current interval into one video according to the display template specifically includes:
determining the video frames to be retained in the video shot during the current interval and concatenating them into one video in time order.
In a fourth aspect, an embodiment of the present invention provides a video communication method based on multi-camera shooting, including:
the first communication client starts the corresponding camera according to the selected display template and displays the video shot by the camera in its corresponding area of the display template;
and, at a preset interval, edits the video shot by the camera during the current interval into one video according to the display template and sends the edited video to a second communication client for playing.
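The interval-based flow shared by the live-broadcast and video-communication aspects — cut the captured stream into fixed-length intervals and edit each interval's frames into one segment before sending it to the playing client — can be sketched as follows. This is an illustrative sketch only: the function name, the half-open interval convention, and the pluggable `edit` callback are assumptions, not taken from the patent.

```python
def segment_and_edit(frames, interval, edit):
    """Cut a time-sorted stream of (timestamp, frame) pairs into fixed-length
    intervals [0, interval), [interval, 2*interval), ... and apply `edit` to
    each interval's frames, yielding one edited segment per elapsed interval.

    Note: intervals with no frames still yield (empty) segments, so segment
    boundaries stay aligned with wall-clock time on the playing side.
    """
    segments = []
    current, cutoff = [], interval
    for ts, frame in frames:
        # close out every interval that ended before this frame's timestamp
        while ts >= cutoff:
            segments.append(edit(current))
            current, cutoff = [], cutoff + interval
        current.append((ts, frame))
    if current:
        segments.append(edit(current))
    return segments
```

With `edit` set to a splicing or concatenation step (see the first aspect), each returned segment is one interval's edited video, ready to send to the playing client.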
In a fifth aspect, an embodiment of the present invention provides a video processing apparatus based on multi-camera shooting, including:
the shooting module is used for starting the corresponding camera according to the selected display template and displaying the video shot by the camera in its corresponding area of the display template;
and the editing module is used for editing the video shot by the shooting module according to a determined editing template and outputting it, the editing templates including the display template.
In a sixth aspect, an embodiment of the present invention provides a video processing apparatus based on front and back camera shooting, including:
the shooting module is used for starting the front camera and the rear camera and displaying the front video shot by the front camera and the rear video shot by the rear camera in their corresponding areas of the selected display template;
and the editing module is used for editing the front video and the rear video shot by the shooting module into one video according to the determined editing template and outputting it.
In a seventh aspect, an embodiment of the present invention provides a multimedia distribution system, including a shooting client and at least one browsing client;
the shooting client is used for obtaining an edited video according to the method of any one of claims 1 to 16;
and the browsing client is used for playing the received edited video.
In an eighth aspect, embodiments of the present invention provide a computer-readable storage medium having stored thereon computer instructions, which when executed by a processor, implement the above-described method.
The technical scheme provided by the embodiment of the invention has the beneficial effects that at least:
according to the video processing method based on multi-camera shooting provided by the embodiment of the invention, the corresponding camera is started according to the selected display template, and the video shot by the camera is displayed in a corresponding area in the display template in real time, so that a user can flexibly select the display template and browse the video displayed according to the display template in real time in the shooting process; in the editing process after the video shooting, a user can flexibly set an editing template, and the video shot by the camera is edited into a video to be output according to the editing template. The method and the device have the advantages that the user can conveniently acquire the composite video of the videos synchronously shot by the multiple cameras, the video editing template is flexibly set and has various forms, and the synchronization and the harmony of the video frames in the composite video are realized.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
The technical solution of the present invention is further described in detail by the accompanying drawings and embodiments.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings:
fig. 1 is a flowchart of a video processing method based on multi-camera shooting according to an embodiment of the present invention;
fig. 2 is a flowchart of a specific implementation of a video processing method based on multi-camera shooting according to a second embodiment of the present invention;
fig. 3 is a flowchart of a specific implementation of a video processing method based on multi-camera shooting according to a third embodiment of the present invention;
fig. 4 is a flowchart illustrating a detailed implementation of a video processing method according to a fourth embodiment of the present invention;
fig. 5 is a flowchart of a specific implementation of a video processing method based on front and rear camera shooting in the fifth embodiment of the present invention;
FIG. 6 is an exemplary diagram of a display template in an embodiment of the present invention;
FIG. 7 is a diagram illustrating an exemplary method for determining a video frame to be retained according to an embodiment of the present invention;
FIG. 8 is an exemplary diagram of a personalized video distribution setting in an embodiment of the present invention;
fig. 9 is a flowchart of a specific implementation of a live broadcast method based on multi-camera shooting in a sixth embodiment of the present invention;
fig. 10 is a flowchart illustrating a specific implementation of a video communication method based on multi-camera shooting according to a seventh embodiment of the present invention;
fig. 11 is a schematic structural diagram of a video processing apparatus based on multi-camera shooting according to an embodiment of the present invention;
fig. 12 is a schematic structural diagram of a video processing apparatus based on front and rear camera shooting according to an embodiment of the present invention;
FIG. 13 is a schematic structural diagram of a multimedia distribution system according to an embodiment of the present invention;
fig. 14 is a schematic structural diagram of a live broadcast system based on multi-camera shooting in an embodiment of the present invention;
fig. 15 is a schematic structural diagram of a video communication system based on multi-camera shooting according to an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
In order to solve the problem of poor video synchronization and coherence in the prior art, embodiments of the present invention provide a video processing method, apparatus, and system based on multi-camera shooting, which enable a user to conveniently obtain a composite of videos shot synchronously by multiple cameras, with the video frames of the composite synchronized and coherent.
Example one
An embodiment of the present invention provides a video processing method based on multi-camera shooting, a flow of which is shown in fig. 1, and the method includes the following steps:
step S11: and starting the corresponding camera according to the selected display template, and corresponding the video shot by the camera to the corresponding area in the display template for display.
The corresponding camera is started according to the display template selected by the user. Specifically, the correspondence between display templates and cameras may be unique: one display template corresponds to at least one determined camera, and the camera to start can then be determined directly from the selected display template and the correspondence. Optionally, the correspondence may be non-unique: one display template corresponds to several camera combinations, each combination containing at least one camera. In that case all candidate cameras are determined from the selected display template and presented to the user for further selection, and the cameras the user selects are started.
Specifically, when only one camera is started, the shot video can be displayed directly according to the display template; when several cameras are started, the video shot by each camera is displayed in its corresponding area of the display template. When the preview during shooting includes videos from several cameras, the videos are only overlaid for display: they are not composited, and each video remains independent.
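The region mapping described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the `Region` layout, the camera identifiers `front`/`rear`, and the split-screen template are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Region:
    """A rectangular display area within a display template (pixels)."""
    x: int
    y: int
    w: int
    h: int

# A display template maps camera IDs to screen regions; this hypothetical
# template shows two cameras side by side.
SPLIT_TEMPLATE = {
    "front": Region(0, 0, 540, 960),
    "rear": Region(540, 0, 540, 960),
}

def place_previews(template, frames):
    """Pair each camera's latest preview frame with its template region.

    The frames are only overlaid for on-screen preview -- they are not
    composited here, so each camera's video stays independent.
    """
    return [(template[cam], frame)
            for cam, frame in frames.items() if cam in template]

placements = place_previews(SPLIT_TEMPLATE,
                            {"front": "frame_f0", "rear": "frame_r0"})
```

Switching display templates mid-shoot then amounts to swapping the template dict; the capture pipeline is untouched.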
In an alternative embodiment, the display template may be switched, according to user selection, at any time during shooting. The switchable display templates are the candidates in a preset display-template library.
In an optional embodiment, after the corresponding camera is started according to the selected display template, the method further includes pausing shooting according to a received pause instruction. After shooting is paused, different operations are performed depending on the instruction received next:
(1) If an instruction to continue shooting is received, the camera continues shooting.
(2) If an exit instruction is received, shooting exits. Whether the user really intends to exit can be further confirmed: if so, the currently shot video is discarded; if not, it is further determined whether the user wants to continue shooting, remain paused, or perform another operation.
(3) If a back-delete instruction is received, the corresponding video frames of the current video are deleted according to the instruction.
(4) If an end-shooting instruction is received, shooting is determined to be complete and the subsequent step S12 is performed.
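The pause/continue/back-delete/exit/finish handling above can be sketched as a small session state machine. The class and method names below are invented for this illustration; the patent does not prescribe an implementation.

```python
class ShootSession:
    """Minimal sketch of a shooting session that reacts to the instructions
    described above (names are illustrative, not from the patent)."""

    def __init__(self):
        self.frames = []
        self.paused = False
        self.finished = False

    def capture(self, frame):
        # Frames are recorded only while actively shooting.
        if not self.paused and not self.finished:
            self.frames.append(frame)

    def handle(self, instruction, n=0):
        if instruction == "pause":
            self.paused = True
        elif instruction == "continue":
            self.paused = False
        elif instruction == "back_delete":
            # Back-delete: drop the last n captured frames.
            del self.frames[len(self.frames) - n:]
        elif instruction == "exit":
            # Exit discards the currently shot video (after confirmation).
            self.frames.clear()
            self.finished = True
        elif instruction == "finish":
            # End shooting; the editing step (S12) runs next.
            self.finished = True
```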
Step S12: finish shooting the video, then edit the shot video according to the determined editing template and output it.
Specifically, the editing templates include the display templates, and the editing template is selected from a preset editing-template library.
After shooting is finished, the selected editing template can be obtained; alternatively, the editing template can be determined from the display template: if the user's selection of an editing template is received, the editing template is determined by that selection; if no selection is received, the current display template is used as the editing template.
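The fallback rule above — use the user's selection when one arrives, otherwise reuse the current display template — is simple enough to state in a line (the function name is illustrative):

```python
def resolve_edit_template(user_selection, display_template):
    """Return the user's chosen editing template, falling back to the
    current display template when no selection was received."""
    return user_selection if user_selection is not None else display_template
```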
Specifically, editing the shot video according to the determined editing template and outputting it covers at least the following three cases:
(1) When several videos were shot and the editing template is a splicing template, the shot videos are spliced into one video according to the determined splicing template and output.
(2) When several videos were shot and the editing template is a serial template, the video frames to be retained are determined in the shot videos and concatenated into one video in time order.
(3) When a single video was shot, locally stored or network-side video material is acquired, and the material and the shot video are spliced into one video according to the determined splicing template.
Specifically, the video material may be video, audio, images, and the like.
Optionally, when several videos were shot, locally stored or network-side video material may also be acquired while splicing or concatenating the shot videos, and spliced together with at least one shot video into one video according to the determined splicing template.
The specific video editing method is described in detail in the following embodiments.
According to the video processing method based on multi-camera shooting provided by the embodiment of the invention, the corresponding camera is started according to the selected display template, and the video shot by the camera is displayed in real time in its corresponding area of the display template, so that during shooting the user can flexibly select a display template and browse the video as displayed by that template. During editing after shooting, the user can flexibly set an editing template, and the shot videos are edited into one video according to it and output. The user can thus conveniently obtain a composite of videos shot synchronously by multiple cameras, the editing templates are flexible and varied, and the video frames of the composite video are synchronized and coherent.
Example two
The second embodiment of the present invention provides a video processing method based on multi-camera shooting, the flow of which is shown in fig. 2, and the method includes the following steps:
step S21: and starting a plurality of corresponding cameras according to the selected display template, and corresponding the videos shot by the cameras to corresponding areas in the display template for display.
Step S22: finish shooting the videos and determine the splicing template.
Specifically, the splicing template may be determined according to the user's selection or according to the current display template; the user can flexibly choose the splicing template.
Step S23: align the video frames of the shot videos by shooting time and determine multiple groups of time-consistent video frames.
In step S21, before the corresponding cameras are started according to the selected display template, the clocks of the cameras need to be unified; specifically, each camera's time may be set to the system time. Preferably, the shooting frame rate of each camera is set to the same value, so that every camera captures the same number of frames per second. The frames of the shot videos are then aligned by shooting time, and multiple groups of time-consistent frames are determined: the frames in each group were shot by different cameras at the same moment, which ensures that the scenes in the spliced frames are synchronized.
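The alignment step can be sketched as grouping frames by timestamp and keeping only the timestamps for which every camera contributed a frame. This sketch presumes what the paragraph above requires (unified clocks and identical frame rates); the data layout and names are illustrative.

```python
from collections import defaultdict

def group_frames_by_time(videos):
    """Group frames from several cameras into time-consistent frame groups.

    videos: {camera_id: [(timestamp, frame), ...]}
    Returns a time-sorted list of (timestamp, {camera_id: frame}) pairs.
    Timestamps missing a frame from any camera are dropped, so every group
    holds exactly one frame per camera, all shot at the same moment.
    """
    buckets = defaultdict(dict)
    for cam, frames in videos.items():
        for ts, frame in frames:
            buckets[ts][cam] = frame
    return [(ts, group) for ts, group in sorted(buckets.items())
            if len(group) == len(videos)]
```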
Step S24: map the video frames of each group to the corresponding areas of the determined splicing template and splice them into one new video frame.
Optionally, a new video frame need not contain only the frames shot by the cameras; at least one of the following may also be performed on the new video frames, either while splicing or after all splicing is complete:
adding audio to selected new video frames;
adding text to selected new video frames;
adding pictures to selected new video frames;
deleting unwanted new video frames.
This makes the splicing modes more flexible and diverse: while splicing the videos shot by the cameras together, the user can also, as selected, obtain locally stored video material or download material from the network, add background music to the video, re-dub it, splice in existing videos or pictures, and so on, and can delete new video frames that are not wanted.
Step S25: concatenate the new video frames into one video in time order and output it.
After splicing is complete, the spliced new video frames are concatenated in time order into one video and output.
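Steps S24 and S25 can be sketched together: each time-consistent frame group is pasted into the regions of a splicing template to form one new frame, and the new frames are concatenated in time order. This is a minimal sketch using nested lists as grayscale frames; the template format of `(x, y)` offsets is an assumption for illustration.

```python
def stitch_group(template, group, out_h, out_w):
    """Splice one frame group into a single new frame by pasting each
    camera's frame (a list of pixel rows) at its (x, y) template offset."""
    canvas = [[0] * out_w for _ in range(out_h)]
    for cam, frame in group.items():
        x, y = template[cam]
        for r, row in enumerate(frame):
            for c, px in enumerate(row):
                canvas[y + r][x + c] = px
    return canvas

def stitch_video(template, frame_groups, out_h, out_w):
    """Splice every (timestamp, group) pair and concatenate the resulting
    new frames in shooting-time order (step S25)."""
    ordered = sorted(frame_groups, key=lambda pair: pair[0])
    return [stitch_group(template, group, out_h, out_w)
            for _, group in ordered]
```

In a real pipeline the canvas would be an image buffer and the paste a blit, but the structure — group, paste per region, concatenate by time — is the same.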
EXAMPLE III
The third embodiment of the present invention provides a video processing method based on multi-camera shooting, the flow of which is shown in fig. 3, and the method includes the following steps:
step S31: and starting a plurality of corresponding cameras according to the selected display template, and corresponding the videos shot by the cameras to corresponding areas in the display template for display.
Step S32: finish shooting the videos and determine the serial template.
Specifically, the serial template may be determined according to the user's selection, or according to the current display template.
Step S33: and determining the video frames to be reserved in the shot video.
In one embodiment, the method may include determining a video frame to be retained from the captured video by at least one of:
(1) and determining the video frames to be reserved in the video according to a preset rule.
Specifically, it may be preset that, in a captured video, a video frame in which video is retained in each time period, and a video frame of the video in the time period is determined as a video frame to be retained.
(2) The selected video frame in the video is determined as the video frame to be retained.
The video frames to be retained may be determined according to the user's selection: specifically, the frames within a selected time period, or individually selected frames, may be determined as the video frames to be retained.
(3) Determining the video frames which are not selected in the video as the video frames to be retained.
Specifically, according to the selected time period or the specifically selected video frames, the unselected video frames in the video are determined as the video frames to be retained. Optionally, for one video A, the video frames of the other videos are determined to be retained according to the user's selection; the time period covered by those selected frames is determined, and the video frames of video A outside that time period are determined as the video frames to be retained.
For example, in one embodiment, the method may include determining at least one selectable video and one non-selectable video from the captured videos; according to the operation information of at least one cutting frame of the selectable video, determining a selection time period, determining video frames in the selection time period of the selectable video as video frames to be reserved, and determining video frames outside the selection time period in the non-selectable video as the video frames to be reserved.
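As an illustrative sketch (not part of the claimed method; the frame dictionaries and field names are hypothetical), the selectable/non-selectable retention rule can be expressed over timestamped frames: frames of the selectable video inside any cutting-frame period are kept, and frames of the non-selectable video outside all periods are kept.

```python
def retained_frames(selectable, non_selectable, periods):
    """periods: (start, end) time spans set via cutting frames on the selectable video."""
    def selected(t):
        return any(start <= t <= end for start, end in periods)
    # keep selectable frames inside a period, non-selectable frames outside all periods
    keep = [f for f in selectable if selected(f["t"])]
    keep += [f for f in non_selectable if not selected(f["t"])]
    return sorted(keep, key=lambda f: f["t"])

rear = [{"t": float(t), "src": "rear"} for t in range(5)]    # selectable video
front = [{"t": float(t), "src": "front"} for t in range(5)]  # non-selectable video
kept = retained_frames(rear, front, periods=[(1.0, 2.0)])
```

With the one-second period (1.0, 2.0), the rear video contributes its frames inside the period and the front video fills the time outside it, so the concatenated result covers the whole timeline without conflicts.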
After determining the video frame to be reserved, at least one of the following items can be executed on the video frame to be reserved:
adding audio to the selected video frame to be preserved;
adding characters to the selected video frame to be reserved;
adding pictures to the selected video frames to be reserved;
the video frames to be retained are pruned.
Step S34: the video frames to be reserved are connected in series into a video according to the time sequence and then output.
The video frames are connected in shooting-time order, so that the frames in the concatenated video join smoothly. Optionally, the shooting times of the video frames may be intermittent; because the frames are concatenated in chronological order, no time conflicts arise and the connectivity of the frames is still achieved.
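The chronological concatenation of step S34 can be sketched minimally as a timestamp sort, assuming hypothetical frame dictionaries; gaps between timestamps are allowed, and ordering by shooting time alone prevents time conflicts:

```python
def concatenate_by_time(*clips):
    # flatten the retained frames of all clips and order them by shooting time;
    # gaps are fine, and overlaps cannot occur because each instant appears once
    return sorted((f for clip in clips for f in clip), key=lambda f: f["t"])

front = [{"t": 0.0, "src": "front"}, {"t": 3.0, "src": "front"}]
rear = [{"t": 1.5, "src": "rear"}]
video = concatenate_by_time(front, rear)
```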
Embodiment Four
The fourth embodiment of the present invention provides a video processing method, a flow of which is shown in fig. 4, and the method includes the following steps:
step S41: and starting the corresponding camera according to the selected display template, and displaying the video shot by the camera according to the display template.
Step S42: and finishing the shooting of the video and determining a splicing template.
Specifically, the splice template may be determined by selection.
Step S43: and acquiring a locally stored or network-side video material, splicing the video material with the shot video into a video according to the determined splicing template, and outputting the video.
Specifically, the acquired video material may be a video, an audio clip, a picture, or the like, or may be a piece of text. The shot video may be decoded to obtain a plurality of video frames; the video material is then spliced onto the selected video frames according to the splicing template to obtain new video frames, and the new video frames are connected in series according to the time sequence.
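The decode-splice-reserialize pipeline just described can be sketched as follows. This is a simplified illustration under assumed names: `material` stands in for any acquired local or network asset (picture, audio clip, caption text), and attaching it to a frame dictionary stands in for the actual template-driven compositing.

```python
def splice_material(frames, material, selected_indices):
    """Decode -> splice material onto the selected frames -> re-serialize by time."""
    new_frames = []
    for i, frame in enumerate(frames):
        g = dict(frame)                 # new frame derived from the decoded frame
        if i in selected_indices:
            g["material"] = material    # e.g. a picture, audio clip, or caption
        new_frames.append(g)
    return sorted(new_frames, key=lambda f: f["t"])

shot = [{"t": i / 30} for i in range(4)]
out = splice_material(shot, material="beach.jpg", selected_indices={1, 2})
```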
Although the video processing method provided by the fourth embodiment of the invention also shoots in single-camera mode, the selected video material can be spliced in, according to the splicing template selected by the user, while the video is output after shooting ends. The user does not need to first obtain the original shot video and then splice it with the selected material by other means, so the operation is more convenient, labor and time are saved, and the user experience is improved.
Embodiment Five
The fifth embodiment of the invention provides a video processing method based on front and back camera shooting, which comprises the following steps: starting the front camera and the rear camera, and enabling the front video shot by the front camera and the rear video shot by the rear camera to correspond to corresponding areas in the selected display template for display; and editing the front video shot by the front camera and the rear video shot by the rear camera into a video according to the determined editing template and outputting the video. Referring to fig. 5, a specific implementation flow may include the following steps:
step S51: and starting the corresponding camera according to the selected display template.
Specifically, referring to fig. 6, the display templates offered to the user may include each template in fig. 6. Display template A is a single-camera display: if the user selects display template A, whether to shoot with the front camera or the rear camera may be further determined; for example, if the user makes no selection, the rear camera is used by default, and if the user selects the front camera, shooting with the front camera is determined. Whether to shoot in landscape or portrait mode is determined according to the current orientation of the screen of the terminal held by the user.
If the user selects display template B, C, D, or E, picture-in-picture display is performed, and it is determined that the front and rear cameras need to be started synchronously for shooting. The video shot by the front camera or the video shot by the rear camera may be displayed in the corresponding area of the display template by default; if a different selection by the user is obtained, the correspondence between the areas of the display template and the videos shot by the front and rear cameras is determined according to the user's selection. The subsequent editing templates corresponding to display templates B, C, D, and E default to the splicing template. Optionally, the layout corresponding to display template D can also be arranged top and bottom, and the layout corresponding to display template E can also be arranged lower-left and upper-right.
And if the user selects the display template F, determining that the front camera and the rear camera need to be synchronously started for shooting. And the subsequent editing template corresponding to the display template F is defaulted to be a series template.
Optionally, after it is determined according to the user's selection that the front and rear cameras are to be started, whether simultaneous front-and-rear dual-camera shooting is supported can be detected; if it is not supported, whether the user chooses single-camera shooting is then determined.
Optionally, in the shooting process, the user can switch the display template at any time; if the current shooting is carried out by two cameras, the user can also set the shooting to be carried out by a single camera at any time, and the shooting can be realized by closing one area in the display template.
In an optional embodiment, after starting the corresponding camera according to the selected display template, the method further includes pausing shooting of the camera according to the received pause instruction; after the shooting of the camera is suspended, the following different operations are executed according to the different subsequently received instructions:
(1) If an instruction to continue shooting is received:
shooting by the camera continues.
(2) If an exit instruction is received:
shooting by the camera is exited. Whether the user really wants to exit may be further confirmed; if yes, the currently shot video is discarded; if not, it is further determined whether the user needs to continue shooting, pause shooting, or perform another operation.
(3) If a back-deletion instruction is received:
the corresponding video frames in the current video are deleted according to the back-deletion instruction.
(4) If an instruction to end shooting is received:
it is determined that shooting is completed. When a single camera is shooting, the subsequent step S52 is executed; when the front and rear cameras are shooting synchronously, step S55 is executed.
Step S52: and displaying the video shot by the camera according to the display template in real time.
When shooting with a single camera, that is, with only the front camera or only the rear camera, the video being shot is displayed directly in real time according to the display template.
Step S53: and finishing the shooting of the video and determining a splicing template.
It may be determined that the photographing is completed according to the acquired photographing stop instruction. The stitching template may be determined by selection.
Step S54: and acquiring a locally stored or network-side video material, and splicing the video material and the shot video into a video according to the determined splicing template.
Step S55: and corresponding the front video shot by the front camera and the rear video shot by the rear camera to the corresponding areas in the selected display template for display.
When the front camera and the rear camera shoot synchronously, the front video shot by the front camera and the rear video shot by the rear camera are displayed in corresponding areas in the selected display template in real time.
Step S56: and finishing shooting the video and determining an editing template.
Specifically, the edit template may be determined according to the selection, or may be determined according to the current presentation template.
When the editing template is determined to be the splicing template, step S57 is executed; when the editing template is determined to be the serial template, step S58 is executed.
Step S57: and splicing the front video shot by the front camera and the rear video shot by the rear camera into a video according to the splicing template and then outputting the video.
In one embodiment, the method can include aligning the video frames in the shot front video and shot back video according to shooting time, and determining multiple groups of video frame groups with consistent time; corresponding the video frames in the video frame group to the corresponding areas in the determined splicing template, and splicing the video frames into a new video frame; and (4) connecting the new video frames in series into a video according to the time sequence and outputting the video.
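The align-then-stitch step can be sketched in Python as follows. The frame dictionaries, the timestamp tolerance, and the main/inset region mapping (rear video to the main region, front video to the inset) are all assumptions for illustration; a real implementation would composite pixels according to the splicing template.

```python
def align_and_stitch(front, rear, tol=1 / 60):
    """Pair front/rear frames whose shooting times agree within tol, then stitch
    each time-consistent pair into one new frame per the template's regions."""
    new_frames, j = [], 0
    for f in front:
        # advance through the rear video until its frame is not earlier than f
        while j < len(rear) and rear[j]["t"] < f["t"] - tol:
            j += 1
        if j < len(rear) and abs(rear[j]["t"] - f["t"]) <= tol:
            # assumed mapping: rear fills the main region, front fills the inset
            new_frames.append({"t": f["t"], "main": rear[j]["img"], "inset": f["img"]})
    return new_frames  # already in time order, ready to concatenate and output

front = [{"t": i / 30, "img": f"f{i}"} for i in range(3)]
rear = [{"t": i / 30, "img": f"r{i}"} for i in range(3)]
stitched = align_and_stitch(front, rear)
```

Because both cameras run on the same device clock, the tolerance only needs to absorb small frame-interval jitter between the two streams.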
Step S58: determining video frames to be reserved in a front video shot by a front camera and a rear video shot by a rear camera, and connecting the video frames in series into a video according to the time sequence.
In one embodiment, determining video frames to be retained in a front video shot by a front camera and a rear video shot by a rear camera may include: determining a video frame selected from a rear video shot by a rear camera as a video frame to be reserved; and determining a time period corresponding to the selected video frame, and determining video frames outside the time period in the front video shot by the front camera as the video frames to be reserved.
In one embodiment, determining video frames to be retained in a front video shot by a front camera and a rear video shot by a rear camera may include: determining a selection time period according to the operation information of at least one cutting frame of the rear video shot by the rear camera, and determining a video frame in the selection time period of the rear video as a video frame to be reserved; and determining the video frames outside the selected time period in the front video shot by the front camera as the video frames to be reserved.
Referring to fig. 7, if the user taps the rear video, a cutting frame appears, and the user can drag to change the range of the cutting frame by gesture, so as to adjust the time period corresponding to the rear-video frames to be retained. If the user needs another cutting frame, tapping an unselected part of the video creates a new cutting frame, whose length is likewise controlled by dragging with a gesture. By analogy, at least one time period set by the user with cutting frames is obtained, and the video frames within those time periods in the rear video are determined as the video frames to be retained; the video frames outside the selected time periods in the front video are determined as the video frames to be retained.
The video processing method based on front and rear camera shooting provided by the fifth embodiment of the invention can be applied to publishing vlog videos. When shooting a personal vlog, for example when checking in at a tourist attraction, the blogger can choose to shoot himself while shooting the attraction by selecting synchronous front-and-rear dual-camera shooting and the corresponding display template, for example any one of display templates B-F shown in fig. 6. During shooting, the attraction video shot by the rear camera and the blogger's video shot by the front camera are displayed in the corresponding areas of the display template. The two videos are played overlaid rather than composited, which reduces the amount of computation, meets the requirement of real-time preview, makes the previewed videos more synchronous, and improves the viewing experience.
The blogger can decide at any time, according to the currently displayed video effect, whether to switch the display template, and can switch it at any time. During shooting, operations such as pausing, or deleting parts of the shot video, can be performed. After shooting ends, the video is synthesized according to the display template currently set by the blogger, or according to another template the blogger selects before editing, specifically by splicing or serially connecting the front and rear videos. A synthesized video is thus output to the user once editing ends. If the whole process is splice editing, the blogger only needs to select a display template and tap to start and stop shooting (setting aside personalized options); if serial editing is used, the blogger additionally only needs to select the video frames to be retained during editing. Meanwhile, during editing, the blogger can clip the video, add background music, pictures, or text to it, and re-dub it. There is no need to shoot the attraction first, then shoot oneself, and then composite the two videos by other means: the attraction and the blogger are shot synchronously by the front and rear cameras, which makes shooting convenient for the blogger, flexibly meets various personalized needs, and yields a synthesized video with high synchronism and good continuity.
Optionally, if the blogger does not need synchronism between himself and the attraction but only needs to appear in the video, a single camera can be selected for shooting, for example the rear camera shooting the attraction; the blogger then only needs to select his own previously shot video or image as the added video material during editing, and the resulting video is a video of the blogger spliced into the attraction. There is no need to first obtain the shot attraction footage and then add one's own video or images to it by other means, which makes it convenient for the blogger to publish a vlog and shortens the interval between shooting and publishing.
After the blogger obtains the output synthesized video by using the method, the blogger can publish the video through the Vlog blog, and as shown in fig. 8, before publishing, the user can select a cover for the video, set the position where the user shoots the video, select a circle for publishing, and the like, and can also set whether the published video is allowed to be quoted, and the like.
Embodiment Six
The sixth embodiment of the invention provides a live broadcast method based on multi-camera shooting, which is shown in fig. 9 and comprises the following steps:
step S91: and the shooting client starts a corresponding camera according to the selected display template, and corresponds the video shot by the camera to a corresponding area in the display template for display.
Step S92: and editing the video shot by the camera in the current interval into a video according to the display template according to the preset interval.
Specifically, when the display template is a spliced template, videos shot by a camera in the current interval are spliced into one video according to the spliced template; when the display template is a serial template, determining video frames to be reserved in the video shot by the camera in the current interval, and serially connecting the video frames to be reserved into a video according to the time sequence.
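The interval-based editing of step S92 can be sketched as bucketing the synchronized frame stream into fixed intervals; each bucket is then spliced or serialized into one clip and pushed to the playing clients before the next interval fills. The frame dictionaries and the bucketing scheme are illustrative assumptions.

```python
def interval_chunks(frames, interval):
    """Bucket the synchronized stream into fixed time intervals; each bucket is
    edited (spliced or serialized) into one clip and sent out for playback."""
    buckets = {}
    for f in sorted(frames, key=lambda f: f["t"]):
        buckets.setdefault(int(f["t"] // interval), []).append(f)
    # emit buckets in time order; empty intervals simply produce no clip
    return [buckets[k] for k in sorted(buckets)]

stream = [{"t": t} for t in (0.0, 0.5, 1.2, 2.9)]
clips = interval_chunks(stream, interval=1.0)
```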
When a shooting client needs to live-broadcast objects in different viewing-angle ranges that exceed a single lens, the prior art shoots with multiple devices simultaneously and composites the footage for live broadcast, which increases live-broadcast delay, raises shooting cost, and wastes manpower. In the live broadcast method based on multi-camera shooting of the sixth embodiment of the invention, a plurality of cameras with different viewing-angle ranges are provided at one shooting client, realizing synchronous multi-view shooting; the videos shot synchronously by the cameras are spliced into one video according to the splicing template at the preset interval and played by each playing client, which saves manpower, saves cost, and reduces live-broadcast delay.
For example, when the shooting client needs to live-broadcast a conversation scene between two people or two groups of people, the front and rear cameras can be started synchronously, and the videos shot synchronously by the front and rear cameras are spliced into one video at the preset interval and sent to each playing client for playing.
Specifically, the dialog scene may be interview, teaching, chat, and the like.
For example, if an educational institution wants to live-broadcast students' classroom learning to parents waiting outside the classroom, the live broadcast method based on multi-camera shooting provided by the sixth embodiment of the present invention may also be adopted.
Embodiment Seven
The seventh embodiment of the present invention provides a communication method based on multi-camera shooting, which is shown in fig. 10 and includes the following steps:
step S101: and the first communication client starts a corresponding camera according to the selected display template, and corresponds the video shot by the camera to a corresponding area in the display template for display.
Step S102: and editing the video shot by the camera in the current interval into a video according to the display template according to the preset interval, and sending the edited video to the second communication client for playing.
The terms "first" and "second" do not refer to particular communication clients: when a communication client that shoots synchronously with multiple cameras is referred to as the first communication client, the other clients communicating with it are the second communication clients.
An example of an applicable communication scenario is that at least one of the plurality of communication clients is provided with multiple cameras, for example front and rear cameras. When the party at a communication client equipped with front and rear cameras consists of two people or two groups of people facing each other who must appear in the communication video at the same time, synchronous shooting with the front and rear cameras can be selected; the videos shot by the front and rear cameras in the current interval are spliced into one video according to the display template at the preset interval, and the spliced video is sent to the second communication client for playing.
Specifically, the communication is Instant Messaging (IM), and may be a video conference, a chat, and other scenes.
Based on the inventive concept of the present invention, an embodiment of the present invention further provides a video processing apparatus based on multi-camera shooting, which has a structure as shown in fig. 11 and includes:
the shooting module 111 is configured to start a corresponding camera according to the selected display template, and correspond a video shot by the camera to a corresponding area in the display template for display;
and the editing module 112 is configured to edit and output the video shot by the shooting module 111 according to a determined editing template, where the editing template includes the display template.
In an embodiment, if the editing template is a splicing template, the editing module 112 edits and outputs the shot video according to the determined editing template, which is specifically configured to:
and splicing the shot videos into one video according to the determined splicing template and then outputting the video.
In an embodiment, the editing module 112 splices the shot videos into one video according to the determined splicing template, and outputs the video, specifically configured to:
aligning video frames in the shot video according to shooting time, and determining a plurality of groups of video frame groups with consistent time; corresponding the video frames in the video frame group to the corresponding areas in the determined splicing template, and splicing the video frames into a new video frame; and connecting the new video frames in series into a video according to the time sequence and then outputting the video.
In one embodiment, the editing module 112 is further configured to, before temporally concatenating the new video frames into a video:
performing at least one of the following on the new video frame: adding audio to the selected new video frame; adding characters to the selected new video frame; adding pictures to the selected new video frame; pruning the new video frame.
In an embodiment, the editing module 112 edits and outputs the shot video according to the determined editing template, specifically to:
and acquiring a locally stored or network-side video material, and splicing the video material and at least one shot video into a video according to the determined splicing template.
In an embodiment, if the editing template is a serial template, the editing module 112 edits and outputs the shot video according to the determined editing template, which is specifically configured to:
and determining the video frames to be reserved in the shot video, and serially connecting the video frames to be reserved into a video according to the time sequence and then outputting the video.
In one embodiment, the editing module 112 determines the video frames to be retained in the captured video, and is specifically configured to:
determining the video frame to be reserved from the shot video according to at least one of the following methods: determining a video frame to be reserved in the video according to a preset rule; determining a selected video frame in the video as a video frame to be reserved; and determining the video frames which are not selected in the video as the video frames to be reserved.
In one embodiment, the editing module 112 determines the video frames to be retained in the captured video, and is specifically configured to:
determining at least one selectable video and one non-selectable video from the shot videos; determining a selection time period according to operation information of at least one cutting frame of selectable videos, determining video frames in the selection time period of the selectable videos as video frames to be reserved, and determining video frames outside the selection time period in the non-selectable videos as the video frames to be reserved.
In one embodiment, the shooting module 111, after starting the corresponding camera according to the selected display template, is further configured to:
pausing the shooting of the camera according to the received pause instruction; if a shooting continuing instruction is received, continuing to execute the corresponding area display of the video shot by the camera in the display template; if an exit instruction is received, exiting shooting of the camera; and if a back deletion instruction is received, deleting the corresponding video frame in the current video according to the back deletion instruction.
Based on the inventive concept of the present invention, an embodiment of the present invention further provides a video processing apparatus based on front and back camera shooting, which has a structure as shown in fig. 12, and includes:
the shooting module 121 is configured to start the front and rear cameras, and display the front video shot by the front camera and the rear video shot by the rear camera in corresponding areas in the selected display template;
and the editing module 122 is configured to edit the front video and the rear video captured by the capturing module 121 into one video according to the determined editing template and output the video.
In an embodiment, if the editing template is a splicing template, the editing module 122 edits the front video shot by the front camera and the rear video shot by the rear camera into one video according to the determined editing template, and then outputs the video, which is specifically configured to:
and splicing the front video shot by the front camera and the rear video shot by the rear camera into a video according to the splicing template, and then outputting the video.
In an embodiment, if the editing template is a serial template, the editing module 122 edits the front video shot by the front camera and the rear video shot by the rear camera into one video according to the determined editing template, and then outputs the video, which specifically includes:
determining video frames to be reserved in a front video shot by a front camera and a rear video shot by a rear camera, and connecting the video frames in series into a video according to the time sequence.
In one embodiment, the editing module 122 determines video frames to be retained in the front video shot by the front camera and the rear video shot by the rear camera, and is specifically configured to:
determining a video frame selected from a rear video shot by a rear camera as a video frame to be reserved; and determining a time period corresponding to the selected video frame, and determining video frames outside the time period in the front video shot by the front camera as video frames to be reserved.
In one embodiment, the editing module 122 determines video frames to be retained in the front video shot by the front camera and the rear video shot by the rear camera, and is specifically configured to:
determining a selection time period according to operation information of at least one cutting frame of a rear video shot by a rear camera, and determining a video frame in the selection time period of the rear video as a video frame to be reserved; and determining the video frames outside the selected time period in the front video shot by the front camera as the video frames to be reserved.
Based on the inventive concept of the present invention, an embodiment of the present invention further provides a multimedia distribution system, which has a structure as shown in fig. 13 and includes a shooting client and at least one browsing client;
and the shooting client 131 is configured to send the edited video obtained by the above-described method to the browsing client 132 for playing.
Specifically, the shooting client 131 may send the video to the browsing client 132 through a server of the multimedia distribution system or through a cloud.
Based on the inventive concept of the present invention, an embodiment of the present invention further provides a live broadcast system based on multi-camera shooting, which has a structure shown in fig. 14 and includes: a photographing client 141 and at least one playing client 142;
the shooting client 141 is configured to start a corresponding camera according to the selected display template, and correspond a video shot by the camera to a corresponding area in the display template for display; and editing the video shot by the camera in the current interval into a video according to the display template according to a preset interval, and sending the edited video to the playing client 142 for playing.
Specifically, the shooting client 141 sends the edited video to the playing client 142, and the video may be sent through a server of a live broadcast system or through a cloud.
Based on the inventive concept of the present invention, an embodiment of the present invention further provides a video communication system based on multi-camera shooting, which has a structure as shown in fig. 15, and includes a first communication client 151 and at least one second communication client 152;
the first communication client 151 is configured to start a corresponding camera according to the selected presentation template, and correspond a video captured by the camera to a corresponding area in the presentation template for presentation; and editing the video shot by the camera in the current interval into a video according to the display template according to a preset interval, and sending the edited video to at least one second communication client 152 for playing.
Specifically, the first communication client 151 sends the edited video to the second communication client 152, which may be through a server of the video communication system or through a cloud.
With regard to the apparatus and system in the above embodiments, the specific manner in which the respective modules perform operations has been described in detail in relation to the embodiments of the method, and will not be elaborated upon here.
Based on the same inventive concept, the embodiment of the present invention also provides a computer readable storage medium, on which computer instructions are stored, and when the instructions are executed by a processor, the method is implemented.
Unless specifically stated otherwise, terms such as processing, computing, calculating, determining, displaying, or the like, may refer to an action and/or process of one or more processing or computing systems or similar devices that manipulates and transforms data represented as physical (e.g., electronic) quantities within the processing system's registers and memories into other data similarly represented as physical quantities within the processing system's memories, registers or other such information storage, transmission or display devices. Information and signals may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof.
It should be understood that the specific order or hierarchy of steps in the processes disclosed is an example of exemplary approaches. Based upon design preferences, it is understood that the specific order or hierarchy of steps in the processes may be rearranged without departing from the scope of the present disclosure. The accompanying method claims present elements of the various steps in a sample order, and are not intended to be limited to the specific order or hierarchy presented.
In the foregoing detailed description, various features are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed embodiments of the subject matter require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus, the following claims are hereby expressly incorporated into the detailed description, with each claim standing on its own as a separate preferred embodiment of the invention.
Those of skill would further appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. Of course, the storage medium may also be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in a user terminal. Of course, the processor and the storage medium may reside as discrete components in a user terminal.
For a software implementation, the techniques described herein may be implemented with modules (e.g., procedures, functions, and so on) that perform the functions described herein. The software codes may be stored in memory units and executed by processors. The memory unit may be implemented within the processor or external to the processor, in which case it can be communicatively coupled to the processor via various means as is known in the art.
What has been described above includes examples of one or more embodiments. It is, of course, not possible to describe every conceivable combination of components or methodologies for purposes of describing the aforementioned embodiments, but one of ordinary skill in the art may recognize that many further combinations and permutations of various embodiments are possible. Accordingly, the embodiments described herein are intended to embrace all such alterations, modifications and variations that fall within the scope of the appended claims. Furthermore, to the extent that the term "includes" is used in either the detailed description or the claims, such term is intended to be inclusive in a manner similar to the term "comprising" as "comprising" is interpreted when employed as a transitional word in a claim. Furthermore, any use of the term "or" in the specification or in the claims is intended to mean a "non-exclusive or". The terms "first," "second," and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.

Claims (24)

1. A video processing method based on multi-camera shooting, comprising the following steps:
starting a corresponding camera according to a selected display template, and displaying the video shot by the camera in a corresponding area of the display template;
and after finishing the shooting of the video, editing the shot video according to a determined editing template and then outputting the edited video, wherein the editing template comprises the display template.
2. The method of claim 1, wherein the editing template is selected from a preset editing template library.
3. The method of claim 1, wherein the display template is selectable during the shooting of the video.
4. The method according to claim 1, wherein if the editing template is a splicing template, editing the shot video according to the determined editing template and then outputting the edited video specifically comprises:
splicing the shot videos into one video according to the determined splicing template and then outputting the spliced video.
5. The method according to claim 4, wherein splicing the shot videos into one video according to the determined splicing template and then outputting the spliced video specifically comprises:
aligning video frames in the shot videos according to shooting time, and determining a plurality of video frame groups whose video frames are consistent in time;
mapping the video frames in each video frame group to the corresponding areas of the determined splicing template, and splicing the video frames into a new video frame;
and concatenating the new video frames into one video in chronological order and then outputting the video.
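The splicing flow of claim 5 can be illustrated with a minimal sketch: frames from several cameras are aligned by shooting time, each time-consistent group is composited into the template's regions, and the composited frames are concatenated chronologically. The function name, the dict-of-frames video model, and the region geometry are illustrative assumptions, not the patent's actual implementation; frames are assumed pre-scaled to their region sizes.

```python
import numpy as np

def splice_videos(videos, regions, canvas_hw):
    """videos: list of {timestamp: HxWx3 frame}; regions: list of (y, x, h, w)."""
    # 1. Align frames by shooting time: keep only timestamps present in every
    #    camera's stream, forming time-consistent video frame groups.
    common_ts = sorted(set.intersection(*(set(v) for v in videos)))
    spliced = []
    for ts in common_ts:
        # 2. Composite each frame group into the template's corresponding areas.
        canvas = np.zeros((*canvas_hw, 3), dtype=np.uint8)
        for video, (y, x, h, w) in zip(videos, regions):
            canvas[y:y + h, x:x + w] = video[ts]
        spliced.append(canvas)
    # 3. The new frames, in chronological order, form the output video.
    return spliced

# Two tiny 2x2-pixel "cameras" and a side-by-side template on a 2x4 canvas.
cam_a = {0: np.full((2, 2, 3), 10, np.uint8), 1: np.full((2, 2, 3), 20, np.uint8)}
cam_b = {0: np.full((2, 2, 3), 30, np.uint8), 2: np.full((2, 2, 3), 40, np.uint8)}
out = splice_videos([cam_a, cam_b], [(0, 0, 2, 2), (0, 2, 2, 2)], (2, 4))
# only timestamp 0 is shared by both cameras, so one spliced frame results
```

Only timestamps common to all streams produce a spliced frame, which matches the claim's requirement that each frame group be consistent in time.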
6. The method of claim 5, wherein before concatenating the new video frames into one video in chronological order, the method further comprises:
performing at least one of the following operations on the new video frames:
adding audio to a selected new video frame;
adding text to a selected new video frame;
adding a picture to a selected new video frame;
trimming the new video frames.
7. The method according to claim 1, wherein editing the shot video according to the determined editing template and then outputting the edited video specifically comprises:
acquiring a locally stored or network-side video material, and splicing the video material and at least one shot video into one video according to the determined splicing template.
8. The method according to claim 1, wherein if the editing template is a serial template, editing the shot video according to the determined editing template and then outputting the edited video specifically comprises:
determining the video frames to be retained in the shot video, and concatenating the retained video frames into one video in chronological order and then outputting the video.
9. The method according to claim 8, wherein determining the video frames to be retained in the shot video specifically comprises:
determining the video frames to be retained from the shot video according to at least one of the following methods:
determining video frames to be retained in the video according to a preset rule;
determining selected video frames in the video as the video frames to be retained;
and determining unselected video frames in the video as the video frames to be retained.
10. The method according to claim 8, wherein determining the video frames to be retained in the shot video specifically comprises:
determining at least one selectable video and one non-selectable video from the shot videos;
determining a selection time period according to operation information of a cutting frame on the at least one selectable video, determining the video frames within the selection time period of the selectable video as video frames to be retained, and determining the video frames outside the selection time period of the non-selectable video as video frames to be retained.
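The complementary selection of claim 10 can be sketched as follows: a cutting-frame operation on the selectable video yields a time period; the selectable video contributes the frames inside that period, the non-selectable video contributes the frames outside it, and the retained frames are then concatenated chronologically (claim 8). The function name and the list-of-(timestamp, frame) model are illustrative assumptions.

```python
def frames_to_retain(selectable, non_selectable, period):
    """selectable/non_selectable: lists of (timestamp, frame); period: (start, end)."""
    start, end = period  # selection time period from the cutting-frame operation
    # keep frames of the selectable video that fall inside the period
    inside = [(t, f) for t, f in selectable if start <= t <= end]
    # keep frames of the non-selectable video that fall outside the period
    outside = [(t, f) for t, f in non_selectable if not (start <= t <= end)]
    # concatenate the retained frames into one video in chronological order
    return sorted(inside + outside, key=lambda tf: tf[0])

sel = [(0, "S0"), (1, "S1"), (2, "S2"), (3, "S3")]
non = [(0, "N0"), (1, "N1"), (2, "N2"), (3, "N3")]
result = frames_to_retain(sel, non, (1, 2))
# timestamps 1-2 come from the selectable video, the rest from the other stream
```

The two streams fill complementary time ranges, so the output switches between cameras at the period boundaries, which is the effect claims 10 and 15-16 describe.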
11. The method of any one of claims 1-10, wherein, after starting the corresponding camera according to the selected display template, the method further comprises:
pausing the shooting of the camera upon receiving a pause instruction;
if a continue-shooting instruction is received, continuing to display the video shot by the camera in the corresponding area of the display template;
if an exit instruction is received, exiting the shooting of the camera;
and if a back-deletion instruction is received, deleting the corresponding video frames in the current video according to the back-deletion instruction.
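The shooting controls of claim 11 amount to a small state machine over an in-progress recording. The sketch below is a hypothetical illustration; the class name, instruction strings, and the frame count carried by the back-deletion instruction are all assumptions, not part of the patent.

```python
class ShootingSession:
    """Minimal model of one camera's recording under claim 11's instructions."""

    def __init__(self):
        self.frames = []     # video frames recorded so far
        self.paused = False  # set by pause, cleared by continue-shooting
        self.active = True   # cleared by the exit instruction

    def capture(self, frame):
        # frames are recorded/displayed only while shooting is active and not paused
        if self.active and not self.paused:
            self.frames.append(frame)

    def handle(self, instruction, count=0):
        if instruction == "pause":
            self.paused = True
        elif instruction == "continue":
            self.paused = False  # resume display in the template's area
        elif instruction == "exit":
            self.active = False  # exit the shooting of the camera
        elif instruction == "back_delete":
            # delete the corresponding trailing frames of the current video
            del self.frames[len(self.frames) - count:]

s = ShootingSession()
for f in ("f1", "f2", "f3"):
    s.capture(f)
s.handle("pause"); s.capture("dropped while paused")
s.handle("continue"); s.capture("f4")
s.handle("back_delete", count=2)
# frames now hold ["f1", "f2"]
```

A frame captured while paused is silently dropped, and back-deletion removes from the tail of the current video, matching the claim's "deleting the corresponding video frames in the current video".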
12. A video processing method based on shooting with front and rear cameras, comprising the following steps:
starting the front camera and the rear camera, and displaying the front video shot by the front camera and the rear video shot by the rear camera in corresponding areas of a selected display template;
and editing the front video shot by the front camera and the rear video shot by the rear camera into one video according to a determined editing template and then outputting the video.
13. The method according to claim 12, wherein if the editing template is a splicing template, editing the front video shot by the front camera and the rear video shot by the rear camera into one video according to the determined editing template and then outputting the video specifically comprises:
splicing the front video shot by the front camera and the rear video shot by the rear camera into one video according to the splicing template and then outputting the video.
14. The method according to claim 12, wherein if the editing template is a serial template, editing the front video shot by the front camera and the rear video shot by the rear camera into one video according to the determined editing template and then outputting the video specifically comprises:
determining the video frames to be retained in the front video shot by the front camera and the rear video shot by the rear camera, and concatenating the retained video frames into one video in chronological order.
15. The method of claim 14, wherein determining the video frames to be retained in the front video shot by the front camera and the rear video shot by the rear camera specifically comprises:
determining video frames selected from the rear video shot by the rear camera as video frames to be retained;
and determining the time period corresponding to the selected video frames, and determining the video frames outside that time period in the front video shot by the front camera as video frames to be retained.
16. The method of claim 14, wherein determining the video frames to be retained in the front video shot by the front camera and the rear video shot by the rear camera specifically comprises:
determining a selection time period according to operation information of a cutting frame on the rear video shot by the rear camera, and determining the video frames within the selection time period of the rear video as video frames to be retained;
and determining the video frames outside the selection time period in the front video shot by the front camera as video frames to be retained.
17. A live broadcast method based on multi-camera shooting, comprising the following steps:
a shooting client starting a corresponding camera according to a selected display template, and displaying the video shot by the camera in a corresponding area of the display template;
and at a preset interval, editing the video shot by the camera during the current interval into one video according to the display template, and sending the edited video to a playing client for playing.
18. The method according to claim 17, wherein if the display template is a splicing template, editing the video shot by the camera during the current interval into one video according to the display template specifically comprises:
splicing the videos shot by the cameras during the current interval into one video according to the splicing template.
19. The method according to claim 17, wherein if the display template is a serial template, editing the video shot by the camera during the current interval into one video according to the display template specifically comprises:
determining the video frames to be retained in the video shot by the camera during the current interval, and concatenating the retained video frames into one video in chronological order.
20. A video communication method based on multi-camera shooting, comprising the following steps:
a first communication client starting a corresponding camera according to a selected display template, and displaying the video shot by the camera in a corresponding area of the display template;
and at a preset interval, editing the video shot by the camera during the current interval into one video according to the display template, and sending the edited video to a second communication client for playing.
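The interval-based flow shared by the live broadcast and video communication claims (17-20) can be sketched as follows: every preset interval, the frames captured during that interval are edited into one short video and sent to the playing/receiving client. The function name, the `edit`/`send` callbacks, and the (timestamp, frame) stream model are illustrative assumptions.

```python
def run_intervals(frame_stream, interval, edit, send):
    """frame_stream: iterable of (timestamp, frame), timestamps ascending;
    interval: preset interval length; edit: batch -> video; send: deliver it."""
    batch, window_end = [], interval
    for ts, frame in frame_stream:
        while ts >= window_end:      # the current interval is complete
            if batch:
                send(edit(batch))    # edit per the display template, then send
            batch, window_end = [], window_end + interval
        batch.append((ts, frame))
    if batch:
        send(edit(batch))            # flush the final, possibly partial, interval

sent = []
stream = [(0.2, "a"), (0.8, "b"), (1.1, "c"), (2.5, "d")]
run_intervals(stream, 1.0, edit=lambda b: [f for _, f in b], send=sent.append)
# intervals [0,1): a,b   [1,2): c   [2,3): d
```

Batching per interval is what lets the shooting client keep a live stream flowing while still applying template-based editing to each segment before delivery.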
21. A video processing apparatus based on multi-camera shooting, comprising:
a shooting module, configured to start a corresponding camera according to a selected display template and display the video shot by the camera in a corresponding area of the display template;
and an editing module, configured to edit the video shot by the shooting module according to a determined editing template and output the edited video, wherein the editing template comprises the display template.
22. A video processing apparatus based on shooting with front and rear cameras, comprising:
a shooting module, configured to start the front camera and the rear camera and display the front video shot by the front camera and the rear video shot by the rear camera in corresponding areas of a selected display template;
and an editing module, configured to edit the front video and the rear video shot by the shooting module into one video according to a determined editing template and then output the video.
23. A multimedia release system, comprising a shooting client and at least one browsing client, wherein:
the shooting client is configured to obtain an edited video according to the method of any one of claims 1-16;
and the browsing client is configured to play the received edited video.
24. A computer-readable storage medium having stored thereon computer instructions, which when executed by a processor, implement the method of any of claims 1-20.
CN202010333401.4A 2020-04-24 2020-04-24 Video processing method, device and system based on multi-camera shooting Pending CN113556482A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010333401.4A CN113556482A (en) 2020-04-24 2020-04-24 Video processing method, device and system based on multi-camera shooting


Publications (1)

Publication Number Publication Date
CN113556482A 2021-10-26

Family

ID=78129716

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010333401.4A Pending CN113556482A (en) 2020-04-24 2020-04-24 Video processing method, device and system based on multi-camera shooting

Country Status (1)

Country Link
CN (1) CN113556482A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106165430A (en) * 2016-06-29 2016-11-23 北京小米移动软件有限公司 Net cast method and device
CN107105315A (en) * 2017-05-11 2017-08-29 广州华多网络科技有限公司 Live broadcasting method, the live broadcasting method of main broadcaster's client, main broadcaster's client and equipment
US20170332020A1 (en) * 2015-02-04 2017-11-16 Tencent Technology (Shenzhen) Company Ltd. Video generation method, apparatus and terminal
CN110072070A (en) * 2019-03-18 2019-07-30 华为技术有限公司 A kind of multichannel kinescope method and equipment


Similar Documents

Publication Publication Date Title
US10735798B2 (en) Video broadcast system and a method of disseminating video content
US9852768B1 (en) Video editing using mobile terminal and remote computer
CN109151537B (en) Video processing method and device, electronic equipment and storage medium
WO2020107297A1 (en) Video clipping control method, terminal device, system
US10090018B2 (en) Method and device for generating video slides
CA2991623A1 (en) Media production system with scheduling feature
JPH11234560A (en) Multimedia edit method and device therefor
CN108702464B (en) Video processing method, control terminal and mobile device
CN112866796A (en) Video generation method and device, electronic equipment and storage medium
CN111866434A (en) Video co-shooting method, video editing device and electronic equipment
CN112218154B (en) Video acquisition method and device, storage medium and electronic device
US9966110B2 (en) Video-production system with DVE feature
CN113473207B (en) Live broadcast method and device, storage medium and electronic equipment
US20240121452A1 (en) Video processing method and apparatus, device, and storage medium
CN113542624A (en) Method and device for generating commodity object explanation video
KR102274723B1 (en) Device, method and computer program for editing time slice images
US9325776B2 (en) Mixed media communication
CN106504077A (en) A kind of method and device for showing information of real estate
CN113497894B (en) Video shooting method, device, terminal and storage medium
CN113660528A (en) Video synthesis method and device, electronic equipment and storage medium
CN105814905A (en) Method and system for synchronizing usage information between device and server
KR101425950B1 (en) Method for generating a sound series of photographs and apparatus for generating and reproducing such sound series
KR101879166B1 (en) A real-world studio system capable of producing contents using the control of a virtual studio and its operating method
CN107204026B (en) Method and device for displaying animation
CN113556482A (en) Video processing method, device and system based on multi-camera shooting

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination