WO2023040844A1 - Video processing method and apparatus, electronic device, and readable storage medium - Google Patents

Video processing method and apparatus, electronic device, and readable storage medium

Info

Publication number
WO2023040844A1
WO2023040844A1 (application PCT/CN2022/118527)
Authority
WO
WIPO (PCT)
Prior art keywords
video
input
processing device
image sequence
window
Prior art date
Application number
PCT/CN2022/118527
Other languages
English (en)
French (fr)
Other versions
WO2023040844A9 (zh)
WO2023040844A8 (zh)
Inventor
陈喆
Original Assignee
维沃移动通信(杭州)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 维沃移动通信(杭州)有限公司
Publication of WO2023040844A1
Publication of WO2023040844A9
Publication of WO2023040844A8

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431 Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44016 Processing of video elementary streams involving splicing one content stream with another content stream, e.g. for substituting a video clip

Definitions

  • The present application belongs to the field of video processing, and in particular relates to a video processing method and apparatus, an electronic device, and a readable storage medium.
  • In the related art, editing of recorded video is mainly carried out on a PC (personal computer): video recorded by a mobile phone is transferred to the PC and edited with professional video editing software. However, the editing operations are relatively complicated and the threshold is high for ordinary users, so this workflow is better suited to professional users.
  • The purpose of the embodiments of the present application is to provide a video processing method and apparatus, an electronic device, and a readable storage medium, which can solve the problem in the related art that video editing operations are complex and difficult.
  • In a first aspect, an embodiment of the present application provides a video processing method, the method comprising:
  • receiving a first input from a user to a first video processing device;
  • in response to the first input, displaying, on a video preview interface, a first video image sequence captured by a first camera of the first video processing device;
  • in a case where the video preview interface displays a second video image sequence captured by a second camera of a second video processing device, generating a target video according to the first video image sequence and the second video image sequence;
  • wherein the target video includes at least one frame of the first video image in the first video image sequence and at least one frame of the second video image in the second video image sequence.
  • In a second aspect, an embodiment of the present application provides a first video processing device, which includes:
  • a first receiving module configured to receive a first input from a user to the first video processing device;
  • a first display module configured to display, on a video preview interface in response to the first input, the first video image sequence captured by the first camera of the first video processing device;
  • a generating module configured to generate a target video according to the first video image sequence and the second video image sequence in a case where the video preview interface displays the second video image sequence captured by the second camera of the second video processing device;
  • wherein the target video includes at least one frame of the first video image in the first video image sequence and at least one frame of the second video image in the second video image sequence.
  • In a third aspect, an embodiment of the present application provides an electronic device, the electronic device including a processor, a memory, and a program or instruction stored in the memory and executable on the processor, where the program or instruction, when executed by the processor, implements the steps of the method described in the first aspect.
  • In a fourth aspect, an embodiment of the present application provides a readable storage medium, on which a program or an instruction is stored, and when the program or instruction is executed by a processor, the steps of the method described in the first aspect are implemented.
  • In a fifth aspect, an embodiment of the present application provides a chip, the chip including a processor and a communication interface, the communication interface being coupled to the processor, and the processor being used to run programs or instructions so as to implement the method described in the first aspect.
  • In the embodiments of the present application, video image sequences captured by the cameras of different video processing devices may be displayed on the video preview interface, and video editing may be performed according to the first video image sequence and the second video image sequence respectively captured by the cameras of the different video processing devices, to generate a target video comprising at least one frame of the first video image and at least one frame of the second video image, wherein the at least one frame of the first video image comes from the first video image sequence generated by the first video processing device, and the at least one frame of the second video image comes from the second video image sequence generated by the second video processing device.
  • Thus, when the video preview interface displays the video image sequences captured by the cameras of different video processing devices, the video processing method of the embodiments of the present application can edit the different video image sequences generated by the different video processing devices to generate the target video, with no need for professional video editing software, which reduces the difficulty and complexity of video editing operations.
  • Fig. 1 is one of the flowcharts of the video processing method provided by the embodiment of the present application.
  • Fig. 2A is one of the schematic diagrams of the video processing interface provided by the embodiment of the present application.
  • Fig. 2B is the second schematic diagram of the video processing interface provided by the embodiment of the present application.
  • Fig. 2C is the third schematic diagram of the video processing interface provided by the embodiment of the present application.
  • Fig. 2D is the fourth schematic diagram of the video processing interface provided by the embodiment of the present application.
  • Fig. 2E is the fifth schematic diagram of the video processing interface provided by the embodiment of the present application.
  • Fig. 2F is the sixth schematic diagram of the video processing interface provided by the embodiment of the present application.
  • Fig. 2G is the seventh schematic diagram of the video processing interface provided by the embodiment of the present application.
  • Fig. 2H is the eighth schematic diagram of the video processing interface provided by the embodiment of the present application.
  • Fig. 2I is the ninth schematic diagram of the video processing interface provided by the embodiment of the present application.
  • Fig. 2J is the tenth schematic diagram of the video processing interface provided by the embodiment of the present application.
  • Fig. 2K is the eleventh schematic diagram of the video processing interface provided by the embodiment of the present application.
  • Fig. 2L is the twelfth schematic diagram of the video processing interface provided by the embodiment of the present application.
  • Fig. 2M is the thirteenth schematic diagram of the video processing interface provided by the embodiment of the present application.
  • Fig. 3 is the second flowchart of the video processing method provided by the embodiment of the present application.
  • Fig. 4A is the fourteenth schematic diagram of the video processing interface provided by the embodiment of the present application.
  • Fig. 4B is the fifteenth schematic diagram of the video processing interface provided by the embodiment of the present application.
  • Fig. 5 is a block diagram of a video processing device provided in an embodiment of the present application.
  • Fig. 6 is one of the schematic diagrams of the hardware structure of the electronic device provided by the embodiment of the present application.
  • Fig. 7 is the second schematic diagram of the hardware structure of the electronic device provided by the embodiment of the present application.
  • FIG. 1 shows a flowchart of a video processing method according to an embodiment of the present application.
  • the method can be applied to a first video processing device, and the method can include the following steps:
  • Step 101: receive a first input from a user to the first video processing device.
  • In the embodiment of the present application, the first input may include, but is not limited to: a click input by the user on the first video processing device, a voice command input by the user, or a specific gesture input by the user; the embodiment of the present application is not limited thereto.
  • The specific gesture in the embodiment of the present application may be any one of a single-click gesture, a sliding gesture, a drag gesture, a pressure recognition gesture, a long-press gesture, an area change gesture, a double-press gesture, and a double-click gesture; the click input in the embodiment may be a single-click input, a double-click input, or any number of click inputs, and may also be a long-press input or a short-press input.
  • Step 102: in response to the first input, display a first video image sequence captured by a first camera of the first video processing device on a video preview interface.
  • The video preview interface can be a video shooting preview interface, in which case the first video image sequence is a video image sequence captured by the first camera in real time, and the video being recorded in real time is displayed frame by frame in the shooting preview interface.
  • Alternatively, the video preview interface can be the playback preview interface of a generated video, in which case the first video image sequence is a video image sequence previously captured by the first camera, and the recorded video is displayed frame by frame in the playback preview interface.
  • Step 103: in a case where the video preview interface displays the second video image sequence captured by the second camera of the second video processing device, generate a target video according to the first video image sequence and the second video image sequence.
  • Similar to the above-mentioned first video image sequence, the second video image sequence may be a video image sequence of a video recorded in real time, or may be a video image sequence of a recorded video.
  • The second video processing device is communicatively connected with the first video processing device; therefore, the first video processing device can display on the video preview interface not only its own first video image sequence but also the second video image sequence generated by another video processing device, here the second video processing device.
  • The following description takes, as an example, the case where each video image sequence is captured in real time by the corresponding camera and the video preview interface is the shooting preview interface. When the video image sequences are those of recorded videos, the implementation principles of the methods in the embodiments of the application are similar and will not be repeated one by one.
  • The present application does not limit the display order of the first video image sequence and the second video image sequence in the video preview interface.
  • The target video includes at least one frame of the first video image in the first video image sequence and at least one frame of the second video image in the second video image sequence.
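The embodiment states only that the target video contains frames from both sequences; it does not prescribe a splicing algorithm. As a purely illustrative sketch (Python, with hypothetical `(source, index)` tuples standing in for decoded frames), generating the target video could amount to concatenating segments selected from each source sequence in timeline order:

```python
# Illustrative sketch only: the embodiment does not prescribe a splicing
# algorithm. A "segment" is a half-open frame range taken from one source.

def generate_target_video(first_seq, second_seq, segments):
    """Splice a target video from two source sequences.

    segments: list of (source, start, end), where source is 1 or 2 and
    [start, end) indexes into that source's list of frames.
    """
    sources = {1: first_seq, 2: second_seq}
    target = []
    for source, start, end in segments:
        target.extend(sources[source][start:end])
    return target

# Hypothetical 5-frame sequences from phone M (source 1) and phone B (source 2).
first = [("M", i) for i in range(5)]
second = [("B", i) for i in range(5)]

# Keep M's first two frames, cut to B for three frames, then back to M.
target = generate_target_video(first, second, [(1, 0, 2), (2, 0, 3), (1, 2, 5)])
```

The resulting target video contains at least one frame from each source sequence, as the claim requires, while the choice of cut points is left entirely to the editing operation.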
  • In the embodiments of the present application, video image sequences captured by the cameras of different video processing devices may be displayed on the video preview interface, and video editing may be performed according to the first video image sequence and the second video image sequence respectively captured by the cameras of the different video processing devices, to generate a target video comprising at least one frame of the first video image and at least one frame of the second video image, wherein the at least one frame of the first video image comes from the first video image sequence generated by the first video processing device, and the at least one frame of the second video image comes from the second video image sequence generated by the second video processing device.
  • Thus, when the video preview interface displays the video image sequences captured by the cameras of different video processing devices, the video processing method of the embodiments of the present application can edit the different video image sequences generated by the different video processing devices to generate the target video, with no need for professional video editing software, which reduces the difficulty and complexity of video editing operations.
  • Optionally, the video preview interface includes a main window and a first sub-window, where the main window is used to display the first video image sequence and the first sub-window is used to display the second video image sequence.
  • That is, different windows in the video preview interface are used to display video data recorded in real time by different video processing devices.
  • The shooting preview interface may include a main window and at least one sub-window; optionally, there may be multiple sub-windows, for displaying the video image sequences recorded in real time by a plurality of other video processing devices.
  • The main window and the sub-windows in the shooting preview interface display video image sequences of different video processing devices, and different sub-windows may also display video image sequences of different video processing devices.
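The window arrangement described above can be modeled, purely for illustration, as a main window plus a list of sub-window slots, each of which may be bound to one connected device. All names here are hypothetical; the embodiment does not define such a data structure:

```python
# Hypothetical model of the preview interface: one main window and several
# sub-window slots, each of which may be bound to a connected device.

class PreviewInterface:
    def __init__(self, own_device, sub_window_count=3):
        # By default the main window shows the device's own camera.
        self.main_window = own_device
        self.sub_windows = [None] * sub_window_count

    def bind(self, slot, device):
        """Display a connected device's video image sequence in a sub-window."""
        self.sub_windows[slot] = device

ui = PreviewInterface("phone M")
ui.bind(0, "phone C")
ui.bind(1, "phone B")
ui.bind(2, "phone A")
```

Each sub-window shows a different device's stream, matching the one-window-per-device arrangement in the description.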
  • For illustration, the following takes as an example the case where the first video processing device is a mobile phone M and the other video processing devices communicatively connected to the first video processing device include mobile phone A, mobile phone B, and mobile phone C.
  • The video image sequences displayed in different windows of the shooting preview interface can be the video image sequences recorded in real time by different mobile phones at different shooting angles in the same shooting scene, or they can be video image sequences recorded by multiple cameras in different shooting scenes.
  • The shooting scene may be a sports scene, such as playing basketball, playing football, or other sports.
  • The video processing device in the embodiment of the present application may be a mobile terminal, including a mobile phone, a tablet, and the like; in the following, a mobile phone is taken as an example for illustration.
  • The user can enter the multi-camera video editing mode by pinching the screen with two fingers (for example, zooming to the minimum), thereby displaying the shooting preview interface shown in Figure 2B. Through this zooming operation, the shooting preview interface is divided into multiple windows, where the larger window is the main window 21, which by default displays the image captured by the camera of the first video processing device (such as mobile phone M) of the embodiment of the present application, and the remaining smaller windows are sub-windows.
  • The connection modes between different video processing devices may be WiFi (wireless network), Bluetooth, and the like; in the following, WiFi connection is used as an example for description.
  • After mobile phone M establishes WiFi connections with mobile phone A, mobile phone B, and mobile phone C, the video data recorded in real time by mobile phone A, mobile phone B, and mobile phone C can be transmitted to mobile phone M in real time.
  • The multi-camera video editing mode requires multiple mobile phones to work together, so the first video processing device first needs to be connected to multiple mobile phones. In the shooting preview interface shown in Fig. 2C (that is, the multi-camera video editing mode interface), the user can click any sub-window, here sub-window 22, to display the mobile phone search interface shown in Fig. 2D.
  • Mobile phone M can then establish a WiFi hotspot and wait for other mobile phones to connect.
  • If other mobile phones are also in the multi-camera video editing mode, they can search for nearby WiFi signals. In other words, when the sub-window is not clicked, a phone in this mode only searches for nearby WiFi signals; when the sub-window is clicked, the phone creates a WiFi hotspot. The WiFi hotspot can be a passwordless WiFi hotspot.
  • On other mobile phones, the normal video recording mode can likewise be switched to the multi-camera video editing mode by pinching the shooting preview interface; see the multi-camera video editing mode interface of mobile phone A shown in Figure 2E, that of mobile phone B shown in Figure 2F, and that of mobile phone C shown in Figure 2G. The main window in each of the multi-camera video editing mode interfaces of mobile phone A, mobile phone B, and mobile phone C shows the video content recorded by that phone itself. The principles of Fig. 2E, Fig. 2F, and Fig. 2G are similar to those of the multi-camera video editing mode interface of mobile phone M shown in Fig. 2C, and will not be repeated here.
  • The hotspot information of the WiFi hotspot may carry some parameter information of mobile phone M, for example, a parameter indicating that it is in the multi-camera video editing mode, identification information of mobile phone M, and the like.
  • The first WiFi connection can be performed through authentication; for subsequent WiFi connections, no authentication is required and the WiFi connection can be made directly.
  • When two mobile phones connect for the first time through the WiFi hotspot of the multi-camera video editing mode, the connection can be realized in the following manner: after another mobile phone in the multi-camera video editing mode searches for WiFi hotspots, if the hotspot information of a found hotspot indicates a WiFi hotspot in the multi-camera video editing mode, that phone can actively connect to the WiFi hotspot and enter the authentication mode. Through authentication, the other mobile phones can send their own device information to mobile phone M (also called the main mobile phone) and wait for the connection application of the main mobile phone; the main mobile phone can display the identification information of each mobile phone requesting authentication in the mobile phone search interface. For subsequent connections, after searching for WiFi hotspots, if the hotspot information of a found hotspot indicates a WiFi hotspot in the multi-camera video editing mode, the phone can actively connect to the WiFi hotspot directly, so that mobile phone M can establish a WiFi connection with the other mobile phones.
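The first-time connection flow described above can be simulated in a few lines. This is only a sketch of the handshake logic; the names (`MODE_FLAG`, `Hotspot`, `scan_and_connect`) are hypothetical, and an actual implementation would use the platform's WiFi APIs, which the embodiment does not specify:

```python
# Simulated sketch of the first-time connection flow: the hotspot information
# carries a mode parameter and the owner's identity; a scanning phone connects
# only to hotspots whose information indicates multi-camera video editing mode.

MODE_FLAG = "multi_camera_edit"

class Hotspot:
    def __init__(self, owner_id):
        # Hotspot information carries the mode parameter and owner identity.
        self.info = {"mode": MODE_FLAG, "owner": owner_id}
        self.pending = []  # phones that authenticated, awaiting approval

    def authenticate(self, phone_id):
        self.pending.append(phone_id)

def scan_and_connect(phone_id, hotspots):
    """A phone in editing mode connects to the first matching hotspot found."""
    for hs in hotspots:
        if hs.info.get("mode") == MODE_FLAG:
            hs.authenticate(phone_id)  # send own info, wait for approval
            return hs.info["owner"]
    return None

main = Hotspot("phone M")
connected_to = scan_and_connect("phone C", [main])
```

After this exchange, the main phone holds the list of phones requesting authentication, which matches the mobile phone search interface behavior described above.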
  • Optionally, the method according to the embodiment of the present application may include: displaying at least one device identifier, where a device identifier indicates a video processing device that is communicatively connected to the first video processing device; receiving a third input from the user on a target device identifier among the at least one device identifier; and finally, in response to the third input, displaying in the second sub-window the third video image sequence captured by the third camera of the third video processing device, where the third video processing device is the video processing device indicated by the target device identifier.
  • In the embodiment of the present application, the third input may include, but is not limited to: a click input by the user on the target device identifier, a voice command input by the user, or a specific gesture input by the user; the embodiment of the present application is not limited thereto.
  • The specific gesture in the embodiment of the present application may be any one of a single-click gesture, a sliding gesture, a drag gesture, a pressure recognition gesture, a long-press gesture, an area change gesture, a double-press gesture, and a double-click gesture; the click input in the embodiment may be a single-click input, a double-click input, or any number of click inputs, and may also be a long-press input or a short-press input.
  • The device identifiers can be the identification information of each mobile phone displayed on the mobile phone search interface shown in Figure 2D, here mobile phone A, mobile phone B, and mobile phone C; the mobile phone search interface can also display the control 31 of mobile phone M. The identification information of each mobile phone can be displayed on the mobile phone search interface according to the distance and orientation of each mobile phone relative to mobile phone M; for example, mobile phone C is closest to mobile phone M, followed by mobile phone B, and mobile phone A is farthest.
  • The user can drag the device identifier of the mobile phone to be connected (taking mobile phone C, that is, the third video processing device here, as an example) to the to-be-connected area 32 and then click the "connect" control 33, whereupon mobile phone M applies to connect to mobile phone C.
  • Mobile phone C receives the connection application from mobile phone M; when the user clicks "Agree" in the multi-camera video editing mode of mobile phone C, two-way communication between mobile phone C and mobile phone M is established. In addition, because mobile phone C does not need to do multi-camera video editing itself, it only needs to provide the video image sequence recorded in real time from its shooting angle and transmit it to mobile phone M; therefore, mobile phone C can exit the multi-camera video editing mode, and only the preview screen of the video shot by mobile phone C needs to be displayed on mobile phone C.
  • The user of mobile phone M can also click other sub-windows in Figure 2C to connect to more mobile phones through WiFi for video recording and editing. For example, mobile phone M and mobile phone B also establish a WiFi connection, and mobile phone M and mobile phone A also establish a WiFi connection. Mobile phone A, mobile phone B, and mobile phone C, having established WiFi connections with mobile phone M, can transmit their respective real-time recorded videos to mobile phone M through the WiFi connections in real time.
  • The sub-window 22 (e.g., the second sub-window) in the multi-camera video editing mode interface of mobile phone M is used to display the preview picture of the video recorded by mobile phone C (e.g., the third video processing device); the sub-window 23 (e.g., the first sub-window) is used to display the preview picture of the video recorded by mobile phone B (e.g., the second video processing device); the sub-window 24 is used to display the preview picture of the video recorded by mobile phone A; and the main window 21, in the initial state, displays the preview screen of the video recorded by mobile phone M.
  • The manner of displaying the video image sequence of the second video processing device is similar to the manner, exemplified here, of displaying the video recorded by the camera of mobile phone C in sub-window 22, and will not be repeated one by one.
  • The main window is initially used to display the preview screen of the video recorded by the first video processing device, i.e. mobile phone M, and the sub-windows are used to display the preview screens of the videos recorded by the other mobile phones communicatively connected to mobile phone M. Alternatively, the main window may initially display no device's video at all, in which case the preview image of the video recorded by mobile phone M is also displayed in a sub-window.
  • In this way, by displaying the device identifiers indicating the video processing devices communicatively connected to the first video processing device, receiving the user's third input on a target device identifier, and, in response to the third input, displaying in the sub-window the third video image sequence captured by the third camera of the indicated video processing device, video recording in a multi-camera mode is realized, and video images from different camera positions can be edited to generate the target video. By connecting multiple video processing devices to the first video processing device, the video can be edited while the recorded video is being displayed, and the video displayed in the main window serves as the target video (what you see is what you get), simplifying the complexity of video editing.
  • The preview images of the videos recorded in real time by the other video processing devices are displayed in the sub-windows of the video preview interface of the first video processing device, with different sub-windows displaying the preview images of videos recorded by different video processing devices, so that the different videos captured by the different devices can be distinguished through the different sub-windows. On the basis of the video recording function, through the mutual communication of multiple video processing devices, the video can be edited while being recorded, and the video picture of the main window can be viewed in a what-you-see-is-what-you-get manner, which simplifies the complexity of video editing. In addition, by displaying the preview screen of the video recorded in real time by the first video processing device in the main window, the initial video segment of the edited target video comes from the video data recorded by the first video processing device.
  • The main window may be larger in size than the sub-windows and located near the center of the video preview interface, so as to make it easy for users to browse the video content displayed in the main window.
  • Optionally, the method according to the embodiment of the present application may further include: determining the relative shooting orientation of the third camera according to the image content of the third video image sequence and of the first video image sequence, where the relative shooting orientation is the shooting orientation of the third camera relative to the first camera; and then determining the target display position of the second sub-window according to the relative shooting orientation.
  • The user can also change the position of a sub-window by dragging a sub-window that is displaying video content onto another sub-window (which may or may not be displaying video content); alternatively, the position of the sub-window is determined according to the relative shooting orientation.
  • In Fig. 2H, sub-window 22 shows the video picture of mobile phone A, sub-window 23 shows the video picture of mobile phone B, and the main window 21 shows the video picture of the current mobile phone, that is, mobile phone M. The user can move sub-window 23 to any small window on the right side of the main window 21, which conveniently indicates the shooting orientation of the camera corresponding to the video picture in sub-window 23 relative to the shooting orientation of the camera corresponding to the video picture in the main window 21.
  • As described above, the user can click sub-window 22 in Fig. 2C to trigger the display of the interface in Fig. 2D, thereby connecting the first video processing device and the third video processing device by operating on Fig. 2D; the sub-window 22 (second sub-window) is then used to display the third video image sequence captured by the third camera of the third video processing device.
  • The shooting orientation of the third camera relative to the first camera can be determined according to the image content of the third video image sequence and the image content of the first video image sequence, and the second sub-window can thereby be adjusted to be displayed at the corresponding position; the position of the sub-window can be adjusted automatically or manually according to the relative shooting orientation. That is, the shooting orientation of the third camera relative to the first camera may be determined from the image content of the third and first video image sequences, and the target display position of the second sub-window may then be determined based on this shooting orientation and the position of the main window, so that the user can identify the shooting angle of the camera corresponding to each sub-window through the relative positional relationship between that sub-window and the main window in the video preview interface.
  • For example, if mobile phone M is shooting the object from directly in front, and mobile phone C is shooting the object from the northwest of mobile phone M, the video content shot by mobile phone C can be displayed in sub-window 22.
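The embodiment says the sub-window position is chosen according to the relative shooting orientation but gives no concrete rule. One hypothetical rule, sketched below, maps a relative bearing of the third camera (with respect to the first camera's viewing direction) to a side of the main window; the function name and the bearing convention are assumptions for illustration only:

```python
# Illustrative only: map a relative bearing of the third camera to a side of
# the main window, so the sub-window's position hints at the shooting angle.

def slot_for_bearing(bearing_deg):
    """Map a relative bearing in degrees (0 = same direction as the first
    camera, measured clockwise) to a side of the main window."""
    bearing = bearing_deg % 360
    if 45 <= bearing < 135:
        return "right"
    if 135 <= bearing < 225:
        return "bottom"
    if 225 <= bearing < 315:
        return "left"
    return "top"
```

A camera shooting from the viewer's right would then get a sub-window on the right of the main window, and so on; any finer-grained placement (e.g. the specific slot 22 in Fig. 2H) would refine this rule.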
  • Optionally, the video processing method in the embodiment of the present application further includes: receiving a fourth input from the user on the video preview interface; in a case where the first video image sequence and the second video image sequence are sequences of video images captured in real time during recording, controlling, in response to the fourth input, the first camera and the second camera to stop capturing video images; and in a case where the video images in the first video image sequence and the second video image sequence are video images in a recorded first video and a recorded second video, stopping, in response to the fourth input, the playing of the first video and the second video.
  • the fourth input may include, but is not limited to: the user's click input on the video preview interface, or a voice command input by the user, or a specific gesture input by the user, which is not limited in this embodiment of the present application.
  • the specific gesture in the embodiment of the present application can be any one of a single-click gesture, a sliding gesture, a drag gesture, a pressure recognition gesture, a long-press gesture, an area change gesture, a double-press gesture, and a double-click gesture;
  • the click input in the example may be a single click input, a double click input, or any number of click inputs, etc., and may also be a long press input or a short press input.
  • the main window 21 has a preset control 31 , and the start and stop of video recording can be controlled by clicking the preset control 31 .
  • by clicking the preset control 31, mobile phone M can control mobile phone A, mobile phone B, and mobile phone C connected with mobile phone M, as well as mobile phone M itself, to start or end video recording, thereby controlling the start or end of the video recording of each mobile phone.
  • the traditional video recording interface of mobile phone A has a control 41; the user of mobile phone A can start or end the video recording of mobile phone A by clicking the control 41.
  • the first video processing device and other video processing devices communicatively connected with the first video processing device can be controlled to start or stop (including pause) video recording in a unified manner.
  • through a one-key operation on the main window, unified control of multiple devices can be realized.
  • the window for displaying video data of other video processing devices also has preset controls for controlling other video processing devices.
  • the video recording status of the processing device includes the recording status and the recording pause status.
  • the status of the control 32 in the sub-window 22 indicates that the mobile phone A is currently recording
  • the status of the control 33 in the sub-window 23 indicates that mobile phone B is currently in the recording-paused state
  • the state of the control 34 in the sub-window 24 indicates that the mobile phone C is currently in the pause recording state.
  • the recording state of the video processing device can be controlled through the preset control, and the user can intuitively see whether the video processing device is recording or has paused recording through the state of the preset control.
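The one-key unified control described above can be sketched as follows. This is a minimal illustration under stated assumptions (class and method names are invented for the sketch), not the embodiment's actual control protocol.

```python
# Hypothetical sketch: one click on the main window's preset control toggles
# recording on the local device and on every connected device; individual
# sub-window controls toggle one device at a time.
class RecorderGroup:
    def __init__(self, devices):
        # device name -> is it currently recording?
        self.recording = {d: False for d in devices}

    def toggle_all(self):
        """One-key control: start all if none are recording, else stop all."""
        start = not any(self.recording.values())
        for d in self.recording:
            self.recording[d] = start
        return start

    def toggle_one(self, device):
        """Per-sub-window control: pause or resume a single device."""
        self.recording[device] = not self.recording[device]
        return self.recording[device]
```

A click on control 31 would map to `toggle_all()`, while clicks on controls 32-34 in the sub-windows would map to `toggle_one()` for the corresponding phone.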
  • when step 103 is executed, a second input from the user on the first sub-window may be received; then, in response to the second input, the display contents of the main window and the first sub-window are exchanged; finally, at least one frame of the first video image and at least one frame of the second video image displayed in the main window are spliced to obtain the target video.
  • the second input may include, but is not limited to: the user's click input on the first sub-window, or a voice command input by the user, or a specific gesture input by the user, which is not limited in this embodiment of the present application.
  • the second input may also be an input causing a partial window area overlap between the sub-window and the main window, for example, an input of dragging the first sub-window to the main window.
  • the specific gesture in the embodiment of the present application can be any one of a single-click gesture, a sliding gesture, a drag gesture, a pressure recognition gesture, a long-press gesture, an area change gesture, a double-press gesture, and a double-click gesture;
  • the click input in the example may be a single click input, a double click input, or any number of click inputs, etc., and may also be a long press input or a short press input.
  • the video processing devices for the video data displayed in the first sub-window and the main window can be exchanged, and the first video segment and the second video segment displayed in the main window can be spliced in sequence to generate the target video, wherein the first video segment and the second video segment are video data from different video processing devices displayed in the main window.
  • the video processing device corresponding to the video data displayed in the main window may be other video processing devices, or may be the first video processing device.
  • the video data of other video processing devices is taken as an example, but the present application is not limited thereto.
  • the video data of the first video processing device may also be displayed in the sub-window in an initial state, that is, before receiving the first input.
  • the main window displays the video screen of the target video that has been finally recorded and edited, and what you see is what you get.
  • the sub-window displays pictures taken by other video processing devices.
  • if the user of mobile phone M finds that the video content of another mobile phone connected to mobile phone M is more suitable and needs to be added to the target video, the user can drag the corresponding sub-window to the position of the main window to trigger video clipping.
  • the user of the mobile phone M drags the sub-window 22 to the main window 21 in the direction of the arrow to switch camera positions.
  • the input time point of the first input is t1, before t1, the main window plays the video content recorded by mobile phone M, such as the first video clip (including at least one frame of the first video image);
  • after t1, the video content recorded by mobile phone A corresponding to the dragged sub-window, for example, the second video clip (including at least one frame of the second video image), is played through the main window. Therefore, through the first input, the display content of the main window can be switched from the first video segment to the second video segment, and the content displayed in the main window is the final recorded target video.
  • the splicing is performed according to time sequence, and the first video segment and the second video segment are spliced to obtain the target video.
  • the target video can also be obtained through multiple first inputs.
  • the sub-window can be dragged to the main window, i.e. the first input, so that in response to the first input, the video content displayed in the sub-window is added to the target video;
  • another sub-window can then be dragged to the main window as a further first input, so that in response to that input, the video content displayed by the other sub-window is also added to the target video.
  • the content displayed in the main window is saved to the main mobile phone to obtain the target video. For example, in a sports scene, you can switch to a suitable shooting angle for video recording at any time.
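The camera-switching and splicing logic above can be sketched as follows: each drag of a sub-window onto the main window records a switch event, and the target video is the sequence of segments that were shown in the main window, spliced in time order. This is an illustrative sketch (function and field names are assumptions), not the disclosed implementation.

```python
# Hypothetical sketch of splicing by switch events. Each event is
# (time, device): the device whose video occupies the main window from
# that time until the next switch (or the end of recording).
def build_segments(switch_events, total_duration):
    """switch_events: list of (time, device) sorted by time; the first entry
    is (0, initial_device). Returns (device, start, end) segments whose
    concatenation in time order forms the target video."""
    segments = []
    for i, (t, device) in enumerate(switch_events):
        end = (switch_events[i + 1][0]
               if i + 1 < len(switch_events) else total_duration)
        if end > t:  # skip zero-length segments (e.g. two drags at once)
            segments.append((device, t, end))
    return segments
```

For example, if mobile phone M fills the main window from 0 to t1 = 5.0s and mobile phone A from 5.0s until recording stops at 12.0s, the target video is the two segments spliced in that order.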
  • the first video image sequence captured by the first camera of the first video processing device may be displayed in the main window of the video preview interface, and the second video image sequence may be displayed in the first sub-window of the video preview interface.
  • the target video can be obtained based on the content displayed in the main window.
  • the videos displayed in the main window are sequentially spliced according to the display order to obtain the target video, so that the same scene can be recorded based on at least two video processing devices, which not only reduces the operation difficulty and complexity of video editing, but also improves the video processing efficiency.
  • multiple video processing devices can be used for video recording, and the video data recorded in real time by different video processing devices are displayed in different windows of the video preview interface.
  • the input to the main window can realize camera position switching during video recording; when recording video, the video data from different video processing devices displayed in the main window, that is, the first video data and the second video data, are spliced according to the display order in the main window, so that the same scene can be recorded based on at least two video processing devices, which not only reduces the operation difficulty and complexity of video editing, but also improves the video processing efficiency.
  • the user can switch the camera position in real time by dragging different sub-windows to the main window, which improves the user's operability during the video recording process.
  • the method in the embodiment of the present application may further include: saving the target video and the video recorded by each video processing device when the video processing device corresponding to each window in the shooting preview interface stops video recording.
  • for each video segment spliced into the target video, the mapping relationship between the corresponding time points of the video segment in the target video and the corresponding time points in the video data recorded by the associated video processing device is saved.
  • since the first video processing device in the embodiment of the present application establishes a communication connection with other video processing devices, after the other video processing devices start recording, the first video processing device can receive the video content recorded in real time by the other video processing devices; when all video processing devices stop recording, the video data recorded by each video processing device is saved, and the obtained target video is saved.
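The saved mapping relationship described above can be sketched as a simple data structure: for each spliced segment, its time span in the target video is linked to the corresponding span in the source device's full recording. Field names here are assumptions made for illustration, not the embodiment's actual storage format.

```python
# Hypothetical sketch of the time-point mapping saved alongside the target
# video. Each entry links a segment's span in the target video to the span
# in the associated device's complete recording.
def build_splice_map(segments):
    """segments: list of (device, target_start, target_end, source_start),
    in seconds. Returns one mapping entry per spliced segment."""
    return [
        {
            "device": device,
            "target": (t_start, t_end),
            "source": (s_start, s_start + (t_end - t_start)),
        }
        for device, t_start, t_end, s_start in segments
    ]
```

Keeping this mapping is what later makes splice fine-tuning possible: extra frames for a segment can be fetched from the source recording on either side of its saved source span.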
  • the method according to the embodiment of the present application may further include: receiving a fifth input from the user on the target video; in response to the fifth input, displaying a video adjustment window, the video adjustment window including an adjustment control, at least one first video thumbnail, and at least one second video thumbnail, wherein the at least one first video thumbnail is a thumbnail of the at least one frame of first video image, the at least one second video thumbnail is a thumbnail of the at least one frame of second video image, and the adjustment control is used to update the video frames of the target video; then, receiving a sixth input from the user on the adjustment control; in response to the sixth input, updating the display position of the adjustment control, and updating the video frames of the target video according to the updated display position of the adjustment control.
  • for the implementation of the fifth input and the sixth input in this embodiment and the seventh input and the eighth input in the following embodiments, reference may be made to the relevant exemplary description of the first input above; the principles are similar and will not be repeated here.
  • the above-mentioned target video may be saved in the photo album of the mobile phone M, and the user clicks an edit control (ie, the fifth input) on the target video saved in the photo album.
  • the video adjustment window of the target video shown in FIG. 2M can then be entered.
  • the video adjustment window includes a main playback progress bar of the target video, wherein the main playback progress bar includes a preset identifier 53, and the preset identifier 53 moves on the playback progress bar as the video playback progress in the video playback window changes.
  • the preset identifiers in this application are used to indicate information in the form of text, symbols, images, and the like; controls or other containers may serve as carriers for displaying the information, including but not limited to text identifiers, symbol identifiers, and image identifiers.
  • the video adjustment window includes a sub-playing progress bar of each video clip in the target video, wherein a movable adjustment control is displayed at the joint of different video clips.
  • exemplarily, as shown in FIG. 2M, the video adjustment window includes a video playback window 54 for displaying the picture of the target video, and a main playback progress bar 52 on which a preset identifier 53 moves with the playback time.
  • the target video is sequentially spliced from a piece of video A, a piece of video B, and a piece of video C.
  • the video editing interface also includes a plurality of sub-play progress bars above the main play progress bar 52.
  • the sub-play progress bars of the video clips that form the target video can be divided into multiple rows according to the order of play time from front to back.
  • an adjustment control 51 can also be included,
  • the adjustment control 51 can be understood as a fine-tuning control, and the time point of camera position switching in the progress bar of the complete target video can correspond to a movable adjustment control.
  • the recorded target video is composed of three segments, video A, video B, and video C, and thus includes two adjustment controls 51: one is used to adjust the video frames at the splicing place of video A and video B, and the other is used to adjust the video frames at the splicing place of video B and video C.
  • dragging the preset identifier 53 can control the playback progress of the video in the video playback window 54; in addition, clicking the preset identifier 53 can toggle the video in the video playback window 54 between pausing and continuing to play, and the display pattern of the preset identifier 53 may differ between the two states.
  • the preset identifier on the main playback progress bar in the video editing interface can not only control the playback progress of the video in the video playback window by being moved, but can also be clicked to change the playback status of the video in the video playback window.
  • the video frames at the splicing location can be adjusted through the adjustment control; the thumbnails of the video frames at the splicing location are respectively displayed on the left and right sides of the adjustment control.
  • two thumbnails can be displayed on both sides of the adjustment control 51 between video A and video B, including: the thumbnail 71 of the last frame image of video A located above the sub-play progress bar 61, and the thumbnail 72 of the first frame image of video B located above the sub-play progress bar 62; in addition, FIG. 2M also shows two thumbnails of another video frame at a splicing place, which will not be repeated here.
  • the adjustment control 51 has not moved yet; after the adjustment control 51 moves left or right along the direction of the arrow in FIG. 2M, the position where the adjustment control 51 stays can correspond to the splicing point of different video clips, and the thumbnails of the two frames of images from the different video clips at that splicing point are also displayed on the left and right sides of the adjustment control 51.
  • since the thumbnails of the two frames of images at the splicing place of the two video clips are displayed, when the user moves the adjustment control to fine-tune the target video, the user can judge whether the splicing of the video pictures is suitable by browsing the two thumbnails at the splicing place.
  • the adjustment control 51 can be moved left or right to trigger the adjustment of the splicing of different video clips in the target video; after the adjustment, the save control 55 can be clicked to update the target video.
  • when the step of updating the video frames of the target video is performed, it may be implemented by at least one of the following steps: updating the spliced end video frame of the first video image sequence; updating the spliced start video frame of the second video image sequence; increasing or decreasing spliced video frames of the first video image sequence; increasing or decreasing spliced video frames of the second video image sequence.
  • the spliced video frame indicates a video frame used for splicing the target video during switching.
  • for example, if the adjustment control 51 corresponding to the splicing of video A and video B in FIG. 2M is moved to the left by a progress-bar length corresponding to a duration of 2s, the video frames at the splicing end position of video A (i.e., the first video image sequence here) in the target video need to be updated: optionally, the spliced video frames of video A are reduced, here by removing the video frames in the last 2s of video A; correspondingly, the video frames at the splicing start position of video B are updated: optionally, the corresponding video frames are obtained from the original video to which video B belongs and added to the second video image sequence.
  • the above takes moving the adjustment control 51 to the left as an example for illustration.
  • if the adjustment control 51 is moved to the right, the method is similar and will not be repeated here.
  • taking moving the adjustment control 51 between the two sub-play progress bars of video A and video B in the above FIG. 2M as an example: by moving the adjustment control 51 to the left, the adjustment control 51 moves closer to the sub-play progress bar of video A and farther from the sub-play progress bar of video B; there is a preset mapping relationship between the moving distance and the number of video frames, and the target number of frames to be adjusted can be determined based on the moving distance. For example, if the target is 3 frames, 3 video frames can be removed from the end of video A and 3 video frames added to the head of video B, where the data source of the added 3 video frames is the complete original video data recorded by the video processing device corresponding to video B.
  • optionally, after the adjustment, the video clip near the splicing point (for example, the 1s-3s video clip) is played through the video playback window 54.
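The fine-tuning step above can be sketched as follows. The distance-to-frame mapping constant and all names are assumptions for illustration; the embodiment only states that such a preset mapping exists.

```python
# Hedged sketch of splice fine-tuning: the drag distance of the adjustment
# control maps to a number of frames; that many frames are removed from the
# tail of the earlier clip and refilled at the head of the later clip from
# the later device's complete original recording.
PIXELS_PER_FRAME = 10  # hypothetical preset distance-to-frames mapping

def adjust_splice(clip_a, clip_b_original, clip_b_start, drag_px):
    """Move the splice left by drag_px pixels. clip_a holds the frames of
    the earlier segment; clip_b_original is the full recording of the later
    device; clip_b_start indexes where the later segment began in it."""
    n = drag_px // PIXELS_PER_FRAME
    new_a = clip_a[:-n] if n else clip_a   # drop n frames from A's tail
    new_b_start = clip_b_start - n         # pull in n earlier frames of B
    new_b = clip_b_original[new_b_start:]
    return new_a, new_b
```

Moving the control right would mirror this: frames are restored to A's tail from A's original recording and trimmed from B's head.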
  • in this way, when adjusting the splicing position of the target video, the spliced video frames and the adjustment result can be previewed, so as to ensure that the effect desired by the user can be achieved after fine-tuning the video.
  • the method in this embodiment of the present application may further include:
  • Step 201: receiving a seventh input from a user on the target video.
  • in addition to the target video, the processing object can also be other videos, for example, videos recorded by the first video processing device, the second video processing device, or other video processing devices, or other videos downloaded from the Internet, etc.
  • Step 202: in response to the seventh input, displaying a first video editing window of the target video on the first video processing device.
  • Step 203: receiving an eighth input from the user on the first video editing window.
  • Step 204: in response to the eighth input, updating the target video according to editing information, where the editing information is determined according to the eighth input.
  • Step 205: sending the editing information to a second video processing device, so that the second video processing device synchronously updates the target video according to the editing information.
  • mobile phone M can send the target video to mobile phone A, mobile phone B, and mobile phone C, so that these three mobile phones can also obtain the target video.
  • the connection method between mobile phone M and the other video processing devices is similar to the example above; in terms of triggering, it can also be as follows: as shown in FIG. 4A, on mobile phone M, the user opens the target video in the photo album and clicks the multi-device collaborative editing control 82, and then the target video in the window 81 can be edited synchronously by multiple devices.
  • mobile phone A, mobile phone B, and mobile phone C also display the interface shown in Figure 4A, and all mobile phones successfully connected with mobile phone M all display the target video, and the editing options are the same.
  • FIG. 4A shows various editing options, which will not be repeated here.
  • the mobile phone M can share the editing information corresponding to the editing option to the mobile phone A, mobile phone B, and mobile phone C.
  • when mobile phone A, mobile phone B, or mobile phone C selects an editing option to edit the target video, it can also synchronously send the editing information corresponding to the editing option to mobile phone M in real time, and mobile phone M can share the received editing information with the other mobile phones synchronously; therefore, the editing information is shared among the four mobile phones.
  • mobile phone A can add subtitles to the video
  • mobile phone B can adjust the filter for the video
  • mobile phone C can edit the duration of the video.
  • the mobile phone M can be used to share the editing information on each mobile phone side, and then various editing operations performed by different mobile phones on the target video can be displayed synchronously.
  • the mobile phone M can display the edited preview image in window 81; in addition, since other mobile phones have also edited the target video, window 81 can also preview the video effects edited by the other mobile phones on the target video.
  • mobile phone M saves the edited video by clicking the save control in FIG. 4A or FIG. 4B; the other mobile phones can likewise save the edited video and synchronize the saved video to mobile phone M.
  • multiple video processing devices can be used to edit the target video, and the multiple video processing devices can perform multiple video editing operations with different functions, so as to meet the user's needs for multi-person collaboration in the process of editing videos and to improve editing efficiency.
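The synchronization scheme described above can be sketched as a simple hub model: mobile phone M rebroadcasts each device's editing operation to every connected device, so all devices converge on the same ordered list of edits. Class and method names are illustrative assumptions, not the disclosed protocol.

```python
# Hypothetical sketch: mobile phone M acts as a hub that applies each
# submitted edit to every device's log, including its own, so that all
# four phones see the same sequence of editing operations.
class EditSyncHub:
    def __init__(self, devices):
        # per-device ordered edit log; "M" is the hub itself
        self.logs = {d: [] for d in devices}

    def submit(self, source, edit):
        """A device submits an edit; the hub shares it with every device."""
        for log in self.logs.values():
            log.append((source, edit))

    def log_of(self, device):
        return self.logs[device]
```

In the example above, phone A adding subtitles, phone B adjusting a filter, and phone C trimming the duration would each pass through `submit()`, leaving every phone with an identical edit history.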
  • if the editing operation being performed by one mobile phone will affect the editing operations of other mobile phones, it can be marked and prompted.
  • for example, if mobile phone A is editing the duration of the video, the other mobile phones mark the deleted time period in the progress bar with a special color (optionally, in other embodiments, the added time period may also be marked); the special color in the progress bar on the other mobile phones indicates that the video segment was cut.
  • the editing function on the other mobile phones is set to gray, prompting other users that the editing function corresponding to the gray control is being processed by another mobile phone; if the gray editing function control is clicked, it will prompt that another user is already using the editing function.
  • as shown in FIG. 4B, it is assumed that the editing function of the gray "Music" control is being performed by mobile phone A, and the editing function of the "Beauty" control is being performed by mobile phone B; therefore, in FIG. 4B, both controls are displayed in gray.
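The exclusive-function marking described above can be sketched as a lock table: when one device starts using an editing function, it is held on that device's behalf, and other devices see the corresponding control grayed out and are refused if they click it. All names are assumptions for illustration.

```python
# Hypothetical sketch of per-function locking across collaborating devices:
# e.g. "Music" held by phone A grays out the Music control on phones B, C, M.
class EditFunctionLocks:
    def __init__(self):
        self.held = {}  # function name -> device currently holding it

    def acquire(self, function, device):
        owner = self.held.get(function)
        if owner is not None and owner != device:
            return False  # refused: another device is using this function
        self.held[function] = device
        return True

    def release(self, function, device):
        if self.held.get(function) == device:
            del self.held[function]

    def is_grayed(self, function, device):
        """Should this device render the function's control in gray?"""
        owner = self.held.get(function)
        return owner is not None and owner != device
```

A click on a grayed control would call `acquire()`, get `False`, and trigger the "another user is already using the editing function" prompt.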
  • the video processing method provided in the embodiment of the present application may be executed by a video processing device, or a control module in the video processing device for executing the video processing method.
  • the video processing device provided in the embodiment of the present application is described by taking the video processing device executing the video processing method as an example.
  • the first video processing device 300 includes:
  • the first receiving module 301 is configured to receive a user's first input to the first video processing device
  • the first display module 302 is configured to display the first video image sequence captured by the first camera of the first video processing device on the video preview interface in response to the first input;
  • the generating module 303 is configured to generate a target video according to the first video image sequence and the second video image sequence when the video preview interface displays the second video image sequence captured by the second camera of the second video processing device;
  • the target video includes at least one frame of the first video image in the first video image sequence and at least one frame of the second video image in the second video image sequence.
  • video image sequences collected by cameras of different video processing devices may be displayed on the video preview interface, and video clipping may be performed according to the first video image sequence and the second video image sequence respectively collected by the cameras of the different video processing devices, to generate a target video comprising at least one frame of the first video image and at least one frame of the second video image, wherein the at least one frame of the first video image is from the first video image sequence generated by the first video processing device, and the at least one frame of the second video image is from the second video image sequence generated by the second video processing device.
  • the video processing method of the embodiment of the present application can perform video clipping on different video image sequences generated by different video processing devices when the video preview interface displays the video image sequences collected by the cameras of the different video processing devices, thereby generating the target video without the need for professional video editing software, which reduces the difficulty and complexity of video editing operations.
  • the video preview interface includes a main window and a first sub-window, the main window is used to display the first video image sequence, and the first sub-window is used to display the second video image sequence;
  • the generating module 303 includes:
  • a first receiving submodule configured to receive a second input from a user on the first subwindow
  • an exchange sub-module configured to exchange the display content in the main window and the first sub-window in response to the second input
  • the splicing sub-module is configured to splice at least one frame of the first video image and at least one frame of the second video image displayed in the main window to obtain the target video.
  • the display content of the main window and the first sub-window can be exchanged by performing a second input on the first sub-window, so that the main window is switched to display the video captured by the second camera.
  • the target video is obtained based on the content displayed in the main window.
  • the videos displayed in the main window are sequentially spliced according to the display order to obtain the target video.
  • the same scene can be recorded based on at least two video processing devices. It not only reduces the operation difficulty and complexity of video editing, but also improves the video processing efficiency.
  • the first video processing device 300 further includes:
  • the second display module is used to display at least one device identification, and the device identification is used to indicate a video processing device connected to the first video processing device in communication;
  • a second receiving module configured to receive a third input from a user on a target device identifier in the at least one device identifier
  • the third display module is configured to display, in response to the third input, a third video image sequence captured by a third camera of a third video processing device in a second sub-window, where the third video processing device is the video processing device indicated by the target device identifier.
  • by displaying the device identifier used to indicate the video processing device communicatively connected with the first video processing device, receiving the user's third input on the target device identifier among the device identifiers, and, in response to the third input, displaying in the sub-window the third video image sequence collected by the third camera of the video processing device indicated by the target device identifier, video recording in a multi-camera mode is realized, and video images of different camera positions can be clipped to generate the target video; through communication with the first video processing device, multiple video processing devices can realize the function of clipping the video while displaying the recorded video, and the video displayed in the main window can serve as the target video (what you see is what you get), simplifying the complexity of video editing.
  • the first video processing device 300 further includes:
  • the first determination module is configured to determine the relative shooting orientation of the third camera according to the image content of the third video image sequence and the first video image sequence, where the relative shooting orientation is the shooting orientation of the third camera relative to the first camera;
  • the second determining module is configured to determine the target display position of the second sub-window according to the relative shooting orientation.
  • the shooting orientation of the third camera relative to the first camera may be determined according to the image content of the third video image sequence and the first video image sequence; then, based on the shooting orientation and the position of the main window, the target display position of the second sub-window is determined, so that the user can identify the shooting angle of the camera corresponding to each sub-window through the relative positional relationship between the second sub-window and the main window in the video preview interface.
  • the first video processing device 300 further includes:
  • a third receiving module configured to receive a fourth input from the user on the video preview interface
  • the first control module is configured to, when the first video image sequence and the second video image sequence are video image sequences collected in real time during video recording, control, in response to the fourth input, the first camera and the second camera to stop collecting video images;
  • the second control module is configured to, when the first video image sequence is video images in a recorded first video and the second video image sequence is video images in a recorded second video, stop playing the first video and the second video in response to the fourth input.
  • the first video processing device 300 further includes:
  • a fourth receiving module configured to receive a fifth input from the user on the target video
  • the fourth display module is configured to display a video adjustment window in response to the fifth input, and the video adjustment window includes adjustment controls, at least one first video thumbnail and at least one second video thumbnail; the at least one The first video thumbnail is the thumbnail of the at least one frame of the first video image, the at least one second video thumbnail is the thumbnail of the at least one frame of the second video image, and the adjustment control is used to update the The video frame of the target video;
  • a fifth receiving module configured to receive a sixth input from the user on the adjustment control
  • a first updating module configured to update the display position of the adjustment control in response to the sixth input, and update the video frame of the target video according to the updated display position of the adjustment control.
  • the video frames at the splicing point in the target video can be adjusted, and the user can accurately adjust the position of the adjustment control by browsing the thumbnails of the video frames at the splicing point, thereby achieving the purpose of accurately adjusting the splicing in the target video.
  • the first update module is further configured to perform at least one of the following steps:
  • updating the splicing end video frame of the first video image sequence; updating the splicing start video frame of the second video image sequence; increasing or decreasing the spliced video frames of the first video image sequence; and increasing or decreasing the spliced video frames of the second video image sequence.
  • In this way, the start video frame and the end video frame at the splicing location can be increased, decreased, or updated according to the actual needs of the user, so that the video frames at the splicing location fit together better.
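The four splice-point operations listed above can be sketched as a small frame-list manipulation. This is an illustrative sketch only, not part of the application: the function name `move_splice`, the list-of-frames representation, and the rule that the target video's total length is preserved are all assumptions.

```python
def move_splice(seq_a, seq_b, src_a, src_b, offset):
    """Shift the splice point between two spliced sequences by `offset` frames.

    seq_a / seq_b: the spliced frame lists of the first and second video
    image sequences; src_a / src_b: the full recordings they were cut from.
    A negative offset trims frames from the end of seq_a (decreasing its
    spliced frames) and restores the matching frames from src_b to the head
    of seq_b; a positive offset extends seq_a's splicing end frame and
    advances seq_b's splicing start frame. Total length is preserved.
    """
    if offset < 0:
        n = -offset
        seq_a = seq_a[:-n]                       # decrease seq_a's spliced frames
        start = len(src_b) - len(seq_b) - n      # new splicing start in src_b
        seq_b = src_b[start:start + n] + seq_b   # increase seq_b's spliced frames
    elif offset > 0:
        end = len(seq_a)
        seq_a = seq_a + src_a[end:end + offset]  # update seq_a's splicing end frame
        seq_b = seq_b[offset:]                   # update seq_b's splicing start frame
    return seq_a, seq_b
```

For example, with two six-frame halves, an offset of -2 moves two frames of screen time from the first sequence to the second while the combined length stays constant.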
  • the first video processing device 300 further includes:
  • a sixth receiving module configured to receive a seventh input from the user on the target video;
  • a fifth display module configured to display a first video editing window of the target video on the first video processing device in response to the seventh input;
  • a seventh receiving module configured to receive an eighth input from the user on the first video editing window;
  • a second update module configured to update the target video according to editing information in response to the eighth input, where the editing information is determined according to the eighth input;
  • a sending module configured to send the editing information to a second video processing device, so that the second video processing device synchronously updates the target video according to the editing information.
  • In this way, multiple video processing devices can be used to edit the target video, each performing video editing operations with different functions, which meets the user's need for multi-person collaboration while editing videos and improves editing efficiency.
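The collaborative flow above — determine editing information from an input, update the target video locally, send the editing information to the second device, which applies the same update — can be sketched as follows. The `op`/`start`/`end` field names and the JSON transport are invented for illustration; the application does not specify the wire format.

```python
import json

def apply_edit(video, edit):
    """Apply one editing operation to `video` (modeled as a list of frames)."""
    if edit["op"] == "trim":
        return video[edit["start"]:edit["end"]]
    if edit["op"] == "reverse":
        return video[::-1]
    raise ValueError("unknown edit op: " + str(edit["op"]))

def make_packet(edit):
    """First device: serialize the editing information for transmission."""
    return json.dumps(edit).encode("utf-8")

def receive_packet(video, packet):
    """Second device: decode the editing information and update its own copy,
    so both devices end up with an identical target video."""
    return apply_edit(video, json.loads(packet.decode("utf-8")))
```

Because both sides run the same `apply_edit` on the same editing information, the copies stay synchronized without retransmitting any video data.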
  • the video processing device in this embodiment of the present application may be a device, or may be a component, an integrated circuit, or a chip in a terminal.
  • the device may be a mobile electronic device or a non-mobile electronic device.
  • the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a handheld computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a personal digital assistant (PDA), etc.
  • the non-mobile electronic device may be a personal computer (PC), a television (TV), a teller machine, or a self-service machine, etc., which is not specifically limited in this embodiment of the present application.
  • the video processing device in this embodiment of the present application may be a device with an operating system.
  • the operating system may be an Android operating system, an iOS operating system, or other possible operating systems, which are not specifically limited in this embodiment of the present application.
  • the video processing device provided in the embodiment of the present application can implement the various processes implemented in the foregoing method embodiments, and details are not repeated here to avoid repetition.
  • An embodiment of the present application further provides an electronic device 2000, including a processor 2002, a memory 2001, and a program or instruction stored in the memory 2001 and executable on the processor 2002. When the program or instruction is executed by the processor 2002, each process of the above video processing method embodiment is implemented, with the same technical effect; to avoid repetition, details are not repeated here.
  • the electronic devices in the embodiments of the present application include the above-mentioned mobile electronic devices and non-mobile electronic devices.
  • FIG. 7 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
  • the electronic device 1000 includes, but is not limited to: a radio frequency unit 1001, a network module 1002, an audio output unit 1003, an input unit 1004, a sensor 1005, a display unit 1006, a user input unit 1007, an interface unit 1008, a memory 1009, a processor 1010, and other components.
  • the electronic device 1000 may also include a power supply (such as a battery) for supplying power to the various components; the power supply may be logically connected to the processor 1010 through a power management system, so that functions such as charging management, discharging management, and power consumption management are realized through the power management system.
  • the structure of the electronic device shown in FIG. 7 does not constitute a limitation on the electronic device; the electronic device may include more or fewer components than shown, combine some components, or use a different component arrangement, and details are not repeated here.
  • the user input unit 1007 is configured to receive a first input from the user to the first video processing device;
  • the display unit 1006 is configured to display the first video image sequence captured by the first camera of the first video processing device on a video preview interface in response to the first input;
  • the processor 1010 is configured to generate a target video according to the first video image sequence and the second video image sequence when the video preview interface displays the second video image sequence captured by the second camera of the second video processing device;
  • the target video includes at least one frame of the first video image in the first video image sequence and at least one frame of the second video image in the second video image sequence.
  • In this way, video image sequences collected by cameras of different video processing devices may be displayed on the video preview interface, and video clipping may be performed according to the first video image sequence and the second video image sequence respectively collected by those cameras to generate a target video comprising at least one frame of the first video image and at least one frame of the second video image, where the at least one frame of the first video image comes from the first video image sequence generated by the first video processing device and the at least one frame of the second video image comes from the second video image sequence generated by the second video processing device.
  • The video processing method of the embodiment of the present application can thus clip the different video image sequences generated by different video processing devices while the video preview interface displays the sequences collected by their cameras, thereby generating the target video without professional video editing software, which reduces the difficulty and complexity of video editing operations.
  • the video preview interface includes a main window and a first sub-window, the main window is used to display the first video image sequence, and the first sub-window is used to display the second video image sequence;
  • a user input unit 1007 configured to receive a second input from the user on the first sub-window;
  • the processor 1010 is configured to, in response to the second input, exchange the display content of the main window and the first sub-window, and to perform video splicing on the at least one frame of the first video image and the at least one frame of the second video image displayed in the main window to obtain the target video.
  • In this way, the display content of the main window and the first sub-window can be exchanged by performing the second input on the first sub-window, so that the main window switches to displaying the video image sequence captured by the second camera and the first sub-window switches to displaying the video image sequence captured by the first camera. Since different video processing devices can shoot the same scene from different angles, this realizes camera-position switching during video recording;
  • the target video is obtained based on the content displayed in the main window.
  • the videos displayed in the main window are spliced sequentially according to their display order to obtain the target video.
  • In this way, the same scene can be recorded by at least two video processing devices, which not only reduces the difficulty and complexity of video editing operations but also improves video processing efficiency.
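One way to read "spliced sequentially according to their display order" is: each camera-switch input is logged, and the target video is the concatenation of whatever segment each device contributed while it occupied the main window. A minimal sketch under two assumptions not stated in the application — the recordings are frame-synchronized, and the switch log stores (frame index, device id) pairs:

```python
def build_target_video(switch_log, recordings, total_frames):
    """Concatenate main-window segments in display order.

    switch_log: [(start_frame, device_id), ...] sorted by start_frame,
                recording which device occupied the main window from when.
    recordings: device_id -> full synchronized frame list for that device.
    """
    target = []
    for i, (start, device) in enumerate(switch_log):
        # a segment ends where the next switch begins, or at the end of recording
        end = switch_log[i + 1][0] if i + 1 < len(switch_log) else total_frames
        target.extend(recordings[device][start:end])  # segment shown in main window
    return target
```

For instance, a single switch from phone M to phone A at frame 3 yields M's first three frames followed by A's remaining frames, matching the what-you-see-is-what-you-get behavior of the main window.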
  • a display unit 1006 configured to display at least one device identifier, where the device identifier is used to indicate a video processing device that is communicatively connected to the first video processing device;
  • a user input unit 1007 configured to receive a third input from the user on a target device identifier among the at least one device identifier;
  • the display unit 1006 is configured to, in response to the third input, display in the second sub-window a third video image sequence captured by a third camera of a third video processing device, the third video processing device being the video processing device indicated by the target device identifier.
  • In this way, by displaying device identifiers indicating the video processing devices communicatively connected to the first video processing device, receiving the user's third input on a target device identifier among them, and, in response to the third input, displaying in the sub-window the third video image sequence collected by the third camera of the indicated video processing device, video recording in a multi-camera mode is realized and video images from different camera positions are clipped to generate the target video. Through communication with the first video processing device, multiple video processing devices can clip the video while the recorded video is displayed, and the video displayed in the main window serves as the target video (what you see is what you get), simplifying the complexity of video editing.
  • the processor 1010 is configured to determine the relative shooting orientation of the third camera according to the image content of the third video image sequence and the first video image sequence, the relative shooting orientation being the shooting orientation of the third camera relative to the first camera, and to determine the target display position of the second sub-window according to the relative shooting orientation.
  • In this way, the shooting orientation of the third camera relative to the first camera may be determined according to the image content of the third video image sequence and the first video image sequence; the target display position of the second sub-window is then determined based on this shooting orientation and the position of the main window, so that the user can identify the shooting angle of the camera corresponding to each sub-window from the relative position of that sub-window and the main window in the video preview interface.
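As a sketch of the final placement step only: the application derives the relative orientation from image content (for example by matching features between the two sequences), which is not reproduced here. Assuming the orientation has already been reduced to an azimuth angle in degrees (0 meaning the same bearing as the first camera, negative meaning to its left), the mapping to a sub-window slot beside the main window could look like this; the angle thresholds and slot names are invented:

```python
def subwindow_slot(relative_azimuth):
    """Map the third camera's bearing relative to the first camera to a
    display slot, so cameras shooting from the left appear in sub-windows
    to the left of the main window, and vice versa."""
    if relative_azimuth < -15:
        return "left-of-main"
    if relative_azimuth > 15:
        return "right-of-main"
    return "below-main"  # roughly the same bearing as the main camera
```

This realizes the stated goal that a sub-window's position alone tells the user which angle its camera is shooting from.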
  • the user input unit 1007 is configured to receive a fourth input from the user on the video preview interface;
  • the processor 1010 is configured to, when the first video image sequence and the second video image sequence are video image sequences collected in real time during video recording, control the first camera and the second camera to stop collecting video images in response to the fourth input; and, when the first video image sequence consists of video images in a recorded first video and the second video image sequence consists of video images in a recorded second video, stop playing the first video and the second video in response to the fourth input.
  • the user input unit 1007 is configured to receive a fifth input from the user on the target video;
  • the display unit 1006 is configured to display a video adjustment window in response to the fifth input, the video adjustment window including an adjustment control, at least one first video thumbnail, and at least one second video thumbnail; the at least one first video thumbnail is a thumbnail of the at least one frame of the first video image, the at least one second video thumbnail is a thumbnail of the at least one frame of the second video image, and the adjustment control is used to update the video frames of the target video;
  • a user input unit 1007 configured to receive a sixth input from the user on the adjustment control;
  • the processor 1010 is configured to update the display position of the adjustment control in response to the sixth input, and update the video frame of the target video according to the updated display position of the adjustment control.
  • In this way, the video frames at the splicing point in the target video can be adjusted: by browsing thumbnails of the video frames at the splicing point, the user can position the adjustment control precisely and thus fine-tune the splice in the target video.
  • the processor 1010 is configured to perform at least one of: updating the splicing end video frame of the first video image sequence; updating the splicing start video frame of the second video image sequence; increasing or decreasing the spliced video frames of the first video image sequence; and increasing or decreasing the spliced video frames of the second video image sequence.
  • In this way, the start video frame and the end video frame at the splicing location can be increased, decreased, or updated according to the actual needs of the user, so that the video frames at the splicing location fit together better.
  • the user input unit 1007 is configured to receive a seventh input from the user on the target video;
  • a display unit 1006, configured to display a first video editing window of the target video on the first video processing device in response to the seventh input;
  • a user input unit 1007 configured to receive an eighth input from the user on the first video editing window;
  • a processor 1010 configured to, in response to the eighth input, update the target video according to editing information, where the editing information is determined according to the eighth input;
  • the radio frequency unit 1001 is configured to send the editing information to a second video processing device, so that the second video processing device synchronously updates the target video according to the editing information.
  • In this way, multiple video processing devices can be used to edit the target video, each performing video editing operations with different functions, which meets the user's need for multi-person collaboration while editing videos and improves editing efficiency.
  • the input unit 1004 may include a graphics processing unit (GPU) 10041 and a microphone 10042; the graphics processor 10041 processes image data of still pictures or videos obtained by an image capture device (such as a camera).
  • the display unit 1006 may include a display panel 10061, and the display panel 10061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like.
  • the user input unit 1007 includes a touch panel 10071 and other input devices 10072 .
  • the touch panel 10071 is also called a touch screen.
  • the touch panel 10071 may include two parts, a touch detection device and a touch controller.
  • Other input devices 10072 may include, but are not limited to, physical keyboards, function keys (such as volume control buttons, switch buttons, etc.), trackballs, mice, and joysticks, which will not be repeated here.
  • the memory 1009 can be used to store software programs as well as various data, including but not limited to application programs and operating systems.
  • Processor 1010 may integrate an application processor and a modem processor; the application processor mainly handles the operating system, user interface, and application programs, while the modem processor mainly handles wireless communication. It can be understood that the modem processor may also not be integrated into the processor 1010.
  • An embodiment of the present application also provides a readable storage medium on which a program or instruction is stored; when the program or instruction is executed by a processor, each process of the above video processing method embodiment is realized, with the same technical effect. To avoid repetition, details are not repeated here.
  • the processor is the processor in the electronic device described in the above embodiments.
  • the readable storage medium includes a computer-readable storage medium, such as a computer read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
  • An embodiment of the present application further provides a chip, the chip including a processor and a communication interface coupled to the processor; the processor is used to run a program or instruction to implement each process of the above video processing method embodiment, with the same technical effect. To avoid repetition, details are not repeated here.
  • The chip mentioned in the embodiments of the present application may also be referred to as a system-level chip, a system chip, a chip system, or a system-on-chip.
  • the terms "comprise", "include", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus comprising a set of elements includes not only those elements but also other elements not expressly listed, or elements inherent in the process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not preclude the presence of additional identical elements in the process, method, article, or apparatus comprising that element.
  • the scope of the methods and devices in the embodiments of the present application is not limited to performing functions in the order shown or discussed; depending on the functions involved, functions may also be performed in a substantially simultaneous manner or in reverse order. For example, the described methods may be performed in an order different from that described, and steps may be added, omitted, or combined. Additionally, features described with reference to certain examples may be combined in other examples.


Abstract

The present application discloses a video processing method and apparatus, and belongs to the field of video processing. The method includes: receiving a first input from a user on a first video processing apparatus; in response to the first input, displaying, on a video preview interface, a first video image sequence captured by a first camera of the first video processing apparatus; and, when the video preview interface displays a second video image sequence captured by a second camera of a second video processing apparatus, generating a target video according to the first video image sequence and the second video image sequence, where the target video includes at least one frame of first video image from the first video image sequence and at least one frame of second video image from the second video image sequence.

Description

Video processing method and apparatus, electronic device, and readable storage medium
CROSS-REFERENCE TO RELATED APPLICATIONS
This application claims priority to the Chinese patent application No. 202111091200.9, entitled "Video processing method and apparatus, electronic device, and readable storage medium", filed with the China National Intellectual Property Administration on September 16, 2021, the entire contents of which are incorporated herein by reference.
TECHNICAL FIELD
The present application belongs to the field of video processing, and specifically relates to a video processing method and apparatus, an electronic device, and a readable storage medium.
BACKGROUND
With the development of 5G technology, both the speed and quality of real-time video transmission have improved greatly, and the imaging quality of electronic-device cameras keeps increasing. Recording and editing video with electronic devices has therefore become a product development trend.
At present, recorded video is mainly edited on a PC (personal computer): video recorded on a mobile phone is edited on the PC using professional video editing software. The editing operations are rather complex, the threshold is high for ordinary users, and such tools are better suited to professional users.
SUMMARY
The purpose of the embodiments of the present application is to provide a video processing method and apparatus, an electronic device, and a readable storage medium, which can solve the problem in the related art that video editing operations are complex and difficult.
In a first aspect, an embodiment of the present application provides a video processing method, including:
receiving a first input from a user on a first video processing apparatus;
in response to the first input, displaying, on a video preview interface, a first video image sequence captured by a first camera of the first video processing apparatus;
when the video preview interface displays a second video image sequence captured by a second camera of a second video processing apparatus, generating a target video according to the first video image sequence and the second video image sequence;
where the target video includes at least one frame of first video image from the first video image sequence and at least one frame of second video image from the second video image sequence.
In a second aspect, an embodiment of the present application provides a first video processing apparatus, including:
a first receiving module, configured to receive a first input from a user on the first video processing apparatus;
a first display module, configured to display, in response to the first input, a first video image sequence captured by a first camera of the first video processing apparatus on a video preview interface;
a generating module, configured to generate a target video according to the first video image sequence and the second video image sequence when the video preview interface displays a second video image sequence captured by a second camera of a second video processing apparatus;
where the target video includes at least one frame of first video image from the first video image sequence and at least one frame of second video image from the second video image sequence.
In a third aspect, an embodiment of the present application provides an electronic device, including a processor, a memory, and a program or instruction stored in the memory and executable on the processor, where the program or instruction, when executed by the processor, implements the steps of the method according to the first aspect.
In a fourth aspect, an embodiment of the present application provides a readable storage medium storing a program or instruction which, when executed by a processor, implements the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, including a processor and a communication interface coupled to the processor, where the processor is configured to run a program or instruction to implement the method according to the first aspect.
In the embodiments of the present application, video image sequences captured by the cameras of different video processing apparatuses can be displayed on the video preview interface, and video editing can be performed according to the first and second video image sequences respectively captured by those cameras to generate a target video including at least one frame of first video image, from the first video image sequence generated by the first video processing apparatus, and at least one frame of second video image, from the second video image sequence generated by the second video processing apparatus. The video processing method of the embodiments of the present application can thus edit the different video image sequences generated by different apparatuses while the preview interface displays them, generating the target video without professional video editing software and reducing the difficulty and complexity of video editing operations.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a first flowchart of a video processing method provided by an embodiment of the present application;
FIG. 2A is a first schematic diagram of a video processing interface provided by an embodiment of the present application;
FIG. 2B is a second schematic diagram of a video processing interface provided by an embodiment of the present application;
FIG. 2C is a third schematic diagram of a video processing interface provided by an embodiment of the present application;
FIG. 2D is a fourth schematic diagram of a video processing interface provided by an embodiment of the present application;
FIG. 2E is a fifth schematic diagram of a video processing interface provided by an embodiment of the present application;
FIG. 2F is a sixth schematic diagram of a video processing interface provided by an embodiment of the present application;
FIG. 2G is a seventh schematic diagram of a video processing interface provided by an embodiment of the present application;
FIG. 2H is an eighth schematic diagram of a video processing interface provided by an embodiment of the present application;
FIG. 2I is a ninth schematic diagram of a video processing interface provided by an embodiment of the present application;
FIG. 2J is a tenth schematic diagram of a video processing interface provided by an embodiment of the present application;
FIG. 2K is an eleventh schematic diagram of a video processing interface provided by an embodiment of the present application;
FIG. 2L is a twelfth schematic diagram of a video processing interface provided by an embodiment of the present application;
FIG. 2M is a thirteenth schematic diagram of a video processing interface provided by an embodiment of the present application;
FIG. 3 is a second flowchart of the video processing method provided by an embodiment of the present application;
FIG. 4A is a fourteenth schematic diagram of a video processing interface provided by an embodiment of the present application;
FIG. 4B is a fifteenth schematic diagram of a video processing interface provided by an embodiment of the present application;
FIG. 5 is a block diagram of a video processing apparatus provided by an embodiment of the present application;
FIG. 6 is a first schematic diagram of a hardware structure of an electronic device provided by an embodiment of the present application;
FIG. 7 is a second schematic diagram of a hardware structure of an electronic device provided by an embodiment of the present application.
DETAILED DESCRIPTION
The technical solutions in the embodiments of the present application will be described clearly below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by persons of ordinary skill in the art based on the embodiments of the present application fall within the protection scope of the present application.
The terms "first", "second", and the like in the specification and claims of the present application are used to distinguish similar objects, not to describe a specific order or sequence. It should be understood that data so used are interchangeable where appropriate, so that the embodiments of the present application can be implemented in orders other than those illustrated or described here; the objects distinguished by "first", "second", etc. are usually of one class, and their number is not limited; for example, there may be one or more first objects. In addition, "and/or" in the specification and claims denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
The video processing method provided by the embodiments of the present application is described in detail below through optional embodiments and their application scenarios with reference to the accompanying drawings.
Referring to FIG. 1, a flowchart of a video processing method according to an embodiment of the present application is shown. The method may be applied to a first video processing apparatus and may include the following steps:
Step 101: Receive a first input from a user on the first video processing apparatus.
Exemplarily, the first input may include, but is not limited to, a tap input from the user on the first video processing apparatus, a voice command input by the user, or a specific gesture input by the user; it may be determined according to actual use requirements, which is not limited in the embodiments of the present application.
The specific gesture in the embodiments of the present application may be any one of a single-tap gesture, a slide gesture, a drag gesture, a pressure-recognition gesture, a long-press gesture, an area-change gesture, a double-press gesture, or a double-tap gesture; the tap input in the embodiments of the present application may be a single-tap input, a double-tap input, or a tap input of any number of times, and may also be a long-press input or a short-press input.
Step 102: In response to the first input, display, on a video preview interface, the first video image sequence captured by the first camera of the first video processing apparatus.
Exemplarily, the video preview interface may be a shooting preview interface for the video; in that case, the first video image sequence may be captured by the first camera in real time, and the video being recorded may be displayed frame by frame on the shooting preview interface.
Exemplarily, the video preview interface may also be a playback preview interface of an already generated video; in that case, the first video image sequence may be a video image sequence captured in advance by the first camera, and the recorded video may be displayed frame by frame on the playback preview interface.
Step 103: When the video preview interface displays a second video image sequence captured by a second camera of a second video processing apparatus, generate a target video according to the first video image sequence and the second video image sequence.
In this embodiment, similar to the first video image sequence, the second video image sequence may be a video image sequence of a video being recorded in real time, or of a video whose recording has been completed.
It should be noted that the second video processing apparatus is communicatively connected to the first video processing apparatus; therefore, the first video processing apparatus can display on the video preview interface not only its own first video image sequence but also the second video image sequence generated by another video processing apparatus (here, the second video processing apparatus).
For ease of understanding, the following description takes as an example the case where the video image sequences are captured in real time by the respective cameras and the video preview interface is a shooting preview interface; when the video image sequences come from videos whose recording has been completed, the method of the embodiments of the present application works on a similar principle and is not repeated.
In addition, the present application does not limit the display order of the first video image sequence and the second video image sequence on the video preview interface.
The target video includes at least one frame of first video image from the first video image sequence and at least one frame of second video image from the second video image sequence.
In the embodiments of the present application, video image sequences captured by the cameras of different video processing apparatuses can be displayed on the video preview interface, and video editing can be performed according to the first and second video image sequences respectively captured by those cameras to generate a target video including at least one frame of first video image, from the first video image sequence generated by the first video processing apparatus, and at least one frame of second video image, from the second video image sequence generated by the second video processing apparatus. Different sequences generated by different apparatuses can thus be edited while the preview interface displays them, generating the target video without professional video editing software and reducing the difficulty and complexity of video editing operations.
Optionally, the video preview interface includes a main window and a first sub-window; the main window is used to display the first video image sequence, and the first sub-window is used to display the second video image sequence.
Optionally, different windows in the video preview interface are used to display video data recorded in real time by different video processing apparatuses.
The shooting preview interface may include one main window and at least one sub-window; optionally, there may be multiple sub-windows, used to display the video image sequences recorded in real time by multiple other video processing apparatuses communicatively connected to the first video processing apparatus.
In addition, the main window and the sub-windows of the shooting preview interface display the video image sequences of different video processing apparatuses, and different sub-windows may also display the sequences of different apparatuses.
In the following examples, the first video processing apparatus is phone M, and the other video processing apparatuses communicatively connected to it are phone A, phone B, and phone C.
The video image sequences displayed in the different windows of the shooting preview interface may be recorded in real time by different phones from different shooting angles in the same shooting scene, or may be multi-camera recordings of different shooting scenes. The shooting scene may be a sports scene, such as playing basketball or football.
The video processing apparatus in the embodiments of the present application may be a mobile terminal, including a mobile phone, a tablet, and the like. Exemplarily, as shown in FIG. 2A, taking a mobile phone as an example, in the recording interface 11 of the phone camera the user can enter the multi-phone recording and clipping mode by pinching the screen with two fingers (for example, zooming to the minimum), whereupon the shooting preview interface is displayed. As shown in FIG. 2B, after this zoom operation the shooting preview interface is divided into multiple windows: the larger window is the main window 21, which by default displays the images captured by the camera of the first video processing apparatus (for example, phone M) performing the method of the embodiments of the present application; the remaining smaller windows are sub-windows. FIG. 2B shows eight sub-windows (one of them being sub-window 22), which default to a to-be-connected state indicated by a plus sign; in this state, phone M has not yet connected other video processing apparatuses (for example, phones) for multi-phone recording and clipping.
The connection between different video processing apparatuses may be WiFi (wireless network), Bluetooth, or the like. The following takes a WiFi connection as an example; Bluetooth and other communication connections work in the same way and are not repeated.
For example, if phone M has established WiFi connections with phone A, phone B, and phone C, the video data recorded in real time by phones A, B, and C can be transmitted to phone M in real time.
Exemplarily, the multi-phone recording and clipping mode requires several phones to work together, so the first video processing apparatus first needs to connect multiple phones. In the shooting preview interface shown in FIG. 2C, that is, the multi-phone recording and clipping mode interface, the user can tap any sub-window, here sub-window 22, to display the phone search interface shown in FIG. 2D.
After the user taps any sub-window, phone M can set up a WiFi hotspot and wait for other phones to connect. Other phones that are also in the multi-phone recording and clipping mode can search for nearby WiFi signals. In this mode, if no sub-window has been tapped, a phone only searches for nearby WiFi signals; once a sub-window is tapped, it sets up a WiFi hotspot, which may be a password-free WiFi hotspot.
A pinch gesture on the shooting preview interface can switch from the normal recording mode to the multi-phone recording and clipping mode, as in the multi-phone recording and clipping mode interfaces of phone A in FIG. 2E, phone B in FIG. 2F, and phone C in FIG. 2G; the main window in each of these interfaces displays the video content recorded by that phone itself. The principles of FIG. 2E, FIG. 2F, and FIG. 2G are similar to those of phone M's multi-phone recording and clipping mode interface shown in FIG. 2C and are not repeated.
It should be noted that the same reference numerals in FIG. 2A to FIG. 2M denote the same objects; identical reference numerals in different figures are therefore not explained one by one, and the explanations of the other figures may be consulted.
In this embodiment, the hotspot information of the WiFi hotspot may carry some parameter information of phone M, for example a parameter indicating the multi-phone recording and clipping mode, and identification information of phone M.
In this embodiment, when two phones connect for the first time through a WiFi hotspot of the multi-phone recording and clipping mode, the WiFi connection can be established with authentication; if the two phones are not connecting through such a hotspot for the first time, no authentication is needed and the WiFi connection can be established directly.
A first-time connection through a multi-phone recording and clipping mode hotspot can be realized as follows: other phones in this mode (phones other than phone M that have not set up a hotspot), after searching for WiFi hotspots, can actively connect to a found hotspot whose hotspot information indicates the multi-phone recording and clipping mode and enter an authentication mode. Optionally, the other phones can send their own phone information to phone M (also called the main phone) through the authentication procedure and wait for the main phone's connection request. Once other phones request to connect to the main phone's hotspot, as shown in FIG. 2D, the main phone can display the identification information of each phone requesting authentication in the phone search interface.
A non-first-time connection can be realized as follows: other phones in this mode, after searching for WiFi hotspots, can actively connect to a found hotspot whose hotspot information indicates the multi-phone recording and clipping mode, so that phone M establishes WiFi connections with the other phones.
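The discovery step above (scan nearby hotspots, keep only those whose hotspot information advertises the multi-phone recording and clipping mode) can be sketched as a string filter. The `MREC:<device-id>` SSID format is purely an illustrative assumption; the application only says the hotspot information carries a mode flag and phone M's identification.

```python
PREFIX = "MREC:"  # hypothetical marker for the multi-phone recording/clipping mode

def make_hotspot_ssid(device_id):
    """Phone M: advertise the mode flag plus its own identification."""
    return PREFIX + device_id

def find_clip_mode_hotspots(scanned_ssids):
    """Other phones: from a WiFi scan, return the device ids of hotspots
    that advertise the multi-phone recording/clipping mode; ordinary
    hotspots are ignored."""
    return [s[len(PREFIX):] for s in scanned_ssids if s.startswith(PREFIX)]
```

A phone that finds a matching hotspot would then proceed to the connection (and, on first contact, authentication) step described above.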
Optionally, the method according to the embodiments of the present application may include: displaying at least one apparatus identifier, the apparatus identifier being used to indicate a video processing apparatus communicatively connected to the first video processing apparatus; then receiving a third input from the user on a target apparatus identifier among the at least one apparatus identifier; and finally, in response to the third input, displaying in a second sub-window a third video image sequence captured by a third camera of a third video processing apparatus, the third video processing apparatus being the video processing apparatus indicated by the target apparatus identifier.
Exemplarily, the third input may include, but is not limited to, a tap input from the user on the target apparatus identifier, a voice command input by the user, or a specific gesture input by the user; it may be determined according to actual use requirements, which is not limited in the embodiments of the present application.
The specific gesture in the embodiments of the present application may be any one of a single-tap gesture, a slide gesture, a drag gesture, a pressure-recognition gesture, a long-press gesture, an area-change gesture, a double-press gesture, or a double-tap gesture; the tap input may be a single-tap input, a double-tap input, or a tap input of any number of times, and may also be a long-press input or a short-press input.
Exemplarily, the apparatus identifiers may be the identification information of the phones displayed in the phone search interface shown in FIG. 2D, here phone A, phone B, and phone C; the phone search interface may also display a control 31 for phone M. When displaying the identification information of the phones requesting authentication, the identifiers may be arranged in the phone search interface according to each phone's distance and bearing relative to phone M; here phone C is closest to phone M, then phone B, with phone A farthest.
In the phone search interface of phone M, the user can drag the apparatus identifier of the phone to be connected (phone C is used as the example) into the to-be-connected area 32 and then tap the "Connect" control 33, whereupon phone M sends a connection request to phone C (for example, the third video processing apparatus here). Phone C receives phone M's connection request, and once "Agree" is tapped in phone C's multi-phone recording and clipping mode, communication between phone C and phone M becomes bidirectional. In addition, since phone C does not need to perform multi-phone recording and clipping itself, but only to provide the video image sequence recorded in real time from its shooting angle and transmit it to phone M, phone C can exit the multi-phone recording and clipping mode and display only the preview picture of the video it is shooting.
Similarly, phone M can connect more phones for video recording and clipping over WiFi by tapping the other sub-windows in FIG. 2C; for example, phone M also establishes WiFi connections with phone B and with phone A. Phones A, B, and C, all WiFi-connected to phone M, can then transmit the videos they record in real time to phone M over the WiFi connections.
As shown in FIG. 2H, in the multi-phone recording and clipping mode interface of phone M, sub-window 22 (for example, the second sub-window) is used to display the preview picture of the video recorded by phone C (for example, the third video processing apparatus); sub-window 23 (for example, the first sub-window) is used to display the preview of the video recorded by phone B (for example, the second video processing apparatus); sub-window 24 is used to display the preview of the video recorded by phone A; and the main window 21 initially displays the preview of the video recorded by phone M.
For the embodiment of FIG. 1, the way of displaying the video image sequence of the second video processing apparatus is similar to the example here of displaying the video recorded by phone C's camera in sub-window 22 and is not repeated.
After phones M, A, B, and C have all started recording, the three sub-windows and the main window display the preview pictures of the videos recorded in real time by the respective devices.
In this example, the main window initially displays the preview of the video recorded by the first video processing apparatus, phone M, while the sub-windows display the previews of the videos recorded by the phones communicatively connected to phone M; in other embodiments, the main window may initially display no device's video, in which case the preview image of the video recorded by phone M is also shown in a sub-window.
In the embodiments of the present application, by displaying apparatus identifiers indicating the video processing apparatuses communicatively connected to the first video processing apparatus, receiving the user's third input on a target apparatus identifier among them, and, in response to that input, displaying in a sub-window the third video image sequence captured by the third camera of the apparatus indicated by the target identifier, video recording in a multi-camera mode is realized and video images from different camera positions are clipped to generate the target video. By communicatively connecting multiple video processing apparatuses to the first video processing apparatus, the video can be clipped while the recorded video is displayed, and the video shown in the main window serves as the target video (what you see is what you get), simplifying the complexity of video editing.
Furthermore, in the embodiments of the present application, by communicatively connecting the first video processing apparatus to at least one other video processing apparatus (that is, a video processing apparatus other than the first), the preview images of the videos recorded in real time by the other apparatuses can be displayed in the sub-windows of the first apparatus's video preview interface, with different sub-windows showing the previews of videos recorded by different apparatuses, so that the different videos captured by different apparatuses can be distinguished by sub-window. On top of the video recording function, the mutual communication of multiple video processing apparatuses makes it possible to clip the video while recording it, realizing video clipping of the main window's picture in a what-you-see-is-what-you-get manner and simplifying video editing. In addition, displaying the preview of the video recorded in real time by the first video processing apparatus in the main window means that the starting video segment of the clipped target video comes from the video data recorded by the first video processing apparatus, which acts as the control device of the video clipping; the target video obtained through the main window therefore better fits the video clipping scenario.
Optionally, the main window may be larger than the sub-windows and located near the center of the video preview interface, making it convenient for the user to view the video content displayed in the main window.
Optionally, the method according to the embodiments of the present application may further include: determining a relative shooting orientation of the third camera according to the image content of the third video image sequence and the first video image sequence, the relative shooting orientation being the shooting orientation of the third camera relative to the first camera; and then determining a target display position of the second sub-window according to the relative shooting orientation.
Optionally, as shown in FIG. 2H, the user can also change the position of a sub-window by dragging a sub-window that displays video content onto another sub-window (whether or not it displays video content); the position of a sub-window is determined by the relative shooting orientation.
For example, in FIG. 2H, sub-window 22 shows the recording picture of phone A, sub-window 23 shows that of phone B, and the main window 21 shows that of the current phone, phone M. Suppose phone A shoots the subject from the left of phone M, phone B shoots the subject from the right of phone M, and phone M shoots the subject from the front; the user can move sub-window 23 to any small window on the right of the main window 21, conveniently expressing the shooting orientation of the camera corresponding to the recording picture in sub-window 23 relative to that of the camera corresponding to the recording picture in the main window 21.
In the above embodiment, the user can tap sub-window 22 in FIG. 2C to trigger the display of the interface of FIG. 2D and, by operating on FIG. 2D, realize the connection between the first video processing apparatus and the third video processing apparatus; after they are communicatively connected, sub-window 22 (the second sub-window) is used to display the third video image sequence captured by the third camera of the third video processing apparatus. In this embodiment, the shooting orientation of the third camera relative to the first camera can be determined according to the image content of the third video image sequence and that of the first video image sequence, and the second sub-window can then be moved to the corresponding position for display. In the present application, the position of a sub-window can be adjusted automatically or manually according to the relative shooting orientation.
In the embodiments of the present application, the shooting orientation of the third camera relative to the first camera may be determined according to the image content of the third and first video image sequences; based on this shooting orientation and the position of the main window, the target display position of the second sub-window is determined, so that the user can identify the shooting angle of the camera corresponding to each sub-window from the relative position of that sub-window and the main window in the video preview interface.
For example, if phone M shoots directly in front of the subject and phone C shoots the subject from the northwest of phone M, the video content shot by phone C can be displayed in sub-window 22.
Optionally, the video processing method of the embodiments of the present application further includes: receiving a fourth input from the user on the video preview interface; when the first video image sequence and the second video image sequence are video image sequences collected in real time during recording, controlling the first camera and the second camera to stop collecting video images in response to the fourth input; and, when the first video image sequence consists of video images in a recorded first video and the second video image sequence consists of video images in a recorded second video, stopping playback of the first video and the second video in response to the fourth input.
Exemplarily, the fourth input may include, but is not limited to, a tap input from the user on the video preview interface, a voice command input by the user, or a specific gesture input by the user; it may be determined according to actual use requirements, which is not limited in the embodiments of the present application.
The specific gesture in the embodiments of the present application may be any one of a single-tap gesture, a slide gesture, a drag gesture, a pressure-recognition gesture, a long-press gesture, an area-change gesture, a double-press gesture, or a double-tap gesture; the tap input may be a single-tap input, a double-tap input, or a tap input of any number of times, and may also be a long-press input or a short-press input.
The case where the first and second video image sequences are collected in real time during recording is taken as the example below.
Exemplarily, as shown in FIG. 2I, the main window 21 has a preset control 31, and tapping the preset control 31 starts and stops video recording. Optionally, by tapping the preset control 31, phone M can control phones A, B, and C connected to phone M, as well as phone M itself, to start or end recording, while phones A, B, and C can each control only the start or end of their own phone's video recording. As shown in FIG. 2J, the conventional video recording interface of phone A has a control 41, and phone A's user can tap control 41 to start or end phone A's video recording.
If any of phone A, phone B, or phone C pauses its recording by tapping its own video control (for example control 41 in FIG. 2J), the main phone, phone M, can control that paused phone to resume recording.
In the embodiments of the present application, through the fourth input on the video preview interface, when the video content displayed in the main window and sub-windows consists of video image sequences collected in real time, the cameras corresponding to the windows can be controlled to stop collecting video images in response to the fourth input; when the displayed content consists of video images in recorded videos, playback of the recorded videos in all windows can be stopped. A one-tap input on the video preview interface thus stops recording, or stops playback, of the video images captured by multiple cameras.
Exemplarily, through an input on the preset control in the main window, the first video processing apparatus and the other video processing apparatuses communicatively connected to it can be controlled to uniformly start or stop (including pause) video recording, achieving unified control of multiple devices with a one-tap operation on the main window.
Optionally, as shown in FIG. 2I, a window displaying the video data of another video processing apparatus (that is, one communicatively connected to the first video processing apparatus) also has a preset control for controlling that apparatus's recording state, including a recording state and a paused state. Optionally, the state of control 32 in sub-window 22 indicates that phone A is currently recording, the state of control 33 in sub-window 23 indicates that phone B is currently paused, and the state of control 34 in sub-window 24 indicates that phone C is currently paused.
In the embodiments of the present application, since the windows displaying the video processing apparatuses are provided with preset controls, the recording state of each apparatus can be controlled through its preset control, and from the state of the preset control the user can see at a glance whether the apparatus is recording or paused.
Optionally, step 103 may be performed by: receiving a second input from the user on the first sub-window; then, in response to the second input, exchanging the display content of the main window and the first sub-window; and finally, performing video splicing on the at least one frame of first video image and the at least one frame of second video image displayed in the main window to obtain the target video.
Exemplarily, the second input may include, but is not limited to, a tap input from the user on the first sub-window, a voice command input by the user, or a specific gesture input by the user; it may be determined according to actual use requirements, which is not limited in the embodiments of the present application.
Exemplarily, the second input may also be an input that causes a partial overlap of window regions between the sub-window and the main window, for example an input dragging the first sub-window onto the main window.
The specific gesture in the embodiments of the present application may be any one of a single-tap gesture, a slide gesture, a drag gesture, a pressure-recognition gesture, a long-press gesture, an area-change gesture, a double-press gesture, or a double-tap gesture; the tap input may be a single-tap input, a double-tap input, or a tap input of any number of times, and may also be a long-press input or a short-press input.
Exemplarily, the video processing apparatuses whose video data are displayed in the first sub-window and the main window may be interchanged, and a first video segment and a second video segment may be spliced according to their display order in the main window to generate the target video, where the first video segment and the second video segment are the video data from different video processing apparatuses successively displayed in the main window.
Before the first input is received, the video processing apparatus corresponding to the video data displayed in the main window may be another video processing apparatus or the first video processing apparatus; the above examples all take the main window initially displaying the first apparatus's video data as an example, but the present application is not limited thereto. In the initial state, that is, before the first input is received, the first apparatus's video data may also be displayed in a sub-window.
In this embodiment, the main window shows the picture of the finally recorded and clipped target video, so what you see is what you get, while the sub-windows show the shooting pictures of the other video processing apparatuses. During recording, if the user of phone M finds that the recording of another connected phone is more suitable and should be added to the target video, the corresponding sub-window can be dragged to the position of the main window; dragging the sub-window onto the main window triggers the clipping of the video. Exemplarily, as shown in FIG. 2K, the user of phone M drags sub-window 22 onto the main window 21 in the direction of the arrow to switch camera positions.
Exemplarily, suppose the input time of the first input is t1. Before t1, the main window plays the video content recorded by phone M, for example a first video segment (including at least one frame of first video image); after t1, the video content recorded by phone A corresponding to the dragged sub-window, for example a second video segment (including at least one frame of second video image), is played in the main window. Through the first input, the display content of the main window thus switches from the first video segment to the second, and what the main window displays is the finally recorded target video. When splicing, the first and second video segments are spliced in chronological order to obtain the target video.
Of course, the target video may also be obtained through multiple such inputs. For example, when the user wants to add the video content shown in one sub-window to the target video, that sub-window is dragged onto the main window (the first input), and in response its content is added to the target video; if the user then wants to add the content of another sub-window, the drag input is performed again for that sub-window, and in response its content is likewise added to the target video. When the last recording phone stops recording, or the main phone stops recording, the content displayed in the main window is saved on the main phone, yielding the target video. In a sports scene, for example, one can switch at any time to a suitable shooting angle for recording.
In this embodiment, after the drag operation shown in FIG. 2K, the interface jumps to that shown in FIG. 2L: the video content recorded by the first video processing apparatus (phone M) after t1 is displayed in sub-window 22, and the content recorded by phone A after t1 is displayed in the main window 21; the finally recorded target video is spliced from at least two video segments.
In the embodiments of the present application, the first video image sequence captured by the first camera of the first video processing apparatus can be displayed in the main window of the video preview interface, and the second video image sequence captured by the second camera of the second video processing apparatus can be displayed in the first sub-window. When recording, the display content of the main window and the first sub-window can be exchanged through the second input on the first sub-window, so that the main window switches to displaying the sequence captured by the second camera and the first sub-window switches to displaying the sequence captured by the first camera. Since different video processing apparatuses can shoot the same scene from different angles, camera-position switching during recording is realized. The target video can be obtained based on the content displayed in the main window; in one example, the videos displayed in the main window are spliced sequentially in display order to obtain the target video, and the same scene can be recorded by at least two video processing apparatuses, which not only reduces the difficulty and complexity of clipping video but also improves video processing efficiency.
In the embodiments of the present application, video can be recorded with the help of multiple video processing apparatuses, whose real-time video data are displayed in different windows of the video preview interface; during recording, the input of dragging a sub-window onto the main window realizes camera-position switching. When recording, the video data from different apparatuses successively displayed in the main window, that is, the first video data and the second video data, are spliced according to their display order in the main window, so the same scene can be recorded by at least two video processing apparatuses, reducing the difficulty and complexity of clipping video and improving video processing efficiency. By displaying the content recorded by multiple apparatuses in real time on one device, the user can switch the recording position in real time by dragging different sub-windows onto the main window, improving operability during video recording.
Optionally, the method of the embodiments of the present application may further include: when the video processing apparatuses corresponding to all windows in the shooting preview interface have stopped recording, saving the target video and the video data recorded by each video processing apparatus; and, for each video segment spliced into the target video, saving the mapping between the time points of that segment in the target video and the corresponding time points in the video data recorded by the apparatus it belongs to.
After the first video processing apparatus of the embodiments of the present application establishes communication connections with the other video processing apparatuses, and after the other apparatuses start recording, the first apparatus can receive the video content recorded in real time by the other apparatuses and, once all apparatuses have stopped recording, save the video data recorded by each apparatus as well as the resulting target video.
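The saved mapping between each spliced segment's time points in the target video and the corresponding time points in its source recording can be sketched as a simple lookup table. The tuple layout below is an assumption for illustration; the application does not specify how the mapping is stored.

```python
def build_segment_map(segments):
    """Build the target-time -> source-time mapping for a spliced video.

    segments: [(device, src_start, src_end), ...] in splice order, where
    src_start/src_end are time points (seconds) in that device's recording.
    Returns [(tgt_start, tgt_end, device, src_start, src_end), ...], so any
    moment of the target video can be traced back to its source recording.
    """
    mapping, t = [], 0.0
    for device, s0, s1 in segments:
        d = s1 - s0                       # duration contributed by this segment
        mapping.append((t, t + d, device, s0, s1))
        t += d                            # target-video time advances by the duration
    return mapping
```

Such a table is what makes later fine-tuning possible: moving a splice point only requires re-reading frames around the recorded source time points, since the full per-device recordings are also saved.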
可选地,根据本申请实施例的方法还可以包括:接收用户对所述目标视频的第五输入;响应于所述第五输入,显示视频调节窗口,所述视频调节窗口中包括调节控件,至少一个第一视频缩略图和至少一个第二视频缩略图;所述至少一个第一视频缩略图为所述至少一帧第一视频图像的缩略图,所述至少一个 第二视频缩略图为所述至少一帧第二视频图像的缩略图,所述调节控件用于更新所述目标视频的视频帧;然后,接收用户对所述调节控件的第六输入;响应于所述第六输入,更新所述调节控件的显示位置,并根据更新后的所述调节控件的显示位置,更新所述目标视频的视频帧。
其中,本实施例中的第五输入、第六输入以及下述实施例的第七输入、第八输入的实现方式可以参照上文关于第一输入的相关示例性描述,原理类似,这里不再一一赘述。
示例性地,上述目标视频可以保存在手机M的相册中,用户对相册中保存的目标视频点击编辑控件,即进行第五输入。其中,在点击编辑控件之后,则可以进入如图2M所示的目标视频的视频调节窗口界面。
可选地,所述视频调节窗口包括所述目标视频的主播放进度条,其中,所述主播放进度条包括预设标识53,所述预设标识53随视频播放窗口内视频播放进度的变化在所述主播放进度条上移动。
其中,本申请中的预设标识是用于指示信息的文字、符号、图像等,可以以控件或者其他容器作为显示信息的载体,包括但不限于文字标识、符号标识、图像标识。
所述视频调节窗口包括所述目标视频中每个视频片段的子播放进度条,其中,不同视频片段的拼接处显示有可移动的调节控件。
示例性地,如图2M所示,视频调节窗口中包括视频播放窗口54,该视频播放窗口54用于显示目标视频的画面;主播放进度条52为视频播放窗口54内播放的视频的进度条,该主播放进度条52上带有随播放时间移动的预设标识53。
此外,如图2M所示,例如目标视频依次由一段视频A、一段视频B和一段视频C拼接而成。视频剪辑界面还包括在主播放进度条52上方的多个子播放进度条,可选地,可以将组成该目标视频的视频片段的子播放进度条,按照播放时间从前到后的顺序分成多行显示,这里依次包括视频A的子播放进度条61、视频B的子播放进度条62、视频C的子播放进度条63;此外,在不同子播放进度条的拼接处,还可以包括调节控件51,该调节控件51可以理解为微调控件,完整的目标视频的进度条中机位切换的时间点都可以对应一个可移动的调节控件。例如,录制完成的目标视频由视频A、视频B和视频C三个片段组成,则包括两个调节控件51,一个用于调节视频A和视频B拼接处的视频帧,另一个用于调节视频B和视频C拼接处的视频帧。
可选地,如图2M所示,拖动预设标识53可以控制视频播放窗口54内的视频的播放进度,此外,通过点击预设标识53可以控制视频播放窗口54中的视频暂停播放或继续播放,暂停播放和继续播放这两种状态下,预设标识53的显示图案可以不同。
在本申请实施例中,视频剪辑界面中的主播放进度条带有的预设标识,不仅可以通过移动它来控制视频播放窗口内视频的播放进度;而且,还可以通过对预设标识的输入,来改变视频播放窗口内视频的播放状态。
可选地,本申请实施例的方法可以通过调节控件调节拼接处的视频帧;拼接处视频帧的缩略图分别显示在调节控件的左右两侧。
示例性地,如图2M所示,视频A和视频B之间的调节控件51的两边可以分别显示两个缩略图,包括:位于子播放进度条61上方的视频A最后一帧图像的缩略图71,以及,位于子播放进度条62上方的视频B第一帧图像的缩略图72;另外,图2M中还示出了另外一个拼接处的视频帧的两个缩略图,这里不再赘述。
当然,这里的示例中,调节控件51还没有移动,那么在调节控件51在沿着图2M中的箭头方向向左或向右移动之后,该调节控件51所停留的位置可以对应不同的视频片段拼接处,那么该拼接处的来自不同视频片段的两帧图像的缩略图同样显示在调节控件51的左右两侧。
在本申请实施例中,显示两个视频片段的拼接处两帧图像的缩略图,那么用户在移动调节控件来微调目标视频时,可以通过浏览拼接处的两个缩略图,来判断视频画面拼接是否合适。
示例性地,图2M中,可以按照箭头方向,向左或向右移动调节控件51来触发目标视频中不同视频片段的拼接处的调整,在调整之后,可以点击保存控件55来更新目标视频。
示例性地,以对图2M中视频A和视频B的两个子播放进度条之间的调节控件51的第六输入为例进行说明,通过向左移动该调节控件51,则可以对视频A从尾部减少几帧视频帧,以及对视频B在头部增加相同数量的视频帧,来实现视频A和视频B的拼接处视频帧的调整。
可选地,执行更新所述目标视频的视频帧的步骤时,可以通过以下至少一个步骤来实现:更新所述第一视频图像序列的拼接结束视频帧;更新所述第二视频图像序列的拼接起始视频帧;增加或减少所述第一视频图像序列的拼接视频帧;增加或减少所述第二视频图像序列的拼接视频帧。其中,拼接视频帧表示切换时用来拼接目标视频的视频帧。
例如,对图2M中视频A和视频B的拼接处对应的调节控件51,向左移动2s的时长对应的进度条长度,则需要更新视频A(即这里的第一视频图像序列)在目标视频中的拼接结束位置的视频帧,可选为减少视频A的拼接视频帧,这里为减少视频A的尾部2s时长的视频帧;以及更新视频B(即这里的第二视频图像序列)在目标视频中的拼接起始位置的视频帧,可选为从视频B所属的原始视频中,获取相应的视频帧增加到第二视频图像序列中。
这里以对调节控件51左移为例进行说明,当对调节控件51右移时,方法类似,这里不再赘述。
继续以上述图2M中,移动视频A和视频B的两个子播放进度条之间的调节控件51为例,通过向左移动该调节控件51,使得调节控件51靠近视频A的子播放进度条移动,而远离视频B的子播放进度条移动,移动距离与视频帧数存在预设的映射关系,可以基于移动距离确定需要调整的目标帧数,例如3帧,则可以在视频A的尾部减少3帧视频帧,在视频B的头部增加3帧视频帧,所增加的3帧视频帧的数据来源则是该视频B对应的视频处理装置所录制的完整的原始视频数据。
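上述"按移动距离确定目标帧数,在视频A尾部减帧、并从视频B的完整原始录像头部取相同数量的帧补入"的微调过程,可以用如下Python草图示意(其中每帧对应的像素数PIXELS_PER_FRAME、函数名均为本文假设,仅为示意性实现):

```python
PIXELS_PER_FRAME = 20  # 假设的映射关系:调节控件每移动 20 像素对应 1 帧

def adjust_splice(seq_a, seq_b, full_b, move_px):
    """向左移动调节控件 move_px 像素:
    从 seq_a 尾部减少 n 帧,并从 full_b(B 对应装置录制的完整原始录像)
    中取紧邻 seq_b 之前的 n 帧,补到 seq_b 的头部。
    假设帧可用唯一编号表示,且原始录像中 seq_b 之前存在足够的帧。"""
    n = move_px // PIXELS_PER_FRAME
    new_a = seq_a[:-n] if n else seq_a
    head = full_b.index(seq_b[0])            # seq_b 在原始录像中的起点
    start = max(head - n, 0)
    extra = full_b[start:head]               # 原始录像中紧邻的前 n 帧
    return new_a, extra + seq_b
```

向右移动时逻辑对称:在视频A尾部从其原始录像补帧,并在视频B头部减帧。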
可选地,在通过调节控件进行微调之后,例如图2M中调节控件51经过移动停留在视频A的第10s对应的位置,则可以将视频A的第7s~10s的视频片段和视频B头部的视频片段(第1s~3s的视频片段)拼接后,通过视频播放窗口54进行播放。
在本申请实施例中,当用户通过移动目标视频中拼接的两个视频片段之间的调节控件,不仅可以实现对该两个拼接的视频片段的拼接位置的调整;而且,还可以播放经过拼接位置调整后的两个视频片段,便于在用户通过浏览播放的该更新了拼接位置后的一段视频,判定不同视频片段间的更新后的拼接位置是否合适。
可选地,在本申请实施例中,在对目标视频的拼接处进行调整时,可以预览拼接的视频帧,且可以对调节结果进行预览,以确保对视频微调后可以达到用户想要的效果。
可选地,在步骤103之后,如图3所示,本申请实施例的方法还可以包括:
步骤201,接收用户对所述目标视频的第七输入。
本实施例的视频编辑以目标视频为例进行说明,在其他实施例中,本处理对象可以为其他视频,例如可以为上述第一视频处理装置、第二视频处理装置或者其他视频处理装置录制的视频,或者是从网上下载的其他视频等。
步骤202,响应于所述第七输入,在所述第一视频处理装置显示所述目标视频的第一视频编辑窗口。
步骤203,接收用户对所述第一视频编辑窗口的第八输入。
步骤204,响应于所述第八输入,根据编辑信息更新所述目标视频,所述编辑信息是根据所述第八输入确定的。
步骤205,将所述编辑信息发送至第二视频处理装置,以使所述第二视频处理装置根据所述编辑信息同步更新所述目标视频。
示例性地,在生成目标视频之后,手机M可以将目标视频发送给手机A、手机B、手机C,使得这三个手机也可以得到目标视频。
示例性地,手机M与其他视频处理装置的连接方式与上文举例类似,只是在触发方式上,还可以是如下方式:如图4A所示,在手机M中,用户打开相册中的目标视频,点击多机协同编辑控件82,则可以对窗口81中的目标视频进行多机同步编辑。
其中,手机A、手机B、手机C也同样显示如图4A所示的界面,所有与手机M连接成功的手机都显示该目标视频,且编辑选项是相同的。图4A示出了各种编辑选项,这里不再赘述。
例如点击“美颜”控件,即进行第八输入。需要说明的是,为了避免多端对同一个视频进行同一编辑操作导致编辑紊乱,本申请实施例中,不同手机对同一视频进行不同的编辑。其中,用户在点击一个编辑选项之后,手机M就可以将该编辑选项对应的编辑信息共享至手机A、手机B、手机C。
同理,手机A或手机B或手机C对该目标视频选择了某个编辑选项进行编辑之后,同样可以实时地将该编辑选项对应的编辑信息同步发送给手机M,手机M又可以将接收到的编辑信息同步分享给其他手机,因此,这四个手机之间编辑信息是共享的。
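上述"以手机M为中心、各手机的编辑信息相互共享"的同步机制,可以用如下Python草图示意(类名EditHub及其接口均为本文假设的简化模型,仅用于说明转发逻辑,并非本申请的实际实现):

```python
class EditHub:
    """以主手机为中心共享编辑信息的示意:
    收到某台手机提交的编辑操作后,转发给除来源之外的所有已连接手机。"""

    def __init__(self, devices):
        self.devices = devices
        self.logs = {d: [] for d in devices}  # 各手机收到的编辑信息

    def submit(self, source, edit):
        for d in self.devices:
            if d != source:
                self.logs[d].append(edit)
        return self.logs
```

例如手机A提交"配字幕"操作后,手机M与手机B都会收到该编辑信息,而手机A自身不再重复接收。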
在多个手机和手机M进行通信连接后,手机A可以为视频配字幕、手机B可以为视频调整滤镜、手机C对视频的时长进行剪辑等,如上文所述,由于手机M、手机A、手机B以及手机C,可以以手机M来实现各手机侧的编辑信息的共享,那么不同手机对目标视频所执行的多种编辑操作可以同步显示。
可选地,手机M在对目标视频进行编辑后,可以在窗口81中显示编辑后的预览图像;此外,由于其他手机也对该目标视频也进行了编辑,因此,窗口81还可以对其他手机编辑后的视频效果进行预览。
可选地,本实施例中,手机M通过点击图4A或图4B中的保存控件,对编辑后的视频进行保存;如果手机A、手机B、手机C侧也分别点击了保存控件,则会将保存后的视频同步至手机M。
在本申请实施例中,可以通过多个视频处理装置对目标视频进行编辑,多个视频处理装置可以进行不同功能的多种视频编辑操作,从而满足用户在剪辑视频过程中需要多人协作的需求,以提高编辑效率。
可选地,在不同视频处理装置的编辑信息之间存在冲突(例如手机A将目标视频第1s~5s的视频帧剪辑掉,而手机M在对第1s~10s的视频帧进行人脸美颜操作,显然这两种编辑信息之间存在冲突)的情况下,可以通过提示信息进行提示,待其中一个视频处理装置编辑好之后,再提醒其他视频处理装置进行相应的编辑。
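其中,判断两个编辑操作是否冲突,一种示意性做法是比较二者作用的时间区间是否重叠(以下为本文假设的简化判断,以半开区间近似,并非本申请的实际实现):

```python
def edits_conflict(range1, range2):
    """判断两个编辑操作作用的时间区间是否重叠(半开区间,单位:秒)。
    例如剪掉第 1~5s 与对第 1~10s 做美颜,区间重叠,判定为冲突。"""
    s1, e1 = range1
    s2, e2 = range2
    return s1 < e2 and s2 < e1
```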
本实施例中,如果一个手机正在执行的编辑操作会影响到其它手机的编辑操作,则可以进行标记和提示。比如手机A正在剪辑视频的时长,则在其他手机的进度条中通过特殊颜色标记被删除的时间段(可选地,在其他实施例中,还可以包括被增加的时间段),表示该段视频片段被剪掉。在任意一个手机中点击图4A中的保存控件之后,都会将编辑结果同步到其他手机上。
可选地,如果已经有一个手机在进行某一项剪辑功能,则其它手机的该剪辑功能设置成灰色,提示其他用户灰色控件对应的剪辑功能正在由另一个手机处理,如果点击灰色的剪辑功能控件,就会提示已存在其它用户在使用该剪辑功能。示例性地,如图4B所示,假设灰色的“音乐”控件的编辑功能由手机A执行,“美颜”控件的编辑功能由手机B执行,因此,在手机M侧的图4B中,这两个编辑功能的选项都是灰色的,用户无法使用手机M这两个编辑功能对目标视频进行编辑,可以避免不同视频处理装置侧对目标视频进行同一编辑功能的编辑,造成编辑信息紊乱的问题。
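上述"某项剪辑功能被一台手机占用后,其他手机侧对应控件置灰"的互斥逻辑,可以用如下Python草图示意(类名EditLock及接口为本文假设的简化模型,仅用于说明占用与释放的流程):

```python
class EditLock:
    """剪辑功能占用表的示意:某功能被一台手机占用后,
    其他手机尝试使用时返回 False,对应界面上置灰的控件与提示。"""

    def __init__(self):
        self.in_use = {}  # 功能名 -> 占用该功能的手机

    def acquire(self, feature, device):
        owner = self.in_use.get(feature)
        if owner is not None and owner != device:
            return False  # 已被其它手机占用:置灰并提示
        self.in_use[feature] = device
        return True

    def release(self, feature):
        self.in_use.pop(feature, None)
```

例如"音乐"功能被手机A占用期间,手机M的获取请求失败;手机A释放后,手机M即可正常使用。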
需要说明的是,本申请实施例提供的视频处理方法,执行主体可以为视频处理装置,或者该视频处理装置中的用于执行视频处理方法的控制模块。本申请实施例中以视频处理装置执行视频处理方法为例,说明本申请实施例提供的视频处理装置。
参照图5,示出了本申请一个实施例的第一视频处理装置300的框图。该第一视频处理装置300包括:
第一接收模块301,用于接收用户对第一视频处理装置的第一输入;
第一显示模块302,用于响应于所述第一输入,在视频预览界面显示所述第一视频处理装置的第一摄像头采集的第一视频图像序列;
生成模块303,用于在视频预览界面显示第二视频处理装置的第二摄像头采集的第二视频图像序列的情况下,根据所述第一视频图像序列和所述第二视频图像序列,生成目标视频;
其中,所述目标视频包括所述第一视频图像序列中的至少一帧第一视频图像和所述第二视频图像序列中的至少一帧第二视频图像。
在本申请实施例中,可以在视频预览界面显示不同视频处理装置的摄像头所采集的视频图像序列,并根据不同视频处理装置的摄像头所各自采集的第一视频图像序列和第二视频图像序列,来进行视频剪辑,生成包括至少一帧第一视频图像和至少一帧第二视频图像的目标视频,其中,至少一帧第一视频图像来自第一视频处理装置生成的第一视频图像序列,而至少一帧第二视频图像则来自第二视频处理装置生成的第二视频图像序列。本申请实施例的视频处理方法能够在视频预览界面显示不同视频处理装置的摄像头各自采集的视频图像序列的情况下,对不同视频处理装置生成的不同视频图像序列进行视频剪辑,从而生成目标视频,无需采用专业的视频剪辑软件,降低了视频剪辑的操作难度和复杂度。
可选地,所述视频预览界面包括主窗口和第一子窗口,所述主窗口用于显示所述第一视频图像序列,所述第一子窗口用于显示所述第二视频图像序列;
所述生成模块303包括:
第一接收子模块,用于接收用户对所述第一子窗口的第二输入;
交换子模块,用于响应于所述第二输入,交换所述主窗口和所述第一子窗口中的显示内容;
拼接子模块,用于将所述主窗口中显示的至少一帧第一视频图像和至少一帧第二视频图像进行视频拼接,得到所述目标视频。
本实施例中,在录制视频时,可以通过对第一子窗口进行第二输入的方式,来交换主窗口和第一子窗口的显示内容,即使得主窗口切换显示为第二摄像头所采集的视频图像序列,使得第一子窗口切换显示为第一摄像头所采集的视频图像序列,由于不同视频处理装置可以从不同角度进行同一场景的视频拍摄,从而可以实现视频录制过程中的机位切换;可以基于主窗口内显示的内容得到目标视频,可选为将主窗口中所显示的视频按照显示顺序依次进行拼接,从而得到目标视频,可以基于至少两个视频处理装置来对同一场景进行录制,不仅降低了剪辑视频的操作难度和复杂度,而且提升了视频处理效率。
可选地,所述第一视频处理装置300还包括:
第二显示模块,用于显示至少一个装置标识,所述装置标识用于指示与所述第一视频处理装置通信连接的视频处理装置;
第二接收模块,用于接收用户对所述至少一个装置标识中的目标装置标识的第三输入;
第三显示模块,用于响应于所述第三输入,在第二子窗口中,显示第三视频处理装置的第三摄像头采集的第三视频图像序列,所述第三视频处理装置为所述目标装置标识指示的视频处理装置。
在本申请实施例中,通过显示用于指示与第一视频处理装置通信连接的视频处理装置的装置标识,并接收用户对该装置标识中目标装置标识的第三输入,来响应于该第三输入,在子窗口中显示该目标装置标识指示的视频处理装置的第三摄像头所采集的第三视频图像序列,实现以多机位的方式进行视频录制,并对不同机位的视频图像进行剪辑生成目标视频,通过将多个视频处理装置与第一视频处理装置进行通信连接,能够实现一边显示录制的视频,一边剪辑视频的功能,将主窗口中显示的视频作为目标视频(所见即所得),简化了视频编辑的复杂度。
可选地,所述第一视频处理装置300还包括:
第一确定模块,用于根据所述第三视频图像序列和所述第一视频图像序列的图像内容,确定所述第三摄像头的相对拍摄方位,所述相对拍摄方位为所述第三摄像头相对于所述第一摄像头的拍摄方位;
第二确定模块,用于根据所述相对拍摄方位,确定所述第二子窗口的目标显示位置。
在本申请实施例中,可以根据所述第三视频图像序列和所述第一视频图像序列的图像内容,确定所述第三摄像头相对于所述第一摄像头的拍摄方位;从而基于该拍摄方位以及主窗口的位置,确定所述第二子窗口的目标显示位置,使得用户通过视频预览界面中第二子窗口与主窗口之间的相对位置关系,来识别各子窗口对应的摄像头的拍摄角度。
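上述"由相对拍摄方位确定子窗口摆放位置"的映射,可以用如下Python草图示意(方位取值、偏移坐标与函数名均为本文假设,仅用于说明方位到界面位置的对应关系):

```python
def subwindow_position(bearing, main_rect):
    """根据第三摄像头相对第一摄像头的拍摄方位,
    示意性地确定第二子窗口相对主窗口的摆放位置。

    main_rect 为主窗口矩形 (x, y, w, h);
    bearing 取 "left"/"right"/"above"/"below" 四个假设方位。"""
    x, y, w, h = main_rect
    offsets = {
        "left":  (x - 100, y),      # 子窗口贴在主窗口左侧
        "right": (x + w,   y),      # 子窗口贴在主窗口右侧
        "above": (x, y - 100),
        "below": (x, y + h),
    }
    return offsets[bearing]
```

这样,位于第一摄像头右侧拍摄的机位,其子窗口也显示在主窗口右侧,便于用户按方位识别机位。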
可选地,所述第一视频处理装置300还包括:
第三接收模块,用于接收用户对所述视频预览界面的第四输入;
第一控制模块,用于在所述第一视频图像序列和所述第二视频图像序列为录像过程中实时采集的视频图像序列的情况下,响应于所述第四输入,控制所述第一摄像头和所述第二摄像头停止采集视频图像;
第二控制模块,用于在所述第一视频图像序列为已录制的第一视频中的视频图像,且所述第二视频图像序列为已录制的第二视频中的视频图像的情况下,响应于所述第四输入,停止播放所述第一视频和所述第二视频。
在本申请实施例中,通过对所述视频预览界面的第四输入,可以在主窗口和子窗口显示的视频内容为实时采集的视频图像序列的情况下,响应于所述第四输入,控制各窗口对应的摄像头停止采集视频图像;在主窗口和子窗口显示的视频内容为已录制的视频中的视频图像的情况下,可以停止各个窗口播放已录制的各个视频,通过对视频预览界面的一键输入,实现对多个摄像头所采集的视频图像的停止录制,或者停止播放。
可选地,所述第一视频处理装置300还包括:
第四接收模块,用于接收用户对所述目标视频的第五输入;
第四显示模块,用于响应于所述第五输入,显示视频调节窗口,所述视频调节窗口中包括调节控件,至少一个第一视频缩略图和至少一个第二视频缩略图;所述至少一个第一视频缩略图为所述至少一帧第一视频图像的缩略图,所述至少一个第二视频缩略图为所述至少一帧第二视频图像的缩略图,所述调节控件用于更新所述目标视频的视频帧;
第五接收模块,用于接收用户对所述调节控件的第六输入;
第一更新模块,用于响应于所述第六输入,更新所述调节控件的显示位置, 并根据更新后的所述调节控件的显示位置,更新所述目标视频的视频帧。
在本申请实施例中,可以在生成剪辑的目标视频之后,对目标视频中拼接处的视频帧进行调节,用户能够通过浏览视频拼接处视频帧的缩略图来准确地调整调节控件的位置,进而达到对目标视频中的拼接处做准确调整的目的。
可选地,所述第一更新模块还用于执行以下至少一个步骤:
更新所述第一视频图像序列的拼接结束视频帧;
更新所述第二视频图像序列的拼接起始视频帧;
增加或减少所述第一视频图像序列的拼接视频帧;
增加或减少所述第二视频图像序列的拼接视频帧。
本实施例中,可以根据用户实际需要,增加、减少或更新拼接处的起始视频帧和结束视频帧,使得拼接处的视频帧更合适。
可选地,所述第一视频处理装置300还包括:
第六接收模块,用于接收用户对所述目标视频的第七输入;
第五显示模块,用于响应于所述第七输入,在所述第一视频处理装置显示所述目标视频的第一视频编辑窗口;
第七接收模块,用于接收用户对所述第一视频编辑窗口的第八输入;
第二更新模块,用于响应于所述第八输入,根据编辑信息更新所述目标视频,所述编辑信息是根据所述第八输入确定的;
发送模块,用于将所述编辑信息发送至第二视频处理装置,以使所述第二视频处理装置根据所述编辑信息同步更新所述目标视频。
在本申请实施例中,可以通过多个视频处理装置对目标视频进行编辑,多个视频处理装置可以进行不同功能的多种视频编辑操作,从而满足用户在剪辑视频过程中需要多人协作的需求,以提高编辑效率。
本申请实施例中的视频处理装置可以是装置,也可以是终端中的部件、集成电路、或芯片。该装置可以是移动电子设备,也可以为非移动电子设备。示例性的,移动电子设备可以为手机、平板电脑、笔记本电脑、掌上电脑、车载电子设备、可穿戴设备、超级移动个人计算机(ultra-mobile personal computer, UMPC)、上网本或者个人数字助理(personal digital assistant,PDA)等,非移动电子设备可以为个人计算机(personal computer,PC)、电视机(television,TV)、柜员机或者自助机等,本申请实施例不作具体限定。
本申请实施例中的视频处理装置可以为具有操作系统的装置。该操作系统可以为安卓(Android)操作系统,可以为iOS操作系统,还可以为其他可能的操作系统,本申请实施例不作具体限定。
本申请实施例提供的视频处理装置能够实现上述方法实施例实现的各个过程,为避免重复,这里不再赘述。
可选地,如图6所示,本申请实施例还提供一种电子设备2000,包括处理器2002,存储器2001,存储在存储器2001上并可在所述处理器2002上运行的程序或指令,该程序或指令被处理器2002执行时实现上述视频处理方法实施例的各个过程,且能达到相同的技术效果,为避免重复,这里不再赘述。
需要注意的是,本申请实施例中的电子设备包括上述所述的移动电子设备和非移动电子设备。
图7为实现本申请实施例的一种电子设备的硬件结构示意图。
该电子设备1000包括但不限于:射频单元1001、网络模块1002、音频输出单元1003、输入单元1004、传感器1005、显示单元1006、用户输入单元1007、接口单元1008、存储器1009、以及处理器1010等部件。
本领域技术人员可以理解,电子设备1000还可以包括给各个部件供电的电源(比如电池),电源可以通过电源管理系统与处理器1010逻辑相连,从而通过电源管理系统实现管理充电、放电、以及功耗管理等功能。图7中示出的电子设备结构并不构成对电子设备的限定,电子设备可以包括比图示更多或更少的部件,或者组合某些部件,或者不同的部件布置,在此不再赘述。
其中,用户输入单元1007,用于接收用户对第一视频处理装置的第一输入;
显示单元1006,用于响应于所述第一输入,在视频预览界面显示所述第一视频处理装置的第一摄像头采集的第一视频图像序列;
处理器1010,用于在视频预览界面显示第二视频处理装置的第二摄像头采集的第二视频图像序列的情况下,根据所述第一视频图像序列和所述第二视频图像序列,生成目标视频;
其中,所述目标视频包括所述第一视频图像序列中的至少一帧第一视频图像和所述第二视频图像序列中的至少一帧第二视频图像。
在本申请实施例中,可以在视频预览界面显示不同视频处理装置的摄像头所采集的视频图像序列,并根据不同视频处理装置的摄像头所各自采集的第一视频图像序列和第二视频图像序列,来进行视频剪辑,生成包括至少一帧第一视频图像和至少一帧第二视频图像的目标视频,其中,至少一帧第一视频图像来自第一视频处理装置生成的第一视频图像序列,而至少一帧第二视频图像则来自第二视频处理装置生成的第二视频图像序列。本申请实施例的视频处理方法能够在视频预览界面显示不同视频处理装置的摄像头各自采集的视频图像序列的情况下,对不同视频处理装置生成的不同视频图像序列进行视频剪辑,从而生成目标视频,无需采用专业的视频剪辑软件,降低了视频剪辑的操作难度和复杂度。
可选地,所述视频预览界面包括主窗口和第一子窗口,所述主窗口用于显示所述第一视频图像序列,所述第一子窗口用于显示所述第二视频图像序列;
用户输入单元1007,用于接收用户对所述第一子窗口的第二输入;
处理器1010,用于响应于所述第二输入,交换所述主窗口和所述第一子窗口中的显示内容;将所述主窗口中显示的至少一帧第一视频图像和至少一帧第二视频图像进行视频拼接,得到所述目标视频。
本实施例中,在录制视频时,可以通过对第一子窗口进行第二输入的方式,来交换主窗口和第一子窗口的显示内容,即使得主窗口切换显示为第二摄像头所采集的视频图像序列,使得第一子窗口切换显示为第一摄像头所采集的视频图像序列,由于不同视频处理装置可以从不同角度进行同一场景的视频拍摄,从而可以实现视频录制过程中的机位切换;可以基于主窗口内显示的内容得到目标视频,可选为将主窗口中所显示的视频按照显示顺序依次进行拼接,从而得到目标视频,可以基于至少两个视频处理装置来对同一场景进行录制,不仅降低了剪辑视频的操作难度和复杂度,而且提升了视频处理效率。
可选地,显示单元1006,用于显示至少一个装置标识,所述装置标识用于指示与所述第一视频处理装置通信连接的视频处理装置;
用户输入单元1007,用于接收用户对所述至少一个装置标识中的目标装置标识的第三输入;
显示单元1006,用于响应于所述第三输入,在第二子窗口中,显示第三视频处理装置的第三摄像头采集的第三视频图像序列,所述第三视频处理装置为所述目标装置标识指示的视频处理装置。
在本申请实施例中,通过显示用于指示与第一视频处理装置通信连接的视频处理装置的装置标识,并接收用户对该装置标识中目标装置标识的第三输入,来响应于该第三输入,在子窗口中显示该目标装置标识指示的视频处理装置的第三摄像头所采集的第三视频图像序列,实现以多机位的方式进行视频录制,并对不同机位的视频图像进行剪辑生成目标视频,通过将多个视频处理装置与第一视频处理装置进行通信连接,能够实现一边显示录制的视频,一边剪辑视频的功能,将主窗口中显示的视频作为目标视频(所见即所得),简化了视频编辑的复杂度。
可选地,处理器1010,用于根据所述第三视频图像序列和所述第一视频图像序列的图像内容,确定所述第三摄像头的相对拍摄方位,所述相对拍摄方位为所述第三摄像头相对于所述第一摄像头的拍摄方位;根据所述相对拍摄方位,确定所述第二子窗口的目标显示位置。
在本申请实施例中,可以根据所述第三视频图像序列和所述第一视频图像序列的图像内容,确定所述第三摄像头相对于所述第一摄像头的拍摄方位;从而基于该拍摄方位以及主窗口的位置,确定所述第二子窗口的目标显示位置,使得用户通过视频预览界面中第二子窗口与主窗口之间的相对位置关系,来识别各子窗口对应的摄像头的拍摄角度。
可选地,用户输入单元1007,用于接收用户对所述视频预览界面的第四输入;
处理器1010,用于在所述第一视频图像序列和所述第二视频图像序列为录像过程中实时采集的视频图像序列的情况下,响应于所述第四输入,控制所述第一摄像头和所述第二摄像头停止采集视频图像;在所述第一视频图像序列为已录制的第一视频中的视频图像,且所述第二视频图像序列为已录制的第二视频中的视频图像的情况下,响应于所述第四输入,停止播放所述第一视频和所述第二视频。
在本申请实施例中,通过对所述视频预览界面的第四输入,可以在主窗口和子窗口显示的视频内容为实时采集的视频图像序列的情况下,响应于所述第四输入,控制各窗口对应的摄像头停止采集视频图像;在主窗口和子窗口显示的视频内容为已录制的视频中的视频图像的情况下,可以停止各个窗口播放已录制的各个视频,通过对视频预览界面的一键输入,实现对多个摄像头所采集的视频图像的停止录制,或者停止播放。
可选地,用户输入单元1007,用于接收用户对所述目标视频的第五输入;
显示单元1006,用于响应于所述第五输入,显示视频调节窗口,所述视频调节窗口中包括调节控件,至少一个第一视频缩略图和至少一个第二视频缩略图;所述至少一个第一视频缩略图为所述至少一帧第一视频图像的缩略图,所述至少一个第二视频缩略图为所述至少一帧第二视频图像的缩略图,所述调节控件用于更新所述目标视频的视频帧;
用户输入单元1007,用于接收用户对所述调节控件的第六输入;
处理器1010,用于响应于所述第六输入,更新所述调节控件的显示位置,并根据更新后的所述调节控件的显示位置,更新所述目标视频的视频帧。
在本申请实施例中,可以在生成剪辑的目标视频之后,对目标视频中拼接处的视频帧进行调节,用户能够通过浏览视频拼接处视频帧的缩略图来准确地调整调节控件的位置,进而达到对目标视频中的拼接处做出准确调整的目的。
可选地,处理器1010,用于更新所述第一视频图像序列的拼接结束视频帧;更新所述第二视频图像序列的拼接起始视频帧;增加或减少所述第一视频图像序列的拼接视频帧;增加或减少所述第二视频图像序列的拼接视频帧。
本实施例中,可以根据用户实际需要,增加、减少或更新拼接处的起始视频帧和结束视频帧,使得拼接处的视频帧更合适。
可选地,用户输入单元1007,用于接收用户对所述目标视频的第七输入;
显示单元1006,用于响应于所述第七输入,在所述第一视频处理装置显示所述目标视频的第一视频编辑窗口;
用户输入单元1007,用于接收用户对所述第一视频编辑窗口的第八输入;
处理器1010,用于响应于所述第八输入,根据编辑信息更新所述目标视频,所述编辑信息是根据所述第八输入确定的;
射频单元1001,用于将所述编辑信息发送至第二视频处理装置,以使所述第二视频处理装置根据所述编辑信息同步更新所述目标视频。
在本申请实施例中,可以通过多个视频处理装置对目标视频进行编辑,多个视频处理装置可以进行不同功能的多种视频编辑操作,从而满足用户在剪辑视频过程中需要多人协作的需求,以提高编辑效率。
应理解的是,本申请实施例中,输入单元1004可以包括图形处理器(Graphics Processing Unit,GPU)10041和麦克风10042,图形处理器10041对在视频捕获模式或图像捕获模式中由图像捕获装置(如摄像头)获得的静态图片或视频的图像数据进行处理。显示单元1006可包括显示面板10061,可以采用液晶显示器、有机发光二极管等形式来配置显示面板10061。用户输入单元1007包括触控面板10071以及其他输入设备10072。触控面板10071,也称为触摸屏。触控面板10071可包括触摸检测装置和触摸控制器两个部分。其他输入设备10072可以包括但不限于物理键盘、功能键(比如音量控制按键、开关按键等)、轨迹球、鼠标、操作杆,在此不再赘述。存储器1009可用于存储软件程序以及各种数据,包括但不限于应用程序和操作系统。处理器1010可集成应用处理器和调制解调处理器,其中,应用处理器主要处理操作系统、用户界面和应用程序等,调制解调处理器主要处理无线通信。可以理解的是,上述调制解调处理器也可以不集成到处理器1010中。
本申请实施例还提供一种可读存储介质,所述可读存储介质上存储有程序或指令,该程序或指令被处理器执行时实现上述视频处理方法实施例的各个过程,且能达到相同的技术效果,为避免重复,这里不再赘述。
其中,所述处理器为上述实施例中所述的电子设备中的处理器。所述可读存储介质,包括计算机可读存储介质,如计算机只读存储器(Read-Only Memory,ROM)、随机存取存储器(Random Access Memory,RAM)、磁碟或者光盘等。
本申请实施例另提供了一种芯片,所述芯片包括处理器和通信接口,所述通信接口和所述处理器耦合,所述处理器用于运行程序或指令,实现上述视频处理方法实施例的各个过程,且能达到相同的技术效果,为避免重复,这里不再赘述。
应理解,本申请实施例提到的芯片还可以称为系统级芯片、系统芯片、芯片系统或片上系统芯片等。
需要说明的是,在本文中,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、物品或者装置不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、物品或者装置所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括该要素的过程、方法、物品或者装置中还存在另外的相同要素。此外,需要指出的是,本申请实施方式中的方法和装置的范围不限按示出或讨论的顺序来执行功能,还可包括根据所涉及的功能按基本同时的方式或按相反的顺序来执行功能,例如,可以按不同于所描述的次序来执行所描述的方法,并且还可以添加、省去、或组合各种步骤。另外,参照某些示例所描述的特征可在其他示例中被组合。
通过以上的实施方式的描述,本领域的技术人员可以清楚地了解到上述实施例方法可借助软件加必需的通用硬件平台的方式来实现,当然也可以通过硬件,但很多情况下前者是更佳的实施方式。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分可以以计算机软件产品的形式体现出来,该计算机软件产品存储在一个存储介质(如ROM/RAM、磁碟、光盘)中,包括若干指令用以使得一台终端(可以是手机,计算机,服务器,或者网络设备等)执行本申请各个实施例所述的方法。
上面结合附图对本申请的实施例进行了描述,但是本申请并不局限于上述的可选实施方式,上述的可选实施方式仅仅是示意性的,而不是限制性的,本领域的普通技术人员在本申请的启示下,在不脱离本申请宗旨和权利要求所保护的范围情况下,还可做出很多形式,均属于本申请的保护之内。

Claims (19)

  1. 一种视频处理方法,所述视频处理方法包括:
    接收用户对第一视频处理装置的第一输入;
    响应于所述第一输入,在视频预览界面显示所述第一视频处理装置的第一摄像头采集的第一视频图像序列;
    在视频预览界面显示第二视频处理装置的第二摄像头采集的第二视频图像序列的情况下,根据所述第一视频图像序列和所述第二视频图像序列,生成目标视频;
    其中,所述目标视频包括所述第一视频图像序列中的至少一帧第一视频图像和所述第二视频图像序列中的至少一帧第二视频图像。
  2. 根据权利要求1所述的视频处理方法,其中,所述视频预览界面包括主窗口和第一子窗口,所述主窗口用于显示所述第一视频图像序列,所述第一子窗口用于显示所述第二视频图像序列;
    所述根据所述第一视频图像序列和所述第二视频图像序列,生成目标视频,包括:
    接收用户对所述第一子窗口的第二输入;
    响应于所述第二输入,交换所述主窗口和所述第一子窗口中的显示内容;
    将所述主窗口中显示的至少一帧第一视频图像和至少一帧第二视频图像进行视频拼接,得到所述目标视频。
  3. 根据权利要求1所述的视频处理方法,其中,所述视频处理方法还包括:
    显示至少一个装置标识,所述装置标识用于指示与所述第一视频处理装置通信连接的视频处理装置;
    接收用户对所述至少一个装置标识中的目标装置标识的第三输入;
    响应于所述第三输入,在第二子窗口中,显示第三视频处理装置的第三摄像头采集的第三视频图像序列,所述第三视频处理装置为所述目标装置标识指示的视频处理装置。
  4. 根据权利要求3所述的视频处理方法,其中,所述视频处理方法还包括:
    根据所述第三视频图像序列和所述第一视频图像序列的图像内容,确定所述第三摄像头的相对拍摄方位,所述相对拍摄方位为所述第三摄像头相对于所述第一摄像头的拍摄方位;
    根据所述相对拍摄方位,确定所述第二子窗口的目标显示位置。
  5. 根据权利要求1所述的视频处理方法,其中,所述视频处理方法还包括:
    接收用户对所述视频预览界面的第四输入;
    在所述第一视频图像序列和所述第二视频图像序列为录像过程中实时采集的视频图像序列的情况下,响应于所述第四输入,控制所述第一摄像头和所述第二摄像头停止采集视频图像;
    在所述第一视频图像序列为已录制的第一视频中的视频图像,且所述第二视频图像序列为已录制的第二视频中的视频图像的情况下,响应于所述第四输入,停止播放所述第一视频和所述第二视频。
  6. 根据权利要求1所述的视频处理方法,其中,所述视频处理方法还包括:
    接收用户对所述目标视频的第五输入;
    响应于所述第五输入,显示视频调节窗口,所述视频调节窗口中包括调节控件,至少一个第一视频缩略图和至少一个第二视频缩略图;所述至少一个第一视频缩略图为所述至少一帧第一视频图像的缩略图,所述至少一个第二视频缩略图为所述至少一帧第二视频图像的缩略图,所述调节控件用于更新所述目标视频的视频帧;
    接收用户对所述调节控件的第六输入;
    响应于所述第六输入,更新所述调节控件的显示位置,并根据更新后的所述调节控件的显示位置,更新所述目标视频的视频帧。
  7. 根据权利要求6所述的视频处理方法,其中,所述更新所述目标视频的视频帧,包括以下至少一项:
    更新所述第一视频图像序列的拼接结束视频帧;
    更新所述第二视频图像序列的拼接起始视频帧;
    增加或减少所述第一视频图像序列的拼接视频帧;
    增加或减少所述第二视频图像序列的拼接视频帧。
  8. 根据权利要求1所述的视频处理方法,其中,所述根据所述第一视频图像序列和所述第二视频图像序列,生成目标视频之后,所述视频处理方法还包括:
    接收用户对所述目标视频的第七输入;
    响应于所述第七输入,在所述第一视频处理装置显示所述目标视频的第一视频编辑窗口;
    接收用户对所述第一视频编辑窗口的第八输入;
    响应于所述第八输入,根据编辑信息更新所述目标视频,所述编辑信息是根据所述第八输入确定的;
    将所述编辑信息发送至第二视频处理装置,以使所述第二视频处理装置根据所述编辑信息同步更新所述目标视频。
  9. 一种第一视频处理装置,包括:
    第一接收模块,用于接收用户对第一视频处理装置的第一输入;
    第一显示模块,用于响应于所述第一输入,在视频预览界面显示所述第一视频处理装置的第一摄像头采集的第一视频图像序列;
    生成模块,用于在视频预览界面显示第二视频处理装置的第二摄像头采集的第二视频图像序列的情况下,根据所述第一视频图像序列和所述第二视频图像序列,生成目标视频;
    其中,所述目标视频包括所述第一视频图像序列中的至少一帧第一视频图像和所述第二视频图像序列中的至少一帧第二视频图像。
  10. 根据权利要求9所述的视频处理装置,其中,所述视频预览界面包括主窗口和第一子窗口,所述主窗口用于显示所述第一视频图像序列,所述第一子窗口用于显示所述第二视频图像序列;
    所述生成模块包括:
    第一接收子模块,用于接收用户对所述第一子窗口的第二输入;
    交换子模块,用于响应于所述第二输入,交换所述主窗口和所述第一子窗口中的显示内容;
    拼接子模块,用于将所述主窗口中显示的至少一帧第一视频图像和至少一帧第二视频图像进行视频拼接,得到所述目标视频。
  11. 根据权利要求9所述的视频处理装置,其中,所述第一视频处理装置还包括:
    第二显示模块,用于显示至少一个装置标识,所述装置标识用于指示与所述第一视频处理装置通信连接的视频处理装置;
    第二接收模块,用于接收用户对所述至少一个装置标识中的目标装置标识的第三输入;
    第三显示模块,用于响应于所述第三输入,在第二子窗口中,显示第三视频处理装置的第三摄像头采集的第三视频图像序列,所述第三视频处理装置为所述目标装置标识指示的视频处理装置。
  12. 根据权利要求11所述的视频处理装置,其中,所述第一视频处理装置还包括:
    第一确定模块,用于根据所述第三视频图像序列和所述第一视频图像序列的图像内容,确定所述第三摄像头的相对拍摄方位,所述相对拍摄方位为所述第三摄像头相对于所述第一摄像头的拍摄方位;
    第二确定模块,用于根据所述相对拍摄方位,确定所述第二子窗口的目标显示位置。
  13. 根据权利要求9所述的视频处理装置,其中,所述第一视频处理装置还包括:
    第三接收模块,用于接收用户对所述视频预览界面的第四输入;
    第一控制模块,用于在所述第一视频图像序列和所述第二视频图像序列为录像过程中实时采集的视频图像序列的情况下,响应于所述第四输入,控制所述第一摄像头和所述第二摄像头停止采集视频图像;
第二控制模块,用于在所述第一视频图像序列为已录制的第一视频中的视频图像,且所述第二视频图像序列为已录制的第二视频中的视频图像的情况下,响应于所述第四输入,停止播放所述第一视频和所述第二视频。
  14. 根据权利要求9所述的视频处理装置,其中,所述第一视频处理装置还包括:
    第四接收模块,用于接收用户对所述目标视频的第五输入;
    第四显示模块,用于响应于所述第五输入,显示视频调节窗口,所述视频调节窗口中包括调节控件,至少一个第一视频缩略图和至少一个第二视频缩略图;所述至少一个第一视频缩略图为所述至少一帧第一视频图像的缩略图,所述至少一个第二视频缩略图为所述至少一帧第二视频图像的缩略图,所述调节控件用于更新所述目标视频的视频帧;
    第五接收模块,用于接收用户对所述调节控件的第六输入;
    第一更新模块,用于响应于所述第六输入,更新所述调节控件的显示位置,并根据更新后的所述调节控件的显示位置,更新所述目标视频的视频帧。
  15. 根据权利要求14所述的视频处理装置,其中,所述第一更新模块还用于执行以下至少一个步骤:
    更新所述第一视频图像序列的拼接结束视频帧;
    更新所述第二视频图像序列的拼接起始视频帧;
    增加或减少所述第一视频图像序列的拼接视频帧;
    增加或减少所述第二视频图像序列的拼接视频帧。
  16. 根据权利要求9所述的视频处理装置,其中,所述第一视频处理装置还包括:
    第六接收模块,用于接收用户对所述目标视频的第七输入;
    第五显示模块,用于响应于所述第七输入,在所述第一视频处理装置显示所述目标视频的第一视频编辑窗口;
    第七接收模块,用于接收用户对所述第一视频编辑窗口的第八输入;
    第二更新模块,用于响应于所述第八输入,根据编辑信息更新所述目标视频,所述编辑信息是根据所述第八输入确定的;
    发送模块,用于将所述编辑信息发送至第二视频处理装置,以使所述第二视频处理装置根据所述编辑信息同步更新所述目标视频。
  17. 一种电子设备,其中,包括处理器,存储器及存储在所述存储器上并可在所述处理器上运行的程序或指令,所述程序或指令被所述处理器执行时实现如权利要求1至8中任意一项所述的视频处理方法的步骤。
  18. 一种可读存储介质,其中,所述可读存储介质上存储程序或指令,所述程序或指令被处理器执行时实现如权利要求1至8中任意一项所述的视频处理方法的步骤。
  19. 一种芯片,其中,所述芯片包括处理器和通信接口,所述通信接口和所述处理器耦合,所述处理器用于运行程序或指令,实现如权利要求1至8中任意一项所述的视频处理方法的步骤。
PCT/CN2022/118527 2021-09-16 2022-09-13 视频处理方法、装置、电子设备及可读存储介质 WO2023040844A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111091200.9 2021-09-16
CN202111091200.9A CN113794923A (zh) 2021-09-16 2021-09-16 视频处理方法、装置、电子设备及可读存储介质

Publications (3)

Publication Number Publication Date
WO2023040844A1 true WO2023040844A1 (zh) 2023-03-23
WO2023040844A9 WO2023040844A9 (zh) 2023-05-04
WO2023040844A8 WO2023040844A8 (zh) 2023-11-02


