CN113794923A - Video processing method and device, electronic equipment and readable storage medium - Google Patents
Video processing method and device, electronic equipment and readable storage medium
- Publication number
- CN113794923A (application CN202111091200.9A)
- Authority
- CN
- China
- Prior art keywords
- video
- input
- window
- video processing
- target
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/431—Generation of visual interfaces for content selection or interaction; Content or additional data rendering
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
- H04N21/44016—Processing of video elementary streams involving splicing one content stream with another content stream, e.g. for substituting a video clip
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Television Signal Processing For Recording (AREA)
- User Interface Of Digital Computer (AREA)
- Studio Devices (AREA)
Abstract
The application discloses a video processing method and device, and belongs to the field of video processing. The method comprises the following steps: receiving a first input of a user to a first video processing device; in response to the first input, displaying a first video image sequence acquired by a first camera of the first video processing device on a video preview interface; and in a case where a second video image sequence acquired by a second camera of a second video processing device is displayed on the video preview interface, generating a target video according to the first video image sequence and the second video image sequence; wherein the target video comprises at least one frame of a first video image in the first video image sequence and at least one frame of a second video image in the second video image sequence.
Description
Technical Field
The application belongs to the field of video processing, and particularly relates to a video processing method and device, electronic equipment and a readable storage medium.
Background
With the development of 5G technology, the speed and quality of real-time video transmission have improved greatly, and the imaging quality of cameras in electronic devices keeps rising. Recording and editing video on electronic devices has therefore become a product development trend.
At present, video editing is mainly performed on a Personal Computer (PC): video recorded by a mobile phone is edited on the PC with professional video editing software. However, the editing operations are relatively complex and the threshold is high for ordinary users; the approach is better suited to professional users.
Disclosure of Invention
An embodiment of the present application provides a video processing method, an apparatus, an electronic device, and a readable storage medium, which can solve the problems in the related art that the operation is complex and difficult when video clipping is performed.
In a first aspect, an embodiment of the present application provides a video processing method, where the method includes:
receiving a first input of a user to a first video processing device;
responding to the first input, and displaying a first video image sequence acquired by a first camera of the first video processing device on a video preview interface;
under the condition that a second video image sequence acquired by a second camera of a second video processing device is displayed on a video preview interface, generating a target video according to the first video image sequence and the second video image sequence;
wherein the target video comprises at least one frame of a first video image in the first sequence of video images and at least one frame of a second video image in the second sequence of video images.
In a second aspect, an embodiment of the present application provides a first video processing apparatus, including:
the first receiving module is used for receiving a first input of a user to the first video processing device;
the first display module is used for responding to the first input and displaying a first video image sequence acquired by a first camera of the first video processing device on a video preview interface;
the generation module is used for generating a target video according to the first video image sequence and the second video image sequence under the condition that a second video image sequence acquired by a second camera of a second video processing device is displayed on a video preview interface;
wherein the target video comprises at least one frame of a first video image in the first sequence of video images and at least one frame of a second video image in the second sequence of video images.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a processor, a memory, and a program or instructions stored on the memory and executable on the processor, and when executed by the processor, the program or instructions implement the steps of the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method according to the first aspect.
In this embodiment, video image sequences captured by cameras of different video processing apparatuses may be displayed on a video preview interface, and video clipping may be performed according to a first video image sequence and a second video image sequence captured by the cameras of the different video processing apparatuses, to generate a target video including at least one frame of a first video image and at least one frame of a second video image, where the at least one frame of the first video image comes from the first video image sequence generated by the first video processing apparatus, and the at least one frame of the second video image comes from the second video image sequence generated by the second video processing apparatus. With this video processing method, when the video image sequences acquired by the cameras of different video processing devices are displayed on the video preview interface, the different video image sequences generated by the different devices can be clipped together to generate the target video without professional video editing software, which reduces the difficulty and complexity of video editing.
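As an illustration of this flow (not part of the patent text), the following Python sketch models each video image sequence as a list of frames and builds the target video by taking at least one frame from each sequence; the function name and the switch point are hypothetical.

```python
# Minimal sketch of the claimed flow, assuming each video image
# sequence is modeled as an ordered list of frame identifiers.
# All names here are illustrative, not from the patent text.

def generate_target_video(first_sequence, second_sequence, switch_index):
    """Concatenate at least one frame from each sequence.

    Frames before switch_index come from the first device's sequence;
    frames from switch_index onward come from the second device's.
    """
    first_part = first_sequence[:switch_index]    # >= 1 frame of first video
    second_part = second_sequence[switch_index:]  # >= 1 frame of second video
    return first_part + second_part

# Example: frames recorded by the cameras of two devices.
seq_m = [("phone_M", i) for i in range(6)]
seq_b = [("phone_B", i) for i in range(6)]
print(generate_target_video(seq_m, seq_b, switch_index=3))
```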
Drawings
Fig. 1 is a flowchart of a video processing method provided in an embodiment of the present application;
FIG. 2A is a first schematic diagram of a video processing interface provided by an embodiment of the present application;
FIG. 2B is a second schematic diagram of a video processing interface provided by an embodiment of the present application;
FIG. 2C is a third schematic diagram of a video processing interface provided by an embodiment of the present application;
FIG. 2D is a fourth schematic diagram of a video processing interface provided by an embodiment of the present application;
FIG. 2E is a fifth schematic diagram of a video processing interface provided by an embodiment of the present application;
FIG. 2F is a sixth schematic diagram of a video processing interface provided by an embodiment of the present application;
FIG. 2G is a seventh schematic diagram of a video processing interface provided by an embodiment of the present application;
FIG. 2H is an eighth schematic diagram of a video processing interface provided by an embodiment of the present application;
FIG. 2I is a ninth schematic diagram of a video processing interface provided by an embodiment of the present application;
FIG. 2J is a tenth schematic diagram of a video processing interface provided by an embodiment of the present application;
FIG. 2K is an eleventh schematic diagram of a video processing interface provided by an embodiment of the present application;
FIG. 2L is a twelfth schematic diagram of a video processing interface provided by an embodiment of the present application;
FIG. 2M is a thirteenth schematic diagram of a video processing interface provided by an embodiment of the present application;
Fig. 3 is a second flowchart of a video processing method according to an embodiment of the present application;
FIG. 4A is a fourteenth schematic diagram of a video processing interface provided by an embodiment of the present application;
FIG. 4B is a fifteenth schematic diagram of a video processing interface provided by an embodiment of the present application;
fig. 5 is a block diagram of a video processing apparatus provided in an embodiment of the present application;
fig. 6 is a schematic hardware structure diagram of an electronic device according to an embodiment of the present application;
fig. 7 is a second schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly below with reference to the drawings in the embodiments of the present application. It is obvious that the described embodiments are some, but not all, of the embodiments of the present application. All other embodiments that can be derived by one of ordinary skill in the art from the embodiments given herein fall within the scope of the present application.
The terms "first", "second", and the like in the description and claims of the present application are used to distinguish between similar elements and not necessarily to describe a particular sequential or chronological order. It should be understood that the data so used may be interchanged under appropriate circumstances, so that the embodiments of the application can be practiced in sequences other than those illustrated or described herein. The terms "first", "second", and the like do not limit the number of objects; for example, the first object may be one or more than one. In addition, "and/or" in the specification and claims denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
The video processing method provided by the embodiment of the present application is described in detail below with reference to the accompanying drawings through specific embodiments and application scenarios thereof.
Referring to fig. 1, a flowchart of a video processing method according to an embodiment of the present application is shown. The method may be applied to a first video processing apparatus and may specifically include the following steps:
Step 101, receiving a first input of a user to the first video processing device.
Illustratively, the first input may include, but is not limited to: a click input of the user on the first video processing device, a voice instruction input by the user, or a specific gesture input by the user, which may be determined according to actual use requirements and is not limited in the embodiments of the application.
The specific gesture in the embodiments of the application may be any one of a single-click gesture, a sliding gesture, a dragging gesture, a pressure recognition gesture, a long-press gesture, an area change gesture, a double-press gesture, and a double-click gesture; the click input may be a single-click input, a double-click input, an input of any number of clicks, or the like, and may also be a long-press input or a short-press input.
Step 102, in response to the first input, displaying a first video image sequence acquired by a first camera of the first video processing device on a video preview interface.
For example, the video preview interface may be a shooting preview interface of a video, and the first video image sequence here may be a video image sequence acquired by the first camera in real time, and the video recorded in real time may be displayed frame by frame at the shooting preview interface.
For example, the video preview interface may also be a play preview interface of a generated video, and the first video image sequence here may be a video image sequence captured by the first camera in advance, and may be displayed in the play preview interface frame by frame for a video that has been recorded.
Step 103, in a case where a second video image sequence acquired by a second camera of a second video processing device is displayed on the video preview interface, generating a target video according to the first video image sequence and the second video image sequence.
In this embodiment, the second video image sequence is similar to the first video image sequence, and may be a video image sequence of a video recorded in real time, or a video image sequence of a video that has been recorded.
It should be noted that the second video processing apparatus is communicatively connected to the first video processing apparatus, so that the first video processing apparatus can display not only the first video image sequence of the first video processing apparatus but also the second video image sequence generated by another video processing apparatus (here, the second video processing apparatus) on the video preview interface.
For convenience of understanding, the following description takes the video image sequence as a video image sequence acquired by each camera in real time, and the video preview interface as a shooting preview interface as an example, and when the video image sequence is a video image sequence of a recorded video, the execution principle of the method of the embodiment of the present application is similar, and therefore, the description is omitted.
In addition, the display order of the first video image sequence and the second video image sequence in the video preview interface is not limited in the present application.
Wherein the target video comprises at least one frame of a first video image in the first sequence of video images and at least one frame of a second video image in the second sequence of video images.
In this embodiment, video image sequences captured by cameras of different video processing apparatuses may be displayed on a video preview interface, and video clipping may be performed according to a first video image sequence and a second video image sequence captured by the cameras of the different video processing apparatuses, to generate a target video including at least one frame of a first video image and at least one frame of a second video image, where the at least one frame of the first video image comes from the first video image sequence generated by the first video processing apparatus, and the at least one frame of the second video image comes from the second video image sequence generated by the second video processing apparatus. With this video processing method, when the video image sequences acquired by the cameras of different video processing devices are displayed on the video preview interface, the different video image sequences generated by the different devices can be clipped together to generate the target video without professional video editing software, which reduces the difficulty and complexity of video editing.
Optionally, the video preview interface includes a main window and a first sub-window, where the main window is used to display the first video image sequence, and the first sub-window is used to display the second video image sequence.
Optionally, different windows in the video preview interface are used for displaying video data recorded by different video processing devices in real time.
The shooting preview interface may include a main window and at least one sub-window. Optionally, there may be multiple sub-windows, which are used to display the video image sequences recorded in real time by the multiple other video processing devices (i.e., video processing devices other than the first) communicatively connected to the first video processing device;
in addition, the main window and the sub-window in the shooting preview interface display video image sequences of different video processing devices, and different sub-windows can also display video image sequences of different video processing devices.
The following description will take the first video processing apparatus as a mobile phone M, and the other video processing apparatuses communicatively connected to the first video processing apparatus include a mobile phone a, a mobile phone B, and a mobile phone C as examples.
In addition, the video image sequences displayed in different windows in the shooting preview interface can be video image sequences obtained by real-time recording at different shooting angles by different mobile phones in the same shooting scene, and can also be video image sequences of multi-camera video in different shooting scenes. The shooting scene can be a sports scene, such as basketball shooting, football playing and the like.
The video processing device in the embodiment of the present application may be a mobile terminal, including a mobile phone, a tablet, and the like. Exemplarily, as shown in fig. 2A, taking a mobile phone as an example, in the video interface 11 of the mobile phone camera, the user may enter a multi-camera video clip mode after zooming the screen with two fingers (for example, zooming out to the minimum), so as to display the shooting preview interface shown in fig. 2B. After the zooming operation, the shooting preview interface is divided into a plurality of windows: the larger window is the main window 21, which by default displays the image acquired by the camera of the first video processing device (for example, a mobile phone M); the remaining smaller windows are sub-windows. Fig. 2B shows 8 sub-windows (e.g., the sub-window 22), which default to a pending-connection state indicated by a plus sign, meaning that in this state the mobile phone M has not yet connected to another video processing device (e.g., another mobile phone) for multi-camera video clipping.
The connection mode between different video processing devices may be WiFi (wireless network), bluetooth, etc., hereinafter, WiFi connection is taken as an example for description, and the communication connection modes such as bluetooth are the same and will not be described again.
For example, the WiFi connection is established between the mobile phone M and the mobile phones a, B, and C, and the video data recorded by the mobile phones a, B, and C in real time can be transmitted to the mobile phone M in real time.
Illustratively, the multi-camera video clip mode requires multiple mobile phones to work together, so the first video processing device first needs to connect to multiple mobile phones. In the shooting preview interface shown in fig. 2C, i.e., the multi-camera video clip mode interface, the user can bring up the mobile phone search interface shown in fig. 2D by clicking any one of the sub-windows, here the sub-window 22.
After the user clicks any sub-window, the mobile phone M can establish a WiFi hotspot and wait for other mobile phones to connect. Other mobile phones that are also in the multi-camera video clip mode search for nearby WiFi signals. In the multi-camera video clip mode, a phone that has not clicked a sub-window only searches for nearby WiFi signals, while a phone that has clicked a sub-window establishes a WiFi hotspot. The WiFi hotspot may be a password-less WiFi hotspot.
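A minimal sketch of this hotspot behavior, assuming a simple dictionary payload for the hotspot information; the field names ("mode", "device_id") and class names are illustrative, not from the patent.

```python
# Sketch of the connection behavior described above. The hotspot
# payload fields are assumptions; the patent does not fix a format.

class MultiCamClipMode:
    def __init__(self, device_id):
        self.device_id = device_id
        self.hotspot = None

    def on_enter_mode(self):
        # In multi-camera video clip mode a phone only scans for
        # nearby WiFi signals until the user clicks a sub-window.
        return self.scan_nearby_wifi()

    def on_sub_window_clicked(self):
        # Clicking a sub-window turns this phone into the master:
        # it opens a password-less hotspot and waits for peers.
        self.hotspot = {
            "mode": "multi_camera_video_clip",
            "device_id": self.device_id,
            "password": None,
        }
        return self.hotspot

    def scan_nearby_wifi(self):
        return []  # placeholder: the platform WiFi scan goes here

master = MultiCamClipMode("phone_M")
print(master.on_sub_window_clicked())
```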
The shooting preview interface can be switched from the common video mode to the multi-camera video clip mode by a two-finger zoom, as in the multi-camera video clip mode interface of the mobile phone A shown in fig. 2E, that of the mobile phone B shown in fig. 2F, and that of the mobile phone C shown in fig. 2G. The main window in each of these interfaces displays the video content recorded by the respective mobile phone. The principle of fig. 2E, 2F, and 2G is similar to that of the multi-camera video clip mode interface of the mobile phone M shown in fig. 2C and is not described in detail here.
Since the same reference numerals in fig. 2A to 2M denote the same objects, the same reference numerals in different drawings will not be explained one by one, and the explanation of other drawings will be referred to.
In this embodiment, the hotspot information of the WiFi hotspot may carry some parameter information of the mobile phone M, for example, the hotspot information may include a parameter used to indicate that the mobile phone M is a multi-camera video clip mode, identification information of the mobile phone M, and the like.
In this embodiment, when two mobile phones connect through the WiFi hotspot in the multi-camera video clip mode for the first time, the WiFi connection may be performed with authentication; if it is not the first time the two mobile phones connect through the hotspot in this mode, the WiFi connection may be established directly without authentication.
When two mobile phones connect through a WiFi hotspot in the multi-camera video clip mode for the first time, the following method may be used: the other mobile phones in the multi-camera video clip mode (mobile phones other than the mobile phone M that have not established a WiFi hotspot), after searching for WiFi hotspots, actively connect to any hotspot whose hotspot information identifies it as a multi-camera video clip mode hotspot, and enter the authentication mode. Specifically, the other mobile phones send their own device information to the mobile phone M (also called the master mobile phone) during authentication and wait for the connection application of the master mobile phone. Once other mobile phones request to connect to the WiFi hotspot of the master mobile phone, as shown in fig. 2D, the master mobile phone may display the identification information of each mobile phone requesting authentication in the mobile phone search interface.
When it is not the first time two mobile phones connect through a WiFi hotspot in the multi-camera video clip mode, the connection may be implemented as follows: the other mobile phones in the multi-camera video clip mode, after searching for WiFi hotspots, actively connect to any hotspot identified as a multi-camera video clip mode hotspot, so that a WiFi connection is established between the mobile phone M and the other mobile phone directly.
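The first-time versus repeat connection logic might look roughly as follows; the remembered-hotspot set and the message format are assumptions for illustration only.

```python
# Sketch of the peer-side connect logic, assuming a remembered set
# of hotspot owners the phone has authenticated with before.

def connect_to_hotspot(hotspot, known_hotspots, my_info):
    if hotspot.get("mode") != "multi_camera_video_clip":
        return "ignored"  # not a multi-camera clip mode hotspot
    if hotspot["device_id"] in known_hotspots:
        # Not the first connection: join directly, no authentication.
        return "connected"
    # First connection: send own device info to the master phone and
    # wait for the master's connection application (authentication).
    message_to_master = {"device_info": my_info}
    return ("authenticating", message_to_master)

print(connect_to_hotspot(
    {"mode": "multi_camera_video_clip", "device_id": "phone_M"},
    known_hotspots=set(),
    my_info={"device_id": "phone_C"}))
```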
Optionally, the method according to the embodiment of the present application may include: displaying at least one device identification indicating a video processing device communicatively connected to the first video processing device; then, receiving a third input of the target device identification in the at least one device identification by the user; and finally, responding to the third input, and displaying a third video image sequence acquired by a third camera of a third video processing device in a second sub-window, wherein the third video processing device identifies the indicated video processing device for the target device.
Illustratively, the third input may include, but is not limited to: a click input of the user on the target device identifier, a voice instruction input by the user, or a specific gesture input by the user, which may be determined according to actual use requirements and is not limited in the embodiments of the application.
The specific gesture in the embodiments of the application may be any one of a single-click gesture, a sliding gesture, a dragging gesture, a pressure recognition gesture, a long-press gesture, an area change gesture, a double-press gesture, and a double-click gesture; the click input may be a single-click input, a double-click input, an input of any number of clicks, or the like, and may also be a long-press input or a short-press input.
For example, the device identifiers may be the identification information of the mobile phones displayed on the mobile phone search interface shown in fig. 2D, here mobile phone A, mobile phone B, and mobile phone C. The mobile phone search interface may also display a control 31 of the mobile phone M, so that when the identification information of each mobile phone requesting authentication is displayed, it can be laid out in the search interface according to the distance and direction of each mobile phone relative to the mobile phone M: the mobile phone C is closest to the mobile phone M, the mobile phone B is next, and the mobile phone A is farthest.
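A sketch of laying out the identifiers by relative distance and direction follows. The polar-to-screen mapping and the concrete distances are assumptions; the patent only requires that nearer phones appear closer to the phone-M control.

```python
# Sketch: map each phone's (distance, bearing) relative to phone M
# to screen coordinates around the phone-M control. All numbers and
# the conversion itself are illustrative assumptions.
import math

def identifier_position(center, distance_m, bearing_deg, scale=40.0):
    """Place a device identifier around the phone-M control.

    bearing_deg: compass-style bearing, 0 = straight up on screen.
    """
    r = distance_m * scale
    theta = math.radians(bearing_deg)
    return (center[0] + r * math.sin(theta),   # screen x
            center[1] - r * math.cos(theta))   # screen y (down-positive)

center = (540, 960)  # assumed pixel position of the phone-M control
for name, dist, bearing in [("phone_C", 1.0, 30),   # closest
                            ("phone_B", 2.0, 300),
                            ("phone_A", 3.5, 200)]: # farthest
    print(name, identifier_position(center, dist, bearing))
```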
In the mobile phone search interface of the mobile phone M, the user may drag the device identifier of the mobile phone to be connected (taking the mobile phone C as an example) into the to-be-connected area 32 and then click the "connect" control 33, whereupon the mobile phone M sends a connection application to the mobile phone C (e.g., the third video processing device). When the mobile phone C receives the connection application and "agree" is clicked in its multi-camera video clip mode, two-way communication is established between the mobile phone C and the mobile phone M. In addition, the mobile phone C does not need to perform multi-camera video clipping itself; it only needs to provide the video image sequence recorded in real time from its own shooting angle and transmit it to the mobile phone M. The mobile phone C can therefore exit the multi-camera video clip mode and simply display a preview of the video it is shooting.
Similarly, the mobile phone M may also connect to more other mobile phones for video recording and clipping through WiFi by clicking other sub-windows in fig. 2C, for example, the mobile phone M and the mobile phone B also establish WiFi connection, and the mobile phone M and the mobile phone a also establish WiFi connection. Then, the mobile phone a, the mobile phone B, and the mobile phone C, which establish WiFi connection with the mobile phone M, can transmit the respective real-time recorded videos to the mobile phone M through WiFi connection in real time.
As shown in fig. 2H, the sub-window 22 (e.g. the second sub-window) in the multi-camera recording and clipping mode interface of the mobile phone M is used to display a preview of the video recorded by the mobile phone C (e.g. the third video processing apparatus); the sub-window 23 (e.g., a first sub-window) is used for displaying a preview screen of a video recorded by the mobile phone B (e.g., a second video processing apparatus); the sub-window 24 is used for displaying a preview picture of a video recorded by the mobile phone A; the main window 21 is used to display a preview of a video recorded by the mobile phone M in an initial state.
In the embodiment of fig. 1, how to display the video image sequence of the second video processing apparatus is similar to the method for displaying the video recorded by the camera of the mobile phone C in the sub-window 22, which is exemplified here, and details are not repeated.
After the mobile phone M, the mobile phone a, the mobile phone B, and the mobile phone C all start recording, the three sub-windows and the main window display preview images of videos recorded in real time at each end.
In this example, the main window is used to display a preview of a video recorded by the first video processing apparatus, i.e. the mobile phone M, in an initial state, and the sub-window is used to display a preview of a video recorded by another mobile phone in communication connection with the mobile phone M; in other embodiments, the main window may not initially display any video recorded by the device, that is, the preview image of the video recorded by the mobile phone M is also displayed in a sub-window.
In the embodiment of the application, a device identifier indicating a video processing device communicatively connected to the first video processing device is displayed, a third input of the user to a target device identifier among the device identifiers is received, and, in response to the third input, a third video image sequence acquired by a third camera of the video processing device indicated by the target device identifier is displayed in a sub-window. Videos can thus be recorded from multiple camera positions, and the video images of the different camera positions can be clipped to generate the target video. By communicatively connecting a plurality of video processing devices to the first video processing device, videos can be clipped while the recorded videos are being displayed, with the video displayed in the main window serving as the target video (what you see is what you get), which simplifies video editing.
In addition, in the embodiment of the present application, by communicatively connecting the first video processing apparatus with at least one other video processing apparatus (i.e., a video processing apparatus other than the first video processing apparatus), the preview image of the video recorded in real time by each other video processing apparatus can be displayed in a sub-window of the video preview interface of the first video processing apparatus, with different sub-windows displaying the previews of different devices. Different videos captured by different video processing devices can thus be distinguished by their sub-windows. On the basis of the video recording function, videos can be clipped while being recorded through the mutual communication of a plurality of video processing devices, and the clipping is realized through the video picture of the main window in a what-you-see-is-what-you-get manner, which simplifies video editing. Moreover, by displaying the preview of the video recorded in real time by the first video processing device in the main window, the starting segment of the clipped target video comes from the video data recorded by the first video processing device, and the first video processing device acts as the control device of the video clip, so that the target video obtained through the main window better fits the video clip scene.
Alternatively, the main window may be larger in size relative to the sub-windows and located near the center of the video preview interface, thereby facilitating the user's browsing of the video content displayed in the main window.
Optionally, the method according to the embodiment of the present application may further include: determining a relative shooting orientation of the third camera according to the image contents of the third video image sequence and the first video image sequence, wherein the relative shooting orientation is the shooting orientation of the third camera relative to the first camera; and then, determining the target display position of the second sub-window according to the relative shooting orientation.
Alternatively, as shown in fig. 2H, the user may also change the position of the sub-window by dragging the sub-window with the video content displayed to another sub-window (which may or may not display the video content), where the position of the sub-window is determined according to the relative shooting orientation.
For example, in fig. 2H, the sub-window 22 displays the video recording picture of the mobile phone A, the sub-window 23 displays that of the mobile phone B, and the main window 21 displays that of the current mobile phone, i.e., the mobile phone M. Suppose the mobile phone A shoots the subject from the left side of the mobile phone M, the mobile phone B shoots the subject from the right side, and the mobile phone M shoots the subject from the front. The user can then move the sub-window 23 to any one of the small windows on the right side of the main window 21 to conveniently indicate the shooting orientation of the camera corresponding to the video picture in the sub-window 23 relative to that of the camera corresponding to the video picture in the main window 21.
In the above embodiment, the user may trigger the display of the interface of fig. 2D by clicking the sub-window 22 in fig. 2C, so as to implement the connection between the first video processing apparatus and the third video processing apparatus by operating fig. 2D; after the first video processing device is communicatively connected to the third video processing device, the sub-window 22 (second sub-window) is used to display a third sequence of video images captured by a third camera of the third video processing device. In this embodiment, the shooting orientation of the third camera with respect to the first camera may be determined according to the image content of the third video image sequence and the image content of the first video image sequence, so as to adjust the second sub-window to the corresponding position for displaying. The position of the sub-window can be automatically or manually adjusted according to the relative shooting direction.
In this embodiment of the present application, a shooting orientation of the third camera with respect to the first camera may be determined according to image contents of the third video image sequence and the first video image sequence; and determining the target display position of the second sub-window based on the shooting direction and the position of the main window, so that the user can identify the shooting angle of the camera corresponding to each sub-window through the relative position relationship between the second sub-window and the main window in the video preview interface.
For example, if the mobile phone M shoots the subject from directly in front and the mobile phone C shoots the subject from the northwest of the mobile phone M, the video content shot by the mobile phone C can be displayed in the sub-window 22.
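The patent does not specify the image-analysis algorithm behind this; the sketch below assumes a hypothetical per-view estimate of the subject's yaw angle (e.g., from face landmarks) and derives the relative orientation from the difference. All names and thresholds are illustrative.

```python
# Sketch: infer which side of the first camera the peer camera sits
# on, assuming a helper has estimated the subject's yaw in each view.

def relative_shooting_orientation(yaw_in_main_view, yaw_in_peer_view):
    """yaw_*: subject yaw in degrees, 0 = facing the camera.

    If the subject appears rotated further in the peer view, the
    peer camera sits to one side of the first camera.
    """
    delta = yaw_in_peer_view - yaw_in_main_view
    if delta > 15:
        return "left"
    if delta < -15:
        return "right"
    return "front"

def sub_window_slot(orientation):
    # Map the relative orientation to a display slot next to the
    # main window (slot names are illustrative).
    return {"left": "left_column", "right": "right_column",
            "front": "bottom_row"}[orientation]

o = relative_shooting_orientation(0.0, 40.0)
print(o, sub_window_slot(o))  # -> left left_column
```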
Optionally, the video processing method according to the embodiment of the present application further includes: receiving a fourth input of the video preview interface from the user; controlling the first camera and the second camera to stop collecting video images in response to the fourth input under the condition that the first video image sequence and the second video image sequence are video image sequences collected in real time in a video recording process; and in the case that the first video image sequence is a video image in a recorded first video and the second video image sequence is a video image in a recorded second video, stopping playing the first video and the second video in response to the fourth input.
Illustratively, the fourth input may include, but is not limited to: a click input of the user on the video preview interface, a voice instruction input by the user, or a specific gesture input by the user, which may be determined according to actual use requirements and is not limited in the embodiments of the application.
The specific gesture in the embodiments of the application may be any one of a single-click gesture, a sliding gesture, a dragging gesture, a pressure recognition gesture, a long-press gesture, an area change gesture, a double-press gesture, and a double-click gesture; the click input may be a single-click input, a double-click input, an input of any number of clicks, or the like, and may also be a long-press input or a short-press input.
The following description takes the first video image sequence and the second video image sequence as video image sequences acquired in real time during a video recording process as an example.
Illustratively, as shown in fig. 2I, the main window 21 has a preset control 31, and the start and stop of video recording can be controlled by clicking the preset control 31. Specifically, by clicking the preset control 31, the mobile phone M can control itself and the mobile phones A, B, and C connected to it to start or end recording; however, each of the mobile phones A, B, and C can only control the start or end of its own recording. As shown in fig. 2J, the conventional video recording interface of the mobile phone A has a control 41, and the user of the mobile phone A can start or end the recording of the mobile phone A by clicking the control 41.
If any one of the mobile phones A, B, and C pauses video recording by clicking its own recording control (for example, the control 41 in fig. 2J), the master mobile phone, that is, the mobile phone M, can control the paused mobile phone to continue recording.
In the embodiment of the application, through the fourth input to the video preview interface, in the case that the video content displayed in the main window and the sub-windows is video image sequences collected in real time, the cameras corresponding to the windows are controlled to stop collecting video images in response to the fourth input; in the case that the video content displayed in the main window and the sub-windows is video images of recorded videos, each window can stop playing the recorded videos. Recording or playback of the video images collected by multiple cameras can thus be stopped with a single input on the video preview interface.
Illustratively, through the input of the preset control in the main window, the first video processing device and other video processing devices in communication connection with the first video processing device can be controlled to uniformly execute the operation of starting or stopping (including pausing) video recording, and the multi-machine uniform control can be realized through one-key operation on the main window.
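A rough sketch of this one-key unified control, assuming a simple `send` method on each connected peer; the class names and command strings are assumptions for illustration.

```python
# Sketch: the master phone mirrors its own record-state change to
# every connected peer, giving unified start/stop for all devices.

class Peer:
    def __init__(self, name):
        self.name = name

    def send(self, command):
        # Placeholder for the real WiFi message to the peer phone.
        print(f"{self.name} <- {command}")

class MasterController:
    def __init__(self, peers):
        self.peers = peers          # connected phones A, B, C
        self.recording = False

    def on_preset_control_clicked(self):
        self.recording = not self.recording
        command = "start_recording" if self.recording else "stop_recording"
        for peer in self.peers:    # one click controls every device
            peer.send(command)
        return command

m = MasterController([Peer("phone_A"), Peer("phone_B"), Peer("phone_C")])
m.on_preset_control_clicked()  # every phone starts recording
```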
Optionally, as shown in fig. 2I, each window displaying the video data of another video processing apparatus (i.e., a video processing apparatus communicatively connected to the first video processing apparatus) also has a preset control for controlling the recording state of that apparatus, specifically a recording state and a recording-paused state. Specifically, the state of the control 32 in the sub-window 22 indicates that the mobile phone A is currently recording, the state of the control 33 in the sub-window 23 indicates that the mobile phone B is currently paused, and the state of the control 34 in the sub-window 24 indicates that the mobile phone C is currently paused.
In the embodiment of the application, since a preset control is arranged in the window displaying the video data of a video processing device, the recording state of that device can be controlled through the preset control, and the user can intuitively see from the state of the control whether the device is recording or paused.
Optionally, in executing step 103, a second input to the first sub-window by the user may be received; then, in response to the second input, exchanging display content in the main window and the first sub-window; and finally, performing video splicing on at least one frame of first video image and at least one frame of second video image displayed in the main window to obtain the target video.
Illustratively, the second input may include, but is not limited to: a click input of the user on the first sub-window, a voice instruction input by the user, or a specific gesture input by the user, which may be determined according to actual use requirements and is not limited in the embodiments of the application.
Illustratively, the second input may also be an input such that there is an overlap of a partial window area between the sub-window and the main window, e.g. an input dragging the first sub-window to the main window.
The specific gesture in the embodiments of the application may be any one of a single-click gesture, a sliding gesture, a dragging gesture, a pressure recognition gesture, a long-press gesture, an area change gesture, a double-press gesture, and a double-click gesture; the click input may be a single-click input, a double-click input, an input of any number of clicks, or the like, and may also be a long-press input or a short-press input.
For example, the video data of the respective video processing devices displayed in the first sub-window and the main window can be interchanged, and a first video clip and a second video clip can be spliced according to their display order in the main window to generate the target video, where the first video clip and the second video clip are video data from the different video processing devices that were displayed in the main window.
Before the second input is received, the video processing device corresponding to the video data displayed in the main window may be another video processing device or the first video processing device. The specific example above takes the case that the main window initially displays the video data of the first video processing device, but the application is not limited thereto; the video data of the first video processing device may also be displayed in a sub-window in the initial state, i.e., before the second input is received.
In this embodiment, the main window displays the video picture of the finally recorded and clipped target video, which is obtained in a what-you-see-is-what-you-get manner, while the sub-windows display the shot pictures of the other video processing devices. During video recording, if the user of the mobile phone M finds that the video content recorded by another connected mobile phone is more suitable and should be added to the target video, the corresponding sub-window can be dragged onto the main window; dragging a sub-window onto the main window triggers the clipping of the video. Illustratively, as shown in fig. 2K, the user of the mobile phone M drags the sub-window 22 to the main window 21 in the direction of the arrow to switch the camera position.
Illustratively, suppose the input time point of the second input is t1. Before t1, the main window plays the video content recorded by the mobile phone M, for example, the first video clip (including at least one frame of the first video image); after t1, the video content recorded by the mobile phone A corresponding to the dragged sub-window, for example, the second video clip (including at least one frame of the second video image), is played in the main window. Through the second input, the display content of the main window is therefore switched from the first video clip to the second video clip, and the display content of the main window is the finally recorded target video. During splicing, the first video clip and the second video clip are spliced in chronological order to obtain the target video.
Of course, the target video may also be obtained through multiple such drag inputs. For example, when the user wishes to add the video content displayed in one sub-window to the target video, that sub-window can be dragged to the main window, so that its video content is added to the target video in response to the input; if the user later wishes to add the video content displayed in another sub-window, the other sub-window is likewise dragged to the main window, so that its video content is also added to the target video. When the last recording mobile phone stops recording, or when the master mobile phone stops recording, the content displayed in the main window is saved on the master mobile phone to obtain the target video. For example, in a motion scene, recording can be switched to a suitable shooting angle at any time.
In this embodiment, the dragging operation shown in fig. 2K causes a jump to the interface shown in fig. 2L: the video content recorded by the first video processing apparatus (mobile phone M) after t1 is displayed in the sub-window 22, and the video content recorded by the mobile phone A after t1 is displayed in the main window 21. Finally, the at least two video segments are spliced to obtain the target video.
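A sketch of this splicing step, assuming the main window keeps a display history of (device, start, end) entries appended at each camera switch; all names and the per-time-unit frame model are assumptions for illustration.

```python
# Sketch: stitch the target video from the main window's display
# history, pulling each span of frames from the matching recording.

def stitch_target_video(display_history, recordings):
    """recordings: device -> list of frames, one per time unit."""
    target = []
    for device, t_start, t_end in display_history:
        target.extend(recordings[device][t_start:t_end])
    return target

recordings = {
    "phone_M": [f"M{i}" for i in range(10)],
    "phone_A": [f"A{i}" for i in range(10)],
}
# The main window showed phone M until t1 = 4, then phone A onward.
history = [("phone_M", 0, 4), ("phone_A", 4, 10)]
print(stitch_target_video(history, recordings))
```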
In the embodiment of the application, a first video image sequence acquired by a first camera of a first video processing device can be displayed in the main window of the video preview interface, and a second video image sequence acquired by a second camera of a second video processing device can be displayed in the first sub-window. When a video is recorded, the display contents of the main window and the first sub-window can be exchanged by performing the second input on the first sub-window: the main window switches to displaying the video image sequence acquired by the second camera, and the first sub-window switches to displaying the video image sequence acquired by the first camera. Since different video processing devices can shoot the same scene from different angles, camera-position switching during video recording is realized. The target video is obtained from the content displayed in the main window; specifically, the videos displayed in the main window are spliced in display order. The same scene can thus be recorded by at least two video processing devices, which reduces the difficulty and complexity of video editing and improves video processing efficiency.
In the embodiment of the application, video recording can be performed by means of a plurality of video processing devices, with the video data recorded in real time by the different devices displayed in different windows of the video preview interface, and camera-position switching during recording can be realized by dragging a sub-window onto the main window. When recording, the video data from the different video processing devices displayed in the main window, namely the first video data and the second video data, are spliced according to their display order in the main window. Because the video content recorded by the plurality of devices is displayed in real time on one device, the user can switch the recording camera position in real time by dragging different sub-windows onto the main window, which improves operability during video recording.
Optionally, the method according to the embodiment of the present application may further include: under the condition that the video processing device corresponding to each window in the shooting preview interface stops video recording, storing the target video and the video data recorded by each video processing device; and for each video clip spliced in the target video, storing the mapping relation between the corresponding time point of each video clip in the target video and the corresponding time point in the video data recorded by the video processing device.
After the first video processing device establishes communication connections with the other video processing devices and the other devices start recording, the first video processing device may receive the video content they record in real time; when all the video processing devices have stopped recording, the video data recorded by each device is stored together with the obtained target video.
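A sketch of the stored mapping relation follows, assuming per-segment records of the target-video time range and the corresponding range in the source phone's full recording; the field names are assumptions, not from the patent.

```python
# Sketch: for each spliced segment, remember where it sits in the
# target video and where the same frames sit in the source recording.

segment_mappings = [
    {"device": "phone_M",
     "target_range": (0.0, 4.0),     # seconds inside the target video
     "source_range": (0.0, 4.0)},    # seconds inside phone M's recording
    {"device": "phone_A",
     "target_range": (4.0, 10.0),
     "source_range": (4.0, 10.0)},
]

def source_time(target_t):
    """Translate a target-video time point back to (device, time)."""
    for m in segment_mappings:
        t0, t1 = m["target_range"]
        if t0 <= target_t < t1:
            s0, _ = m["source_range"]
            return m["device"], s0 + (target_t - t0)
    raise ValueError("time outside target video")

print(source_time(5.5))  # -> ('phone_A', 5.5)
```

This mapping is what later makes fine-tuning possible: trimming or extending a segment at a splice only needs a lookup into the source recording.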
Optionally, the method according to the embodiment of the present application may further include: receiving a fifth input of the target video from the user; in response to the fifth input, displaying a video adjustment window, the video adjustment window including an adjustment control, at least one first video thumbnail, and at least one second video thumbnail; the at least one first video thumbnail is a thumbnail of the at least one frame of the first video image, the at least one second video thumbnail is a thumbnail of the at least one frame of the second video image, and the adjustment control is used to update the video frame of the target video; then, receiving a sixth input of the user to the adjusting control; and responding to the sixth input, updating the display position of the adjusting control, and updating the video frame of the target video according to the updated display position of the adjusting control.
The fifth input and the sixth input in this embodiment and the seventh input and the eighth input in the following embodiments may refer to the above related exemplary description about the first input, and the principle is similar, and details are not repeated here.
For example, the target video may be saved in an album of the mobile phone M, and the user clicks an edit control (i.e., a fifth input) on the target video saved in the album. After clicking the edit control, the interface of the video adjustment window shown in fig. 2M of the target video may be entered.
Optionally, the video adjustment window includes a main play progress bar of the target video, where the main play progress bar includes a preset identifier 53, and the preset identifier 53 moves on the play progress bar along with a change of a video play progress in the video play window.
The preset identifier in this application is a character, symbol, image, or the like used to indicate information, and a control or other container may serve as a carrier for displaying the information; it includes, but is not limited to, a text identifier, a symbol identifier, and an image identifier.
The video adjusting window comprises a sub-playing progress bar of each video clip in the target video, wherein movable adjusting controls are displayed at the splicing positions of different video clips.
Illustratively, as shown in fig. 2M, a video playing window 54 is included in the video adjusting window, and the video playing window 54 is used for displaying a picture of the target video; the main playing progress bar 52 is a progress bar of the video played in the video playing window 54, and the main playing progress bar 52 is provided with a preset identifier 53 moving along with the playing time.
Further, as shown in fig. 2M, suppose the target video is composed of a segment of video A, a segment of video B, and a segment of video C spliced in sequence. The video clip interface further includes a plurality of sub-play progress bars above the main play progress bar 52. Specifically, the sub-play progress bars of the video segments constituting the target video may be displayed in several rows ordered by play time from front to back; here they are, in sequence, the sub-play progress bar 61 of the video A, the sub-play progress bar 62 of the video B, and the sub-play progress bar 63 of the video C. In addition, at the splice of different sub-play progress bars, an adjustment control 51 may be displayed. The adjustment control 51 can be understood as a fine-tuning control, and each camera-switching time point in the progress bar of the complete target video may correspond to one movable adjustment control. For example, the recorded target video composed of the three segments video A, video B, and video C includes two adjustment controls 51: one for adjusting the video frames at the splice of video A and video B, and the other for adjusting the video frames at the splice of video B and video C.
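A sketch of the adjustment-window model implied by this layout: one sub-play progress bar per segment and one movable adjustment control per splice point. The class and field names are illustrative assumptions.

```python
# Sketch of the video adjustment window's data model: N segments
# always yield N - 1 splice points, each with one adjustment control.
from dataclasses import dataclass

@dataclass
class Segment:
    device: str     # which phone recorded this segment
    n_frames: int   # length shown by its sub-play progress bar

@dataclass
class AdjustControl:
    splice_index: int   # sits between segments[i] and segments[i + 1]
    offset: int = 0     # frames the splice has been moved (+/-)

segments = [Segment("video_A", 120), Segment("video_B", 90),
            Segment("video_C", 150)]
controls = [AdjustControl(i) for i in range(len(segments) - 1)]
print(len(controls), "adjustment controls for", len(segments), "segments")
```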
Optionally, as shown in fig. 2M, dragging the preset identifier 53 may control the playing progress of the video in the video playing window 54; in addition, clicking the preset identifier 53 may pause or resume the video in the video playing window 54, and the display pattern of the preset identifier 53 may differ between the paused state and the playing state.
In the embodiment of the application, the preset identifier of the main play progress bar in the video clip interface can be moved to control the playing progress of the video in the video playing window; moreover, the playing state of the video in the video playing window can be changed by an input to the preset identifier.
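As a rough illustration of the two behaviours of the preset identifier (dragging seeks the video; tapping toggles the playing state and, with it, the marker's display pattern), a minimal hypothetical sketch in plain Kotlin:

```kotlin
// Hypothetical sketch: the preset identifier seeks on drag and toggles the
// play state (and its own display style) on tap. Names are illustrative.
class PresetIdentifier(var positionMs: Long = 0, var playing: Boolean = true) {
    val style: String get() = if (playing) "playing-marker" else "paused-marker"

    fun onDrag(toMs: Long) { positionMs = toMs } // dragging controls the playing progress
    fun onTap() { playing = !playing }           // tapping pauses or resumes playback
}

fun main() {
    val marker = PresetIdentifier()
    marker.onDrag(5_000) // seek to 5 s
    marker.onTap()       // pause; the display pattern switches
    println("${marker.positionMs} ms, style=${marker.style}")
}
```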
Optionally, in the method of the embodiment of the application, the video frames at the splicing position may be adjusted via the adjustment control; thumbnails of the video frames at the splicing position are displayed on the left and right sides of the adjustment control, respectively.
Illustratively, as shown in fig. 2M, the two sides of the adjustment control 51 between the video A and the video B may respectively display two thumbnails, specifically including: a thumbnail 71 of the last frame image of the video A, located above the sub-play progress bar 61, and a thumbnail 72 of the first frame image of the video B, located above the sub-play progress bar 62; in addition, fig. 2M also shows two thumbnails of the video frames at the other splicing position, which are not repeated here.
In the example here, the adjustment control 51 has not yet been moved; after the adjustment control 51 moves leftward or rightward along the arrow direction in fig. 2M, the position where it stays may correspond to a different video segment splicing position, and the thumbnails of the two frame images from the different video segments at that splicing position are then displayed on the left and right sides of the adjustment control 51.
In the embodiment of the application, the thumbnails of the two frames of images at the splicing position of two video segments are displayed, so that when the user moves the adjustment control to finely adjust the target video, whether the video frames are spliced properly can be judged by browsing the two thumbnails at the splicing position.
For example, in fig. 2M, the adjustment control 51 may be moved left or right in the direction of the arrow to trigger the adjustment of the splicing position of different video segments in the target video, and after the adjustment, the save control 55 may be clicked to update the target video.
Illustratively, taking a sixth input to the adjustment control 51 between the two sub-play progress bars of the video A and the video B in fig. 2M as an example: by moving the adjustment control 51 to the left, the video A may be shortened by several frames at its tail and the video B extended by the same number of video frames at its head, thereby adjusting the video frames at the joint of the video A and the video B.
Optionally, the step of updating the video frame of the target video may be implemented by at least one of the following: updating the splicing end video frame of the first video image sequence; updating the splicing start video frame of the second video image sequence; adding or subtracting spliced video frames of the first video image sequence; adding or subtracting spliced video frames of the second video image sequence. A spliced video frame is a video frame used to splice the target video at the switching point.
For example, if the adjustment control 51 corresponding to the splicing position of the video A and the video B in fig. 2M is moved leftward by the progress-bar length corresponding to a duration of 2 s, the video frame at the splicing end position of the video A (i.e., the first video image sequence here) in the target video needs to be updated, specifically by reducing the spliced video frames of the video A, here removing 2 s of video frames from the tail of the video A; and the video frame at the splicing start position of the video B (i.e., the second video image sequence) in the target video is updated, specifically by acquiring the corresponding video frames from the original video to which the video B belongs and adding them to the second video image sequence.
Here, the leftward movement of the adjustment control 51 is taken as an example for description; when the adjustment control 51 is moved to the right, the method is similar and is not described here again.
Continuing with the example of fig. 2M and the adjustment control 51 between the two sub-play progress bars of the video A and the video B: when the adjustment control 51 is moved leftward, it moves closer to the sub-play progress bar of the video A and away from that of the video B. A preset mapping relationship exists between the moving distance and the number of video frames, so the target number of frames to adjust may be determined from the moving distance; for example, 3 frames may be removed from the tail of the video A and 3 frames added at the head of the video B, the added 3 frames being sourced from the complete original video data recorded by the video processing device corresponding to the video B.
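The drag-distance-to-frame-count mapping and the trim/extend behaviour described above can be sketched as follows. The Clip model, the pixels-per-frame ratio, and all names are assumptions made for illustration; the embodiment only requires that some preset mapping exist and that the added frames come from the segment's complete original video.

```kotlin
// A clip is a window [start, end) of frame indices into its original video.
data class Clip(val sourceFrames: Int, var start: Int, var end: Int)

const val PIXELS_PER_FRAME = 10.0 // hypothetical preset mapping

// Positive dragPx models a leftward drag of the adjustment control.
fun adjustSplice(left: Clip, right: Clip, dragPx: Double) {
    val frames = Math.round(dragPx / PIXELS_PER_FRAME).toInt()
    if (frames > 0) {
        // Trim the tail of the left clip; recover frames at the head of the
        // right clip from the right clip's complete original video.
        val n = minOf(frames, left.end - left.start, right.start)
        left.end -= n
        right.start -= n
    } else if (frames < 0) {
        // Symmetric case for a rightward drag.
        val n = minOf(-frames, left.sourceFrames - left.end, right.end - right.start)
        left.end += n
        right.start += n
    }
}

fun main() {
    val a = Clip(sourceFrames = 360, start = 0, end = 300)
    val b = Clip(sourceFrames = 240, start = 30, end = 240)
    adjustSplice(a, b, dragPx = 30.0) // 30 px maps to 3 frames here
    println("A ends at frame ${a.end}, B starts at frame ${b.start}") // 297 and 27
}
```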
Optionally, after fine adjustment is performed via the adjustment control (for example, the adjustment control 51 in fig. 2M moves to stay at the position corresponding to the 10th second of the video A), the video segment of the 7th to 10th seconds of the video A and the video segment at the head of the video B (the 1st to 3rd seconds) may be spliced and then played in the video playing window 54.
In the embodiment of the application, when the user moves the adjustment control between two spliced video segments in the target video, the splicing position of the two spliced video segments can be adjusted; moreover, the two video segments with the adjusted splicing position can be played, so that the user can conveniently browse a section of played video with the updated splicing position and judge whether the updated splicing position between the different video segments is proper.
Optionally, in this embodiment of the application, when the spliced portion of the target video is adjusted, the spliced video frames and the adjustment result may be previewed, so as to ensure that the effect desired by the user is achieved after the video is fine-tuned.
Optionally, after step 103, as shown in fig. 3, the method of the embodiment of the present application may further include: receiving a seventh input of the user to the target video; in response to the seventh input, displaying a first video editing window of the target video on the first video processing device; receiving an eighth input of the user to the first video editing window; in response to the eighth input, updating the target video according to editing information, the editing information being determined according to the eighth input; and sending the editing information to the second video processing device, so that the second video processing device synchronously updates the target video according to the editing information.
The video editing in this embodiment is described by taking the target video as an example; in other embodiments, the processing object may be another video, for example, a video recorded by the first video processing apparatus or the second video processing apparatus, or a video downloaded from a network.
For example, after generating the target video, the mobile phone M may send the target video to the mobile phone A, the mobile phone B, and the mobile phone C, so that the three mobile phones obtain the target video.
Illustratively, the connection mode between the mobile phone M and the other video processing devices is similar to the above example, but the following triggering mode may also be used: as shown in fig. 4A, on the mobile phone M, the user opens the target video in the album and clicks the multi-machine collaborative editing control 82, so that multi-machine synchronous editing can be performed on the target video in the window 81.
The mobile phone A, the mobile phone B, and the mobile phone C also display the interface shown in fig. 4A; all the mobile phones successfully connected with the mobile phone M display the target video, with the same editing options. Fig. 4A shows various editing options, which are not described in detail here.
For example, the user clicks the "beauty" control, which constitutes an eighth input. It should be noted that, to avoid the editing disorder caused by multiple terminals performing the same editing operation on the same video, in the embodiment of the present application different mobile phones perform different edits on the same video. After the user clicks one editing option, the mobile phone M can share the editing information corresponding to that editing option with the mobile phone A, the mobile phone B, and the mobile phone C.
Similarly, after the mobile phone A, the mobile phone B, or the mobile phone C selects an editing option to edit the target video, the editing information corresponding to that option is synchronously sent to the mobile phone M in real time, and the mobile phone M synchronously shares the received editing information with the other mobile phones, so that the editing information is shared among the four mobile phones.
After the plurality of mobile phones are in communication connection with the mobile phone M, the mobile phone A may, for example, add subtitles to the video, the mobile phone B may adjust a filter for the video, and the mobile phone C may clip the duration of the video.
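A hedged sketch of this editing-information sharing, with the mobile phone M acting as the hub that relays each device's edit to the others. The embodiment fixes neither the transport nor the message format, so everything below is a hypothetical model:

```kotlin
// Each applied edit is published to the hub, which relays it to every other
// connected device so all copies of the target video stay in sync.
data class EditInfo(val function: String, val params: Map<String, String>, val origin: String)

class EditHub {
    private val peers = mutableMapOf<String, (EditInfo) -> Unit>()

    fun connect(id: String, receive: (EditInfo) -> Unit) { peers[id] = receive }

    fun publish(edit: EditInfo) =
        peers.filterKeys { it != edit.origin }.values.forEach { it(edit) }
}

fun main() {
    val hub = EditHub()
    listOf("A", "B", "C").forEach { id ->
        hub.connect(id) { e -> println("phone $id applies ${e.function} from phone ${e.origin}") }
    }
    hub.publish(EditInfo("subtitle", mapOf("text" to "hello"), origin = "M"))
}
```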
Optionally, after editing the target video, the mobile phone M may display the edited preview image in the window 81; in addition, since the target video is also edited by the other mobile phones, the window 81 can also preview the video effects edited by the other mobile phones.
Optionally, in this embodiment, the mobile phone M saves the edited video by clicking the save control in fig. 4A or fig. 4B; if the mobile phone A, the mobile phone B, or the mobile phone C clicks its save control, the saved video is synchronized to the mobile phone M.
In the embodiment of the application, the target video can be edited by a plurality of video processing devices, and the plurality of video processing devices can perform video editing operations with different functions, which meets the user's need for multi-user cooperation when editing a video and improves editing efficiency.
Optionally, when the editing information of different video processing devices conflicts (for example, the mobile phone A clips the 1st to 5th video frames of the target video while the mobile phone M performs a face beautifying operation on the 1st to 10th video frames, an obvious conflict between the two pieces of editing information), a prompt may be given through prompt information, and after one of the video processing devices finishes editing, the other video processing devices are prompted to perform the corresponding editing.
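The conflict in the example above is an overlap of the frame ranges targeted by two edits, which suggests a check along the following lines (the ranges and names are illustrative only):

```kotlin
// Two edits conflict when their target frame ranges overlap.
data class Edit(val device: String, val function: String, val frames: IntRange)

fun conflicts(a: Edit, b: Edit): Boolean =
    a.frames.first <= b.frames.last && b.frames.first <= a.frames.last

fun main() {
    val clip = Edit("A", "clip", 1..5)          // phone A clips frames 1 to 5
    val beautify = Edit("M", "beautify", 1..10) // phone M beautifies frames 1 to 10
    if (conflicts(clip, beautify)) println("prompt both devices: the edits overlap")
}
```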
In this embodiment, if the editing operation being performed by one mobile phone may affect the editing operations of the other mobile phones, marking and prompting may be performed. For example, while the mobile phone A is clipping the duration of the video, the deleted time period (optionally, in other embodiments, also an added time period) is marked in the progress bar with a special color on the other mobile phones to indicate that the video segment is being clipped. After the save control in fig. 4A is clicked on any one of the mobile phones, the editing result is synchronized to the other mobile phones.
Optionally, if one mobile phone is already performing a certain editing function, the control for that editing function on the other mobile phones is set to gray, prompting the other users that the editing function corresponding to the gray control is being processed by another mobile phone; if a gray editing function control is clicked, the user is prompted that another user is using the editing function. For example, as shown in fig. 4B, assuming the editing function of the gray "music" control is being performed by the mobile phone A and the editing function of the "beauty" control is being performed by the mobile phone B, then in fig. 4B on the mobile phone M side both editing function options are gray, and the user cannot edit the target video using these two editing functions on the mobile phone M. This avoids the problem of disordered editing information caused by different video processing devices applying the same editing function to the target video.
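The graying-out behaviour can be modeled as a per-function lock owned by at most one device at a time. A minimal sketch, assuming a lock map that a real implementation would synchronize across the connected devices:

```kotlin
// Tracks which device currently owns each editing function; a function owned
// by another device is shown grayed out locally.
class EditLocks {
    private val owner = mutableMapOf<String, String>() // editing function -> device id

    fun tryAcquire(function: String, device: String): Boolean =
        owner.getOrPut(function) { device } == device

    fun isGrayedOutFor(function: String, device: String): Boolean =
        owner[function]?.let { it != device } ?: false

    fun release(function: String, device: String) {
        if (owner[function] == device) owner.remove(function)
    }
}

fun main() {
    val locks = EditLocks()
    locks.tryAcquire("music", "A") // phone A starts the music edit
    println(locks.isGrayedOutFor("music", "M")) // true: the control is gray on phone M
}
```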
It should be noted that, in the video processing method provided in the embodiment of the present application, the execution subject may be a video processing apparatus, or a control module in the video processing apparatus for executing the video processing method. In the embodiment of the present application, a video processing apparatus executing a video processing method is taken as an example, and the video processing apparatus provided in the embodiment of the present application is described.
Referring to fig. 5, a block diagram of a first video processing device 300 of one embodiment of the present application is shown. The first video processing apparatus 300 includes:
a first receiving module 301, configured to receive a first input of a user to a first video processing apparatus;
a first display module 302, configured to display, in response to the first input, a first video image sequence acquired by a first camera of the first video processing apparatus on a video preview interface;
a generating module 303, configured to generate a target video according to the first video image sequence and the second video image sequence when a second video image sequence acquired by a second camera of a second video processing apparatus is displayed on a video preview interface;
wherein the target video comprises at least one frame of a first video image in the first sequence of video images and at least one frame of a second video image in the second sequence of video images.
In this embodiment, video image sequences captured by cameras of different video processing apparatuses may be displayed on a video preview interface, and video clipping may be performed according to a first video image sequence and a second video image sequence captured by the cameras of different video processing apparatuses, respectively, to generate a target video including at least one frame of a first video image and at least one frame of a second video image, where the at least one frame of the first video image is from the first video image sequence generated by the first video processing apparatus, and the at least one frame of the second video image is from the second video image sequence generated by the second video processing apparatus. According to the video processing method, in the case that the video image sequences acquired by the cameras of different video processing devices are displayed on the video preview interface, video clipping can be performed on the different video image sequences generated by the different video processing devices to generate the target video, without professional video clipping software, which reduces the operation difficulty and complexity of video clipping.
Optionally, the video preview interface includes a main window and a first sub-window, the main window is used for displaying the first video image sequence, and the first sub-window is used for displaying the second video image sequence;
the generating module 303 includes:
the first receiving submodule is used for receiving second input of a user to the first sub-window;
the switching sub-module is used for responding to the second input and switching the display content in the main window and the first sub-window;
and the splicing submodule is used for carrying out video splicing on at least one frame of first video image and at least one frame of second video image displayed in the main window to obtain the target video.
In this embodiment, when recording a video, the display contents of the main window and the first sub-window may be exchanged by performing a second input on the first sub-window; that is, the main window is switched to display the video image sequence acquired by the second camera, and the first sub-window is switched to display the video image sequence acquired by the first camera. Since different video processing devices may shoot the same scene from different angles, machine-position switching during video recording can be achieved. The target video can be obtained based on the content displayed in the main window; specifically, the videos displayed in the main window are spliced sequentially according to the display order to obtain the target video. Videos can thus be recorded in the same scene based on at least two video processing devices, reducing the operation difficulty and complexity of video clipping and improving video processing efficiency.
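A minimal sketch of this swap-and-splice behaviour, modeling window contents as stream identifiers; the history of main-window contents determines the display-order splicing. All names are hypothetical:

```kotlin
// The second input swaps the main window and the first sub-window; the
// target video is the main-window content spliced in display order.
class PreviewInterface(var mainWindow: String, var firstSubWindow: String) {
    private val mainWindowHistory = mutableListOf(mainWindow)

    fun onSecondInput() {
        val tmp = mainWindow
        mainWindow = firstSubWindow
        firstSubWindow = tmp
        mainWindowHistory += mainWindow
    }

    fun spliceOrder(): List<String> = mainWindowHistory.toList()
}

fun main() {
    val ui = PreviewInterface(mainWindow = "camera1", firstSubWindow = "camera2")
    ui.onSecondInput() // switch machine positions mid-recording
    println(ui.spliceOrder()) // [camera1, camera2]
}
```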
Optionally, the first video processing apparatus 300 further includes:
a second display module for displaying at least one device identification indicating a video processing device communicatively coupled to the first video processing device;
a second receiving module, configured to receive a third input of a target device identifier from the at least one device identifier from a user;
and the third display module is used for responding to the third input and displaying a third video image sequence acquired by a third camera of a third video processing device in a second sub-window, wherein the third video processing device is the video processing device indicated by the target device identification.
In the embodiment of the application, a device identifier indicating a video processing device in communication connection with the first video processing device is displayed, and a third input of the user to a target device identifier among the device identifiers is received; in response to the third input, a third video image sequence acquired by a third camera of the video processing device indicated by the target device identifier is displayed in a sub-window. Videos can thus be recorded from multiple machine positions, and the video images of the different machine positions can be edited to generate the target video. By communicatively connecting a plurality of video processing devices with the first video processing device, the function of editing the video while displaying the recorded video can be realized, with the video displayed in the main window taken as the target video (what you see is what you get), which simplifies video editing.
Optionally, the first video processing apparatus 300 further includes:
a first determining module, configured to determine a relative shooting orientation of the third camera according to the third video image sequence and image content of the first video image sequence, where the relative shooting orientation is a shooting orientation of the third camera with respect to the first camera;
and the second determining module is used for determining the target display position of the second sub-window according to the relative shooting direction.
In this embodiment of the present application, the shooting orientation of the third camera relative to the first camera may be determined according to the image contents of the third video image sequence and the first video image sequence, and the target display position of the second sub-window may be determined based on this shooting orientation and the position of the main window, so that the user can identify the shooting angle of the camera corresponding to each sub-window through the relative positional relationship between the second sub-window and the main window in the video preview interface.
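How the relative shooting orientation is estimated from image content is left open by the embodiment; given such an orientation, placing the second sub-window could look like the following sketch, in which the orientation categories, positions, and offsets are all illustrative assumptions:

```kotlin
// Maps the third camera's orientation relative to the first camera to a
// display position for the second sub-window around the main window.
enum class RelativeOrientation { LEFT, RIGHT, FRONT, BEHIND }

data class WindowPos(val x: Int, val y: Int)

fun subWindowPosition(o: RelativeOrientation, main: WindowPos, offset: Int = 200): WindowPos =
    when (o) {
        RelativeOrientation.LEFT -> WindowPos(main.x - offset, main.y)
        RelativeOrientation.RIGHT -> WindowPos(main.x + offset, main.y)
        RelativeOrientation.FRONT -> WindowPos(main.x, main.y - offset)
        RelativeOrientation.BEHIND -> WindowPos(main.x, main.y + offset)
    }

fun main() {
    println(subWindowPosition(RelativeOrientation.LEFT, WindowPos(540, 960)))
}
```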
Optionally, the first video processing apparatus 300 further includes:
the third receiving module is used for receiving a fourth input of the user to the video preview interface;
a first control module, configured to respond to the fourth input and control the first camera and the second camera to stop capturing video images when the first video image sequence and the second video image sequence are video image sequences captured in real time during a video recording process;
and the second control module is used for responding to the fourth input and stopping playing the first video and the second video under the condition that the first video image sequence is a video image in a recorded first video and the second video image sequence is a video image in a recorded second video.
In the embodiment of the application, through the fourth input to the video preview interface, when the video content displayed in the main window and the sub-window is a video image sequence collected in real time, the cameras corresponding to the windows are controlled, in response to the fourth input, to stop collecting video images; when the video content displayed in the main window and the sub-window consists of video images in recorded videos, each window can stop playing the recorded videos. Recording or playback of the video images collected by the multiple cameras can thus be stopped with a one-key input on the video preview interface.
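The two branches of the fourth-input handling (stopping live capture versus stopping playback of recorded videos) can be sketched with a simple sum type; this is an illustration under assumed names, not the embodiment's implementation:

```kotlin
// The same one-key fourth input either halts capture on every camera or
// halts playback of the recorded videos, depending on the preview content.
sealed interface PreviewContent
data class LiveCapture(val cameras: List<String>) : PreviewContent
data class RecordedPlayback(val videos: List<String>) : PreviewContent

fun onFourthInput(content: PreviewContent) = when (content) {
    is LiveCapture -> content.cameras.forEach { println("stop capturing on $it") }
    is RecordedPlayback -> content.videos.forEach { println("stop playing $it") }
}

fun main() {
    onFourthInput(LiveCapture(listOf("first camera", "second camera")))
}
```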
Optionally, the first video processing apparatus 300 further includes:
the fourth receiving module is used for receiving a fifth input of the target video from the user;
a fourth display module, configured to display a video adjustment window in response to the fifth input, where the video adjustment window includes an adjustment control, at least one first video thumbnail, and at least one second video thumbnail; the at least one first video thumbnail is a thumbnail of the at least one frame of the first video image, the at least one second video thumbnail is a thumbnail of the at least one frame of the second video image, and the adjustment control is used to update the video frame of the target video;
the fifth receiving module is used for receiving a sixth input of the user to the adjusting control;
and the first updating module is used for responding to the sixth input, updating the display position of the adjusting control and updating the video frame of the target video according to the updated display position of the adjusting control.
In the embodiment of the application, after the clipped target video is generated, the video frames at the splicing part in the target video can be adjusted, and a user can accurately adjust the position of the adjusting control by browsing the thumbnails of the video frames at the video splicing part, so that the purpose of accurately adjusting the splicing part in the target video is achieved.
Optionally, the first updating module is further configured to perform at least one of the following steps:
updating a splicing end video frame of the first video image sequence;
updating a splicing start video frame of the second video image sequence;
adding or subtracting spliced video frames of the first video image sequence;
adding or subtracting spliced video frames of the second video image sequence.
In this embodiment, the start video frame and the end video frame at the splicing position may be increased, decreased, or updated according to the specific needs of the user, so that the video frame at the splicing position is more suitable.
Optionally, the first video processing apparatus 300 further includes:
a sixth receiving module, configured to receive a seventh input of the target video from the user;
a fifth display module for displaying a first video editing window of the target video on the first video processing device in response to the seventh input;
a seventh receiving module, configured to receive an eighth input to the first video editing window by the user;
a second update module, configured to update, in response to the eighth input, the target video according to editing information, the editing information being determined according to the eighth input;
and the sending module is used for sending the editing information to a second video processing device so that the second video processing device synchronously updates the target video according to the editing information.
In the embodiment of the application, the target video can be edited by a plurality of video processing devices, and the plurality of video processing devices can perform video editing operations with different functions, which meets the user's need for multi-user cooperation when editing a video and improves editing efficiency.
The video processing apparatus in the embodiment of the present application may be an apparatus, or may be a component, an integrated circuit, or a chip in a terminal. The device can be mobile electronic equipment or non-mobile electronic equipment. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palm top computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), and the like, and the non-mobile electronic device may be a Personal Computer (PC), a Television (TV), a teller machine, a self-service machine, and the like, and the embodiments of the present application are not particularly limited.
The video processing apparatus in the embodiment of the present application may be an apparatus having an operating system. The operating system may be an Android operating system (Android), an iOS operating system, or other possible operating systems, which is not specifically limited in the embodiments of the present application.
The video processing apparatus provided in the embodiment of the present application can implement each process implemented by the foregoing method embodiment, and is not described here again to avoid repetition.
Optionally, as shown in fig. 6, an electronic device 2000 is further provided in the embodiment of the present application, and includes a processor 2002, a memory 2001, and a program or an instruction stored in the memory 2001 and executable on the processor 2002, where the program or the instruction is executed by the processor 2002 to implement the processes of the above-mentioned embodiment of the video processing method, and can achieve the same technical effects, and no further description is provided here to avoid repetition.
It should be noted that the electronic devices in the embodiments of the present application include the mobile electronic devices and the non-mobile electronic devices described above.
Fig. 7 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 1000 includes, but is not limited to: a radio frequency unit 1001, a network module 1002, an audio output unit 1003, an input unit 1004, a sensor 1005, a display unit 1006, a user input unit 1007, an interface unit 1008, a memory 1009, and a processor 1010.
Those skilled in the art will appreciate that the electronic device 1000 may further comprise a power source (e.g., a battery) for supplying power to various components, and the power source may be logically connected to the processor 1010 through a power management system, so as to implement functions of managing charging, discharging, and power consumption through the power management system. The electronic device structure shown in fig. 7 does not constitute a limitation of the electronic device, and the electronic device may include more or less components than those shown, or combine some components, or arrange different components, and thus, the description is omitted here.
The user input unit 1007 is used for receiving a first input of a user to the first video processing device;
a display unit 1006, configured to display, in response to the first input, a first video image sequence acquired by a first camera of the first video processing apparatus on a video preview interface;
the processor 1010 is configured to generate a target video according to the first video image sequence and the second video image sequence when a second video image sequence acquired by a second camera of a second video processing apparatus is displayed on a video preview interface;
wherein the target video comprises at least one frame of a first video image in the first sequence of video images and at least one frame of a second video image in the second sequence of video images.
In this embodiment, video image sequences captured by cameras of different video processing apparatuses may be displayed on a video preview interface, and video clipping may be performed according to a first video image sequence and a second video image sequence captured by the cameras of different video processing apparatuses, respectively, to generate a target video including at least one frame of a first video image and at least one frame of a second video image, where the at least one frame of the first video image is from the first video image sequence generated by the first video processing apparatus, and the at least one frame of the second video image is from the second video image sequence generated by the second video processing apparatus. According to the video processing method, in the case that the video image sequences acquired by the cameras of different video processing devices are displayed on the video preview interface, video clipping can be performed on the different video image sequences generated by the different video processing devices to generate the target video, without professional video clipping software, which reduces the operation difficulty and complexity of video clipping.
Optionally, the video preview interface includes a main window and a first sub-window, the main window is used for displaying the first video image sequence, and the first sub-window is used for displaying the second video image sequence;
a user input unit 1007, configured to receive a second input to the first sub-window by a user;
a processor 1010 configured to swap display content in the main window and the first sub-window in response to the second input; and performing video splicing on at least one frame of first video image and at least one frame of second video image displayed in the main window to obtain the target video.
In this embodiment, when recording a video, the display contents of the main window and the first sub-window may be exchanged by performing a second input on the first sub-window; that is, the main window is switched to display the video image sequence acquired by the second camera, and the first sub-window is switched to display the video image sequence acquired by the first camera. Since different video processing devices may shoot the same scene from different angles, machine-position switching during video recording can be achieved. The target video can be obtained based on the content displayed in the main window; specifically, the videos displayed in the main window are spliced sequentially according to the display order to obtain the target video. Videos can thus be recorded in the same scene based on at least two video processing devices, reducing the operation difficulty and complexity of video clipping and improving video processing efficiency.
Optionally, a display unit 1006, configured to display at least one device identifier, where the device identifier is used to indicate a video processing device communicatively connected to the first video processing device;
a user input unit 1007 configured to receive a third input of a target device identifier from among the at least one device identifier by a user;
a display unit 1006, configured to display, in response to the third input, a third video image sequence captured by a third camera of a third video processing apparatus in a second sub-window, where the third video processing apparatus is the video processing apparatus indicated by the target device identifier.
In the embodiment of the application, a device identifier indicating a video processing device in communication connection with the first video processing device is displayed, and a third input of the user to a target device identifier among the device identifiers is received; in response to the third input, a third video image sequence acquired by a third camera of the video processing device indicated by the target device identifier is displayed in a sub-window. Videos can thus be recorded from multiple machine positions, and the video images of the different machine positions can be edited to generate the target video. By communicatively connecting a plurality of video processing devices with the first video processing device, the function of editing the video while displaying the recorded video can be realized, with the video displayed in the main window taken as the target video (what you see is what you get), which simplifies video editing.
Optionally, the processor 1010 is configured to determine a relative shooting orientation of the third camera according to the image contents of the third video image sequence and the first video image sequence, where the relative shooting orientation is a shooting orientation of the third camera relative to the first camera; and determining the target display position of the second sub-window according to the relative shooting orientation.
In this embodiment of the present application, the shooting orientation of the third camera relative to the first camera may be determined according to the image contents of the third video image sequence and the first video image sequence, and the target display position of the second sub-window may be determined based on this shooting orientation and the position of the main window, so that the user can identify the shooting angle of the camera corresponding to each sub-window through the relative positional relationship between the second sub-window and the main window in the video preview interface.
Optionally, a user input unit 1007, configured to receive a fourth input to the video preview interface from the user;
a processor 1010, configured to control the first camera and the second camera to stop capturing video images in response to the fourth input when the first video image sequence and the second video image sequence are video image sequences captured in real time during a video recording process; and in the case that the first video image sequence is a video image in a recorded first video and the second video image sequence is a video image in a recorded second video, stopping playing the first video and the second video in response to the fourth input.
In the embodiment of the application, through the fourth input to the video preview interface, when the video content displayed in the main window and the sub-window is a video image sequence collected in real time, the cameras corresponding to the windows are controlled, in response to the fourth input, to stop collecting video images; when the video content displayed in the main window and the sub-window consists of video images in recorded videos, each window can stop playing the recorded videos. Recording or playback of the video images collected by the multiple cameras can thus be stopped with a one-key input on the video preview interface.
Optionally, a user input unit 1007, configured to receive a fifth input to the target video by the user;
a display unit 1006, configured to display a video adjustment window in response to the fifth input, where the video adjustment window includes an adjustment control, at least one first video thumbnail, and at least one second video thumbnail; the at least one first video thumbnail is a thumbnail of the at least one frame of the first video image, the at least one second video thumbnail is a thumbnail of the at least one frame of the second video image, and the adjustment control is used to update the video frame of the target video;
a user input unit 1007, configured to receive a sixth input to the adjustment control by the user;
the processor 1010 is configured to update the display position of the adjustment control in response to the sixth input, and update the video frame of the target video according to the updated display position of the adjustment control.
In the embodiment of the application, after the clipped target video is generated, the video frames at the splicing part in the target video can be adjusted, and a user can accurately adjust the position of the adjusting control by browsing the thumbnails of the video frames at the video splicing part, so that the purpose of accurately adjusting the splicing part in the target video is achieved.
Optionally, the processor 1010 is configured to update a splicing end video frame of the first video image sequence; update a splicing start video frame of the second video image sequence; add or subtract spliced video frames of the first video image sequence; and add or subtract spliced video frames of the second video image sequence.
In this embodiment, the start video frame and the end video frame at the splicing position may be increased, decreased, or updated according to the specific needs of the user, so that the video frame at the splicing position is more suitable.
Optionally, a user input unit 1007, configured to receive a seventh input to the target video by the user;
a display unit 1006 for displaying a first video editing window of the target video at the first video processing apparatus in response to the seventh input;
a user input unit 1007 configured to receive an eighth input to the first video editing window by a user;
a processor 1010, responsive to the eighth input, for updating the target video in accordance with editing information determined in accordance with the eighth input;
a radio frequency unit 1001, configured to send the editing information to a second video processing apparatus, so that the second video processing apparatus updates the target video synchronously according to the editing information.
In the embodiment of the application, the target video can be edited by a plurality of video processing devices, and the plurality of video processing devices can perform video editing operations with different functions, which meets the user's need for multi-user cooperation when editing a video and improves editing efficiency.
It should be understood that in the embodiment of the present application, the input Unit 1004 may include a Graphics Processing Unit (GPU) 10041 and a microphone 10042, and the Graphics Processing Unit 10041 processes image data of still pictures or videos obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode. The display unit 1006 may include a display panel 10061, and the display panel 10061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 1007 includes a touch panel 10071 and other input devices 10072. The touch panel 10071 is also referred to as a touch screen. The touch panel 10071 may include two parts, a touch detection device and a touch controller. Other input devices 10072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein. The memory 1009 may be used to store software programs as well as various data, including but not limited to application programs and operating systems. Processor 1010 may integrate an application processor that handles primarily operating systems, user interfaces, applications, etc. and a modem processor that handles primarily wireless communications. It will be appreciated that the modem processor described above may not be integrated into processor 1010.
The embodiments of the present application further provide a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the video processing method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and so on.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run a program or an instruction to implement each process of the above video processing method embodiment, and can achieve the same technical effect, and the details are not repeated here to avoid repetition.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as system-on-chip, system-on-chip or system-on-chip, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may include performing the functions in a substantially simultaneous manner or in a reverse order based on the functions involved, e.g., the methods described may be performed in an order different than that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a computer software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.
Claims (11)
1. A video processing method, comprising:
receiving a first input of a user to a first video processing device;
responding to the first input, and displaying a first video image sequence acquired by a first camera of the first video processing device on a video preview interface;
under the condition that a second video image sequence acquired by a second camera of a second video processing device is displayed on a video preview interface, generating a target video according to the first video image sequence and the second video image sequence;
wherein the target video comprises at least one frame of a first video image in the first sequence of video images and at least one frame of a second video image in the second sequence of video images.
2. The video processing method of claim 1, wherein the video preview interface comprises a main window and a first sub-window, the main window being used for displaying the first sequence of video images, the first sub-window being used for displaying the second sequence of video images;
generating a target video from the first video image sequence and the second video image sequence, comprising:
receiving a second input of the first sub-window from the user;
in response to the second input, swapping display content in the main window and the first sub-window;
and performing video splicing on at least one frame of first video image and at least one frame of second video image displayed in the main window to obtain the target video.
3. The video processing method of claim 1, wherein the video processing method further comprises:
displaying at least one device identification indicating a video processing device communicatively connected to the first video processing device;
receiving a third input of a target device identification from the at least one device identification by the user;
and in response to the third input, displaying a third video image sequence acquired by a third camera of a third video processing device in a second sub-window, wherein the third video processing device is the video processing device indicated by the target device identification.
4. The video processing method according to claim 3, wherein the video processing method further comprises:
determining a relative shooting orientation of the third camera according to the image contents of the third video image sequence and the first video image sequence, wherein the relative shooting orientation is the shooting orientation of the third camera relative to the first camera;
and determining the target display position of the second sub-window according to the relative shooting orientation.
5. The video processing method of claim 1, wherein the video processing method further comprises:
receiving a fourth input of the video preview interface from the user;
controlling the first camera and the second camera to stop collecting video images in response to the fourth input under the condition that the first video image sequence and the second video image sequence are video image sequences collected in real time in a video recording process;
and in the case that the first video image sequence is a video image in a recorded first video and the second video image sequence is a video image in a recorded second video, stopping playing the first video and the second video in response to the fourth input.
6. The video processing method of claim 1, wherein the video processing method further comprises:
receiving a fifth input of the target video from the user;
in response to the fifth input, displaying a video adjustment window, the video adjustment window including an adjustment control, at least one first video thumbnail, and at least one second video thumbnail; the at least one first video thumbnail is a thumbnail of the at least one frame of the first video image, the at least one second video thumbnail is a thumbnail of the at least one frame of the second video image, and the adjustment control is used to update the video frame of the target video;
receiving a sixth input of the adjustment control by the user;
and responding to the sixth input, updating the display position of the adjusting control, and updating the video frame of the target video according to the updated display position of the adjusting control.
7. The video processing method according to claim 6, wherein said updating the video frames of the target video comprises at least one of:
updating a splicing end video frame of the first video image sequence;
updating a splicing start video frame of the second video image sequence;
adding or subtracting spliced video frames of the first video image sequence;
adding or subtracting spliced video frames of the second video image sequence.
8. The video processing method according to claim 1, wherein after generating the target video from the first video image sequence and the second video image sequence, the video processing method further comprises:
receiving a seventh input of the target video from the user;
displaying a first video editing window of the target video at the first video processing device in response to the seventh input;
receiving an eighth input of the first video editing window by a user;
in response to the eighth input, updating the target video in accordance with editing information, the editing information determined in accordance with the eighth input;
and sending the editing information to a second video processing device so that the second video processing device synchronously updates the target video according to the editing information.
9. A first video processing apparatus, comprising:
the first receiving module is used for receiving a first input of a user to the first video processing device;
the first display module is used for responding to the first input and displaying a first video image sequence acquired by a first camera of the first video processing device on a video preview interface;
the generation module is used for generating a target video according to the first video image sequence and the second video image sequence under the condition that a second video image sequence acquired by a second camera of a second video processing device is displayed on a video preview interface;
wherein the target video comprises at least one frame of a first video image in the first sequence of video images and at least one frame of a second video image in the second sequence of video images.
10. An electronic device comprising a processor, a memory and a program or instructions stored on the memory and executable on the processor, the program or instructions, when executed by the processor, implementing the steps of the video processing method according to any one of claims 1 to 8.
11. A readable storage medium, on which a program or instructions are stored, which, when executed by a processor, implement the steps of the video processing method according to any one of claims 1 to 8.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111091200.9A CN113794923B (en) | 2021-09-16 | 2021-09-16 | Video processing method, device, electronic equipment and readable storage medium |
PCT/CN2022/118527 WO2023040844A1 (en) | 2021-09-16 | 2022-09-13 | Video processing method and apparatus, electronic device, and readable storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111091200.9A CN113794923B (en) | 2021-09-16 | 2021-09-16 | Video processing method, device, electronic equipment and readable storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113794923A true CN113794923A (en) | 2021-12-14 |
CN113794923B CN113794923B (en) | 2024-06-28 |
Family
ID=79183848
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111091200.9A Active CN113794923B (en) | 2021-09-16 | 2021-09-16 | Video processing method, device, electronic equipment and readable storage medium |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN113794923B (en) |
WO (1) | WO2023040844A1 (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114390356A (en) * | 2022-01-19 | 2022-04-22 | 维沃移动通信有限公司 | Video processing method, video processing device and electronic equipment |
CN114745506A (en) * | 2022-04-28 | 2022-07-12 | 维沃移动通信有限公司 | Video processing method and electronic equipment |
CN114745507A (en) * | 2022-04-28 | 2022-07-12 | 维沃移动通信有限公司 | Shooting method, shooting device, electronic equipment and readable storage medium |
CN114845171A (en) * | 2022-03-21 | 2022-08-02 | 维沃移动通信有限公司 | Video editing method and device and electronic equipment |
WO2023040844A1 (en) * | 2021-09-16 | 2023-03-23 | 维沃移动通信(杭州)有限公司 | Video processing method and apparatus, electronic device, and readable storage medium |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2013116163A1 (en) * | 2012-01-26 | 2013-08-08 | Zaletel Michael Edward | Method of creating a media composition and apparatus therefore |
CN113194227A (en) * | 2021-04-14 | 2021-07-30 | 上海传英信息技术有限公司 | Processing method, mobile terminal and storage medium |
CN113301351A (en) * | 2020-07-03 | 2021-08-24 | 阿里巴巴集团控股有限公司 | Video playing method and device, electronic equipment and computer storage medium |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR102452973B1 (en) * | 2016-04-12 | 2022-10-11 | 삼성전자주식회사 | Image processing method and electronic device supporting the same |
CN108933881B (en) * | 2017-05-22 | 2022-05-27 | 中兴通讯股份有限公司 | Video processing method and device |
CN110336968A (en) * | 2019-07-17 | 2019-10-15 | 广州酷狗计算机科技有限公司 | Video recording method, device, terminal device and storage medium |
CN113794923B (en) * | 2021-09-16 | 2024-06-28 | 维沃移动通信(杭州)有限公司 | Video processing method, device, electronic equipment and readable storage medium |
- 2021-09-16: CN202111091200.9A filed in China (granted as CN113794923B, active)
- 2022-09-13: PCT/CN2022/118527 filed (WO2023040844A1, application filing)
Also Published As
Publication number | Publication date |
---|---|
WO2023040844A8 (en) | 2023-11-02 |
WO2023040844A9 (en) | 2023-05-04 |
WO2023040844A1 (en) | 2023-03-23 |
CN113794923B (en) | 2024-06-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113794923B (en) | Video processing method, device, electronic equipment and readable storage medium | |
WO2022100712A1 (en) | Method and system for displaying virtual prop in real environment image, and storage medium | |
CN112492212A (en) | Photographing method and device, electronic equipment and storage medium | |
CN112672061B (en) | Video shooting method and device, electronic equipment and medium | |
WO2023134583A1 (en) | Video recording method and apparatus, and electronic device | |
CN111679772B (en) | Screen recording method and system, multi-screen device and readable storage medium | |
WO2023030306A1 (en) | Method and apparatus for video editing, and electronic device | |
CN112954209B (en) | Photographing method and device, electronic equipment and medium | |
CN114025237B (en) | Video generation method and device and electronic equipment | |
CN114125297B (en) | Video shooting method, device, electronic equipment and storage medium | |
CN114143455B (en) | Shooting method and device and electronic equipment | |
WO2022105673A1 (en) | Video recording method and electronic device | |
CN114745505A (en) | Shooting method, shooting device, electronic equipment and readable storage medium | |
CN113873319A (en) | Video processing method and device, electronic equipment and storage medium | |
CN115631109A (en) | Image processing method, image processing device and electronic equipment | |
CN112367467B (en) | Display control method, display control device, electronic apparatus, and medium | |
CN114500852A (en) | Photographing method, photographing apparatus, electronic device, and readable storage medium | |
CN114827686A (en) | Recording data processing method and device and electronic equipment | |
CN113923392A (en) | Video recording method, video recording device and electronic equipment | |
CN114237800A (en) | File processing method, file processing device, electronic device and medium | |
CN113873147A (en) | Video recording method and device and electronic equipment | |
CN113596331A (en) | Shooting method, shooting device, shooting equipment and storage medium | |
CN112492205A (en) | Image preview method and device and electronic equipment | |
CN114449134A (en) | Shooting method and terminal equipment | |
CN114078280A (en) | Motion capture method, motion capture device, electronic device and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
GR01 | Patent grant ||