CN114520875B - Video processing method and device and electronic equipment
- Publication number: CN114520875B (application CN202210109250.3A)
- Authority: CN (China)
- Prior art keywords: video, input, frames, video frames, sub
- Legal status: Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
- G06F3/04845—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
- G06F3/0488—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
- G06F3/04883—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M1/00—Substation equipment, e.g. for use by subscribers
- H04M1/72—Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
- H04M1/724—User interfaces specially adapted for cordless or mobile telephones
- H04M1/72403—User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
- H04M1/7243—User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages
- H04M1/72439—User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages for image or video messaging
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/61—Control of cameras or camera modules based on recognised objects
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/63—Control of cameras or camera modules by using electronic viewfinders
- H04N23/631—Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
- H04N5/2621—Cameras specially adapted for the electronic generation of special effects during image pickup, e.g. digital cameras, camcorders, video cameras having integrated special effects capability
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/76—Television signal recording
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Human Computer Interaction (AREA)
- General Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Business, Economics & Management (AREA)
- General Business, Economics & Management (AREA)
- Computer Networks & Wireless Communication (AREA)
- Studio Devices (AREA)
Abstract
This application discloses a video processing method, a video processing apparatus, and an electronic device, belonging to the technical field of photography. The method includes the following steps: displaying a first video stream and a second video stream; receiving a first input from a user; and, in response to the first input, synthesizing N first video frames and M second video frames selected by the first input to obtain a target video.
Description
Technical Field
This application belongs to the technical field of camera shooting, and in particular relates to a video processing method, a video processing apparatus, and an electronic device.
Background
Typically, when a user shoots a video, the user triggers the electronic device to capture a video file, then triggers the electronic device to launch a video processing application and, within that application, adds the desired special effects to the video file, so as to obtain a video file the user is satisfied with.
However, after the electronic device captures the video file, the user must trigger the electronic device to launch the video processing application and perform multiple operations to add the desired special effects. The process of shooting a satisfactory video file is therefore cumbersome and time-consuming for the user.
As a result, the efficiency with which the electronic device captures video files is reduced.
Disclosure of Invention
Embodiments of this application aim to provide a video processing method, a video processing apparatus, and an electronic device that simplify the user's operations in the process of shooting a satisfactory video file, reduce the time consumed, and improve the efficiency with which the electronic device shoots video files.
In a first aspect, an embodiment of the present application provides a video processing method, including: displaying a first video stream and a second video stream; receiving a first input from a user; and, in response to the first input, performing image fusion on N first video frames and M second video frames selected by the first input to obtain a target video; wherein the N first video frames are video frames in the first video stream, the M second video frames are video frames in the second video stream, and N and M are both positive integers.
In a second aspect, embodiments of the present application provide a video processing apparatus, including a display module, a receiving module, and a processing module. The display module is configured to display a first video stream and a second video stream. The receiving module is configured to receive a first input from a user. The processing module is configured to, in response to the first input received by the receiving module, perform image fusion on N first video frames and M second video frames selected by the first input to obtain a target video; wherein the N first video frames are video frames in the first video stream, the M second video frames are video frames in the second video stream, and N and M are both positive integers.
In a third aspect, embodiments of the present application provide an electronic device comprising a processor, a memory and a program or instruction stored on the memory and executable on the processor, the program or instruction implementing the steps of the method according to the first aspect when executed by the processor.
In a fourth aspect, embodiments of the present application provide a readable storage medium having stored thereon a program or instructions which when executed by a processor implement the steps of the method according to the first aspect.
In a fifth aspect, embodiments of the present application provide a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and where the processor is configured to execute a program or instructions to implement a method according to the first aspect.
In the embodiments of this application, the electronic device may display a first video stream and a second video stream and, according to a first input from the user, perform image fusion on the N first video frames in the first video stream and the M second video frames in the second video stream selected by that input to obtain a target video. Because the electronic device first displays the video frames of the first video stream and of the second video stream, the user can select the required first and second video frames with a single input, and the electronic device then fuses the N first video frames and M second video frames the user needs, yielding a target video with the special effect the user wants. In other words, the user can customize which video frames are fused, so the electronic device can produce a video with the desired double-exposure special effect without the user performing multiple operations. This simplifies the user's operations in the process of shooting a satisfactory video file, reduces time consumption, and improves the efficiency with which the electronic device shoots video files.
Drawings
Fig. 1 is a first schematic diagram of a video processing method according to an embodiment of the present application;
Fig. 2 is a first example diagram of a mobile phone interface according to an embodiment of the present application;
Fig. 3A is a second example diagram of a mobile phone interface according to an embodiment of the present application;
Fig. 3B is a third example diagram of a mobile phone interface according to an embodiment of the present application;
Fig. 3C is a fourth example diagram of a mobile phone interface according to an embodiment of the present application;
Fig. 4A is a fifth example diagram of a mobile phone interface according to an embodiment of the present application;
Fig. 4B is a sixth example diagram of a mobile phone interface according to an embodiment of the present application;
Fig. 5 is a seventh example diagram of a mobile phone interface according to an embodiment of the present application;
Fig. 6 is an eighth example diagram of a mobile phone interface according to an embodiment of the present application;
Fig. 7 is a ninth example diagram of a mobile phone interface according to an embodiment of the present application;
Fig. 8 is a tenth example diagram of a mobile phone interface according to an embodiment of the present application;
Fig. 9 is a second schematic diagram of a video processing method according to an embodiment of the present application;
Fig. 10 is an eleventh example diagram of a mobile phone interface according to an embodiment of the present application;
Fig. 11 is a schematic structural diagram of a video processing apparatus according to an embodiment of the present application;
Fig. 12 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
Fig. 13 is a hardware schematic diagram of an electronic device according to an embodiment of the present application.
Detailed Description
Technical solutions in the embodiments of the present application will be clearly described below with reference to the drawings in the embodiments of the present application. It is apparent that the described embodiments are some, but not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application fall within the scope of protection of the present application.
The terms "first", "second", and the like in the description and in the claims are used to distinguish between similar objects and not necessarily to describe a particular sequence or chronological order. It is to be understood that the data so used may be interchanged where appropriate, so that the embodiments of the present application can be implemented in orders other than those illustrated or described herein. Objects identified by "first", "second", etc. are generally of one type, and the number of objects is not limited; for example, the first object may be one or more objects. Furthermore, in the description and claims, "and/or" denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
The video processing method, the video processing device and the electronic equipment provided by the embodiment of the application are described in detail below through specific embodiments and application scenes thereof with reference to the accompanying drawings.
The embodiment of the application can be applied to video shooting scenes.
In the related art, a user may trigger the electronic device to shoot a video file, then select video material in a video processing application on the electronic device and adjust the shooting parameters of that material, so that the video processing application can combine the adjusted material with the video file to produce a video file with an overlapping (double-exposure) special effect, thereby obtaining a video file the user is satisfied with. However, because the user must perform multiple operations to trigger the electronic device to shoot a satisfactory video file, the user's operations are cumbersome.
By contrast, in the embodiments of the present application, the user may trigger the electronic device to display video stream 1 and video stream 2 and then perform a single input, so that the electronic device fuses the user-selected portion of the video frames in video stream 1 with the user-selected portion of the video frames in video stream 2 to obtain a video file with an overlapping (ghost) special effect. In other words, the user can trigger the electronic device to shoot a satisfactory video file with a single input, which simplifies the user's operations.
Fig. 1 shows a flowchart of video processing according to an embodiment of the present application. As shown in fig. 1, the video processing method provided in the embodiment of the present application may include the following steps 101 to 103.
Step 101, the video processing device displays the first video stream and the second video stream.
Alternatively, in the embodiment of the present application, the video processing apparatus may display the first video stream and the second video stream in the video processing interface.
Further optionally, in the embodiment of the present application, while displaying the desktop, the video processing apparatus may launch the shooting application and display the shooting interface in response to the user's click on the shooting application on the desktop, enable the "double exposure recording" mode in response to the user's click on the "double exposure recording" control in the shooting interface, and acquire the first video stream and the second video stream according to the user's input. The video processing apparatus may then update the shooting interface to a video processing interface containing the first video stream and the second video stream.
Optionally, in an embodiment of the present application, the first video stream may be: video streams collected by cameras of the video processing device, video streams prestored in the video processing device, or video streams downloaded by the video processing device from other devices. Wherein the first video stream may include at least one video frame therein.
Optionally, in an embodiment of the present application, the second video stream may be: video streams collected by cameras of the video processing device, video streams prestored in the video processing device, or video streams downloaded by the video processing device from other devices. Wherein the second video stream may include at least one video frame therein. It should be noted that, the "video stream" in the present application may be understood as continuous video data collected by a camera.
Optionally, in the embodiment of the present application, in a case where the first video stream and the second video stream are both video streams collected by a camera of the video processing device, the first video stream and the second video stream may be video streams collected by the same camera (or different cameras).
The number of video frames in the second video stream may be the same as or different from the number of video frames in the first video stream; the shooting parameters of the second video stream may be different from the shooting parameters of the first video stream; the image content of the video frames in the second video stream may be the same as or different from the image content of the video frames in the first video stream.
The shooting parameters may include at least one of the following: focus parameters, auto-exposure (AE) parameters, filter parameters, beautification parameters, brightness parameters, frame-rate parameters, and the like.
It will be appreciated that the user may set different shooting parameters to achieve the video stream of the effect desired by the user so that the desired fusion material (i.e. the first video stream and the second video stream) may be obtained.
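The embodiment does not specify how such parameters are represented; as a purely illustrative sketch, the adjustable shooting parameters listed above could be grouped into a single structure per stream (all field names and default values below are assumptions, not part of the patent):

```python
from dataclasses import dataclass

@dataclass
class ShootingParams:
    """Hypothetical grouping of the shooting parameters listed above."""
    focus: float = 0.5           # focus parameter (normalized position)
    auto_exposure: bool = True   # auto-exposure (AE) parameter
    filter_name: str = "none"    # filter parameter
    beauty_level: int = 0        # beautification parameter
    brightness: float = 0.0      # brightness parameter
    frame_rate: int = 30         # frame-rate parameter (fps)

# Two streams recorded with different parameters, as suggested above:
stream1_params = ShootingParams(filter_name="warm")
stream2_params = ShootingParams(brightness=0.2)
```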
Alternatively, in the embodiment of the present application, the video processing apparatus may display video frames in the first video stream and the second video stream in the form of thumbnails, so as to display the first video stream and the second video stream.
It should be noted that, the above "displaying the video frames in the first video stream and the second video stream in the form of thumbnails" may be understood as: the video processing device may display a thumbnail of each video frame in the first video stream and a thumbnail of each video frame in the second video stream.
For example, a video processing device is taken as a mobile phone for illustration. As shown in fig. 2, the mobile phone may display video frames in the first video stream and the second video stream, respectively, in the form of thumbnails in a video processing interface (e.g., interface 10), i.e., the video processing apparatus displays thumbnails (e.g., reference numerals 11 to 16) of each video frame in the first video stream (e.g., video stream one) and displays thumbnails (e.g., reference numerals 17 to 21) of each video frame in the second video stream (e.g., video stream two).
The following will take, as an example, a video stream collected by a camera of the video processing device as both the first video stream and the second video stream.
Optionally, in the embodiment of the present application, before step 101, the video processing method provided in the embodiment of the present application may further include step 201 or step 202 described below.
Step 201, the video processing device records a first video stream and a second video stream sequentially through a camera.
Optionally, in this embodiment of the application, after the video processing apparatus enables the "double exposure recording" mode, it may activate the rear main camera in response to the user's click on the "single main camera recording" control in the shooting interface, and then, in response to the user's click on the "shoot" control in the shooting interface, capture the first video stream and the second video stream in sequence through the rear main camera, thereby recording both streams one after the other with a single camera (the rear main camera).
For example, as shown in Fig. 3A, the mobile phone displays a shooting interface (e.g., interface 22) with the "double exposure recording" mode enabled, and interface 22 includes a "single main camera recording" control. The mobile phone can activate the rear main camera in response to the user's click on that control, allowing the user to adjust shooting parameters such as focus, AE, filter, beautification, brightness, and frame rate, aim the rear main camera at a person, and then click the "shoot" control in interface 22, so that the phone captures a first video stream of the person with the adjusted parameters. As shown in Fig. 3B, after capturing the first video stream of the person, the phone may display a "save data stream" control; when the user clicks it, the phone stores the first video stream and reopens the rear main camera, so that the user can adjust the shooting parameters again, aim the rear main camera at a landscape, and click the "shoot" control in interface 22. As shown in Fig. 3C, after the user clicks "shoot", the phone captures a second video stream of the landscape with the newly adjusted parameters.
Step 202, the video processing device records the first video stream and the second video stream in parallel through two cameras.
Optionally, in this embodiment of the application, after the video processing apparatus enables the "double exposure recording" mode, it may activate the front camera and a rear camera (or two rear cameras) in response to the user's click on the "front and rear dual shooting recording" control (or the "rear dual shooting recording" control) in the shooting interface, and then, in response to the user's click on the "shoot" control in the shooting interface, capture the first video stream and the second video stream simultaneously through the two cameras, thereby recording both streams in parallel.
For example, as shown in Fig. 4A, the mobile phone displays a shooting interface (e.g., interface 23) that includes a "front and rear dual shooting recording" control. The mobile phone can activate the front camera and the rear camera in response to the user's click on that control, allowing the user to adjust shooting parameters such as focus, AE, filter, beautification, brightness, and frame rate, aim the front camera and the rear main camera at person 1 and person 2 respectively, and then click the "shoot" control in interface 23. As shown in Fig. 4B, after the user clicks the "shoot" control, the mobile phone captures a first video stream of person 1 through the front camera and a second video stream of person 2 through the rear camera, and displays a first window (e.g., window 24) and a second window (e.g., window 25) in interface 23, where window 24 displays the video frames of the first video stream of person 1 and window 25 displays the video frames of the second video stream of person 2.
Therefore, the video processing device can record the first video stream and the second video stream respectively through different recording modes so as to obtain the first video stream and the second video stream with different effects, and therefore the diversity of special effects of the target video obtained through shooting can be improved.
Of course, in the case where the video processing apparatus displays the first video stream and the second video stream, the user may also preview video frames in the first video stream (or the second video stream), as will be exemplified below.
Alternatively, in the embodiment of the present application, the above step 101 may be specifically implemented by the following step 101a, and after the above step 101, the video processing method provided in the embodiment of the present application may further include the following steps 301 and 302.
In step 101a, the video processing apparatus displays video frames in the first video stream and the second video stream in the form of thumbnails on the video processing interface.
Further alternatively, in an embodiment of the present application, the video processing apparatus may display a thumbnail of each video frame of the first video stream in a first display area in the video processing interface, and display a thumbnail of each video frame of the second video stream in a second display area in the video processing interface. Wherein the first display area is adjacent to the second display area.
The "display area adjacent to the first display area" may be understood as: and the display area is positioned in a preset range of the first display area.
For example, referring to fig. 3C, as shown in fig. 5, the mobile phone may update the interface 23 to a video processing interface (e.g., interface 26), where a thumbnail of a video frame of a first video stream (e.g., video stream one) is displayed in a first display area (e.g., display area 27) in the interface 26, and a thumbnail of a video frame of a second video stream (e.g., video stream two) is displayed in a second display area (e.g., display area 28) in the interface 26, where the display area 28 is: a display area adjacent to the display area 27.
Step 301, the video processing apparatus receives a third input of a first thumbnail by a user.
In this embodiment of the present application, the first thumbnail is a thumbnail of any video frame in a target video stream, where the target video stream is: a first video stream or a second video stream.
It will be appreciated that the target video stream is the video stream on which the user operates.
In this embodiment of the present application, the third input is used to select a video frame for previewing.
Further optionally, in an embodiment of the present application, the third input may specifically be: the touch input of the user to the display screen, or the voice instruction input by the user, or the specific gesture input by the user, or other feasibility inputs, may be specifically determined according to the actual use requirement, and the embodiment of the application is not limited.
The specific gesture can be any one of a single-click gesture, a sliding gesture, a dragging gesture, a pressure recognition gesture, a long-press gesture, an area change gesture, a double-press gesture and a double-click gesture; the click input may be a single click input, a double click input, or any number of click inputs, and may be a long press input or a short press input. For example, the third input may be: the user clicks on the first thumbnail displayed on the display screen.
In step 302, the video processing apparatus responds to the third input, and displays a first video frame corresponding to the first thumbnail in the video processing interface.
Further optionally, in the embodiment of the present application, the video processing device may update the thumbnail in the video processing interface to a first video frame corresponding to the first thumbnail, so as to display the first video frame corresponding to the first thumbnail.
For example, in connection with fig. 5, as shown in fig. 6, after the mobile phone displays the thumbnail of the video frame of the first video stream and the thumbnail of the video frame of the second video stream in the interface 26, the mobile phone may update the thumbnail of the video frame of the first video stream and the thumbnail of the video frame of the second video stream to the first video frame (e.g., the video frame 29) corresponding to the first thumbnail according to the third input (e.g., the click input) of the first thumbnail by the user.
Therefore, the video processing device can display the thumbnail of the video frame of the first video stream and the thumbnail of the video frame of the second video stream respectively, and display any video frame according to the input of the thumbnail of any video frame by the user, so that the user can quickly check whether any video frame is the video frame required by the user.
Step 102, the video processing device receives a first input from a user.
In this embodiment of the present application, the first input is used to select video frames for video processing, where the video processing may include: image fusion; or image fusion and video frame insertion.
Optionally, in an embodiment of the present application, the first input specifically includes: the touch input of the user to the display screen, or the voice instruction input by the user, or the specific gesture input by the user, or other feasibility inputs, may be specifically determined according to the actual use requirement, and the embodiment of the application is not limited.
The specific gesture can be any one of a single-click gesture, a sliding gesture, a dragging gesture, a pressure recognition gesture, a long-press gesture, an area change gesture, a double-press gesture and a double-click gesture; the click input may be a single click input, a double click input, or any number of click inputs, and may be a long press input or a short press input. For example, the first input may be: and clicking input of the control displayed on the display screen by the user.
Alternatively, in embodiments of the present application, the first input may include at least one input.
Wherein, in the case where the video processing includes image fusion and video frame insertion, a part of the at least one input is input for selecting a video frame for image fusion and another part is input for selecting a video frame for video frame insertion.
Step 103, the video processing device responds to the first input, and performs image fusion on the N first video frames and the M second video frames selected by the first input to obtain a target video.
In this embodiment of the present application, the N first video frames are video frames in the first video stream, the M second video frames are video frames in the second video stream, and N and M are both positive integers.
Alternatively, in the embodiment of the present application, the number of N first video frames and the number of M second video frames may be the same or different, that is, N may be equal to M, or N may not be equal to M.
Optionally, in the embodiment of the present application, in a case where the video processing includes image fusion, the video processing device may perform image fusion on N first video frames and M second video frames to obtain the target video.
Optionally, in this embodiment of the application, when N = M, the video processing apparatus may fuse the first of the N first video frames with the first of the M second video frames to obtain the first fused frame, fuse the second first video frame with the second second video frame to obtain the second fused frame, and so on, until the last first video frame and the last second video frame are fused into the last fused frame, yielding N fused frames. The video processing apparatus may then encapsulate these N frames into a video file to obtain the target video.
Optionally, in the embodiment of the present application, when N < M, the video processing apparatus may fuse the first first video frame with the first second video frame, the second first video frame with the second second video frame, and so on, until the N-th first video frame is fused with the N-th second video frame, yielding N fused frames. The video processing apparatus may then encapsulate the N fused frames together with the remaining M − N second video frames into a video file to obtain the target video. The M − N second video frames are all of the M second video frames except the N frames that were fused.
It should be noted that, in the case where N > M, the description of the image fusion performed by the video processing apparatus on the N first video frames and the M second video frames may refer to the specific description of the case where N < M, which is not described herein in detail.
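The embodiment does not prescribe a concrete fusion operator or container format. The following Python/OpenCV sketch illustrates the pairwise fusion and packaging logic described above for both the N = M and N ≠ M cases; the 50/50 blend weights, the function name, and the mp4v container are illustrative assumptions:

```python
import cv2

def fuse_streams(frames1, frames2, out_path="target.mp4", fps=30):
    """Pairwise-fuse two frame lists: the i-th frames of the two streams are
    blended; if one stream is longer, its leftover frames are appended
    unchanged (the N < M / N > M cases above). All frames are assumed to
    share one resolution and 8-bit BGR format."""
    n = min(len(frames1), len(frames2))
    fused = [cv2.addWeighted(frames1[i], 0.5, frames2[i], 0.5, 0)  # assumed 50/50 weights
             for i in range(n)]
    fused += frames1[n:] + frames2[n:]  # remaining frames of the longer stream

    # Encapsulate the fused frames into a video file (the target video).
    h, w = fused[0].shape[:2]
    writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
    for frame in fused:
        writer.write(frame)
    writer.release()
```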
Optionally, in a possible implementation manner of the embodiment of the present application, the video processing device may adjust transparency values of N first video frames and M second video frames, and perform image fusion on the N first video frames and the M second video frames to obtain the target video. Wherein the video frames of the target video have overlapping (ghost) effects.
Further optionally, in the embodiment of the present application, the video processing apparatus may adjust the transparency values of the N first video frames and of the M second video frames in the same direction (both down or both up), or in opposite directions (one set up and the other down), thereby adjusting the transparency values of both sets of frames before performing image fusion on the N first video frames and the M second video frames.
For example, assume the image content of both the first and second video streams includes a person, with person 1 in the first video stream walking to the left and person 2 in the second video stream walking to the right. The video processing apparatus may adjust down the transparency values of the N first video frames and of the M second video frames, and then fuse the adjusted frames, so that the resulting target video shows the two figures passing through each other, i.e., person 1 walking through person 2.
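As a minimal illustration of this transparency-based fusion, the per-stream transparency values can be treated as blend weights (the 0.6 opacities below are assumed values; the patent leaves the exact adjustment to the user):

```python
import cv2

def ghost_blend(frame1, frame2, alpha1=0.6, alpha2=0.6):
    """Blend two frames with per-stream opacities; giving both streams a
    visible weight lets each subject show through the other, producing the
    'person 1 walks through person 2' overlap (ghost) effect described
    above. addWeighted saturates any overflow back into the 8-bit range."""
    return cv2.addWeighted(frame1, alpha1, frame2, alpha2, 0)
```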
Optionally, in another possible implementation of the embodiment of the present application, the video processing apparatus may designate a certain region of the N first video frames (or the M second video frames) as the foreground region, decrease the image transparency of that region, and increase the image transparency of all other sub-regions, and then fuse the N first video frames with the M second video frames to obtain the target video. In the resulting target video, all sub-regions of each video frame except the designated region have the overlapping (ghost) effect.
For example, assuming that the first video stream includes a person and the second video stream includes a landscape, the video processing apparatus may adjust down the image transparency of the region where the person is located and adjust up the image transparency of all sub-regions except the region where the person is located (i.e., the region where the landscape is located), so that the video processing apparatus may perform image fusion on the adjusted N first video frames and M second video frames, and thus may form a landscape with overlapping (ghost) special effects in the obtained target video.
Optionally, in another possible implementation manner of the embodiment of the present application, the video processing device may extract N sub-images from N first video frames and extract M sub-images from M second video frames, so that the video processing device may perform image fusion on the N sub-images and the M sub-images to obtain the target video. The N sub-images are matched with the image contents of the M sub-images, and the video frames of the target video have special effects of different picture fusion.
It should be noted that, the "special effect of different frame fusion" can be understood as: special effects of objects at different times (or different places) of the same landscape appear in the picture of the video. Wherein the object may be any one of the following: characters, objects, animals, plants, etc.
Alternatively, in the embodiment of the present application, in the case where the video processing includes image fusion and video frame insertion, the video processing apparatus may insert an inserted video frame (for example, an intermediate video frame in the embodiment described below) into a video frame obtained after image fusion is performed on N first video frames and M second video frames, to obtain the target video.
Optionally, in the embodiment of the present application, after obtaining the target video, the video processing apparatus may update the video processing interface and display a third window in the updated interface for displaying the video frames of the target video. The user can then view the target video in the third window and click a "save" control in the updated interface to have the video processing apparatus store the target video.
For example, the mobile phone may perform image fusion on N video frames and M video frames to obtain a target video. Referring to fig. 6, as shown in fig. 7, after the mobile phone obtains the target video, the mobile phone may update the interface 26 to the interface 30, and display a third window (for example, window 31) in the interface 30, where the window 31 is used to display a video frame of the target video, so that the user may view the target video in the window 31.
In this embodiment of the present application, if a user wants to add an overlapping (ghost) effect to a video file, the user may directly trigger the video processing apparatus to acquire a first video stream and a second video stream and then perform a single input, whereupon the apparatus fuses the selected portion of the video frames in the first video stream with the selected portion of the video frames in the second video stream, obtaining a target video with the overlapping (ghost) effect.
The user may also trigger the video processing apparatus to acquire the first video stream and the second video stream with different shooting parameters, so that the target video obtained by the video processing apparatus has the overlapping (ghost) special effect (i.e., the double-exposure special effect).
The user may also trigger the video processing apparatus to acquire a first video stream and a second video stream that have different shooting parameters but identical video-frame content, so that the resulting target video has a high-dynamic-range (HDR) effect and a frame rate higher than that of the first video stream (or the second video stream) alone.
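The embodiment does not name an HDR algorithm. Assuming the two streams are co-registered and differ mainly in exposure, one plausible realization is OpenCV's Mertens exposure fusion (a sketch, not the patent's prescribed method); the higher frame rate mentioned above could likewise come from interleaving the two streams' frames rather than fusing them:

```python
import cv2

def hdr_fuse(frame_dark, frame_bright):
    """Exposure-fuse two co-registered 8-bit frames of the same scene using
    Mertens fusion; the result approximates an HDR rendition without
    needing the exposure values."""
    merged = cv2.createMergeMertens().process([frame_dark, frame_bright])
    return (merged * 255).clip(0, 255).astype("uint8")  # back to 8-bit
```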
According to the video processing method provided by the embodiments of this application, the video processing apparatus can display a first video stream and a second video stream and, according to a first input from the user, fuse the N first video frames of the first video stream with the M second video frames of the second video stream selected by that input to obtain a target video. Because the apparatus first displays the video frames of both streams, the user can select the needed first and second video frames with a single input, and the apparatus then fuses the N first video frames and M second video frames the user needs, producing a target video with the special effect the user wants. In other words, the user can customize which video frames are fused, so the apparatus can deliver the double-exposure special effect the user wants without multiple operations. This simplifies the user's operations in the process of shooting a satisfactory video file, reduces time consumption, and improves the efficiency with which the video processing apparatus shoots video files.
In addition, the video processing apparatus can keep saving the video streams throughout the recording process, storing the two streams in two separate video queues for double-exposure post-processing. The method is applicable to many scenes, times, and places, offering greater creativity and playability.
In the following, it will be illustrated how the video processing apparatus performs image fusion, with different examples.
Optionally, in a possible implementation manner of the embodiment of the present application, before step 103, the video processing method provided in the embodiment of the present application may further include steps 401 to 403 described below.
Step 401, the video processing device divides the third video frame into at least two sub-areas.
In this embodiment of the present application, the third video frame is any video frame of N first video frames or M second video frames.
Further alternatively, in the embodiment of the present application, in a case where the video processing apparatus displays the third video frame, the video processing apparatus may divide the third video frame into at least two sub-areas.
It should be noted that, for the explanation of the video processing device displaying the third video frame, reference may be made to the specific description of the video processing device displaying the first video frame, which is not repeated herein in the embodiments of the present application.
Further optionally, in the embodiment of the present application, the video processing device may perform image recognition on a third video frame, and divide the third video frame into at least two sub-areas according to a video object in the third video frame obtained by the image recognition; alternatively, the third video frame may be divided into at least two sub-regions according to the user's input.
Alternatively, in the embodiment of the present application, the above step 401 may be specifically implemented by the following step 401a or step 401 b.
In step 401a, the video processing apparatus divides the third video frame into at least two sub-areas according to the object type of the video object in the third video frame.
Further alternatively, in an embodiment of the present application, the object type may include any one of the following: static objects or dynamic objects.
It should be noted that, the "static object" can be understood as: objects that are stationary in the video stream, such as buildings, roads, etc. The above "dynamic object" can be understood as: objects moving in the video stream, such as pedestrians, vehicles in motion, etc.
Further optionally, in the embodiment of the present application, the video processing device may divide the area where the video objects with the same object type are located in the third video frame into one sub-area, so as to obtain at least two sub-areas.
It is understood that at least two sub-regions include: the region where the static object is located and the region where the dynamic object is located.
In this embodiment of the application, since the user may want different image fusion modes for different kinds of objects (static objects, dynamic objects, etc.), the video processing apparatus may divide the third video frame into the region where the static objects are located and the region where the dynamic objects are located, so that the user can set the image fusion mode of each sub-region.
Therefore, the video processing device can divide the third video frame into the region where the static object is located and the region where the dynamic object is located, so that the user can set the image fusion modes of different sub-regions according to the requirements, and the flexibility of setting the image fusion modes of the video frame can be improved.
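The embodiment leaves the recognition method open. As one hedged illustration of step 401a, the dynamic-object sub-region could be approximated by simple frame differencing (the threshold and morphology settings below are assumptions):

```python
import cv2

def dynamic_region_mask(prev_frame, cur_frame, thresh=25):
    """Approximate the dynamic-object sub-region by differencing two
    consecutive frames; pixels below the threshold are treated as the
    static-object sub-region."""
    g1 = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(cur_frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(g1, g2)
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    return cv2.dilate(mask, None, iterations=2)  # 255 = dynamic, 0 = static
```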
In step 401b, when the video processing device receives the sliding input of the user to the third video frame, the video processing device divides the third video frame into at least two sub-areas based on the sliding track of the sliding input.
Further alternatively, in an embodiment of the present application, the sliding track may be any one of the following: closed track, non-closed track.
Further optionally, in the embodiment of the present application, in a case where the sliding track is a closed track, the video processing apparatus may determine each area enclosed by the closed track as one sub-area, thereby obtaining at least two sub-areas.
Specifically, the video processing apparatus may determine each region enclosed by the closed track as a first sub-region, thereby determining at least two first sub-regions, and then identify the edge lines of the video objects within those first sub-regions through an artificial intelligence (AI) algorithm, so as to determine the at least two sub-regions according to those edge lines.
For example, referring to fig. 6, as shown in fig. 8, the user may perform a sliding input on the video frame 29, where the sliding track of the sliding input is a track 32, so that the mobile phone may determine an area (for example, an area 33) enclosed by the input track 32 as one sub-area, so as to obtain at least two sub-areas, namely, an area 33 and an area 34, and the area 34 is all the areas except the area 33.
Further alternatively, in the embodiment of the present application, in a case where the sliding track is a non-closed track, the video processing apparatus may determine the area within a preset range of the non-closed track as one sub-area, so as to determine at least two sub-areas.
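As a sketch of step 401b, a closed sliding track could be turned into two sub-region masks as follows (the representation of the track as a list of (x, y) touch points is an assumption):

```python
import cv2
import numpy as np

def regions_from_track(frame_shape, track_points):
    """Divide a frame into two sub-regions from a closed sliding track:
    the area enclosed by the track, and everything outside it."""
    mask = np.zeros(frame_shape[:2], dtype=np.uint8)
    pts = np.array(track_points, dtype=np.int32)
    cv2.fillPoly(mask, [pts], 255)          # first sub-region (enclosed area)
    return mask, cv2.bitwise_not(mask)      # second sub-region (the rest)
```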
Further alternatively, in the embodiment of the present application, after the video processing apparatus determines at least two sub-areas, the video processing apparatus may mark the at least two sub-areas in a first marking manner, so that the user may view the at least two sub-areas.
Specifically, the first marking mode may include at least one of the following: a dotted line frame marking method, a highlight marking method, a color marking method, a gray scale marking method, a preset transparency marking method, a blinking marking method, and the like.
In this embodiment of the application, since the user may want to set the image fusion mode of a particular sub-region of the third video frame, the user can perform a sliding input on the third video frame so that the video processing apparatus divides it into at least two sub-regions based on the sliding track, allowing the user to set the image fusion mode of each sub-region.
Therefore, the video processing device can divide the third video frame into at least two sub-areas according to the sliding input of the user on the third video frame, so that the user can set the image fusion modes of different sub-areas according to the requirements, and the flexibility of setting the image fusion modes of the video frame can be improved.
Step 402, the video processing device receives a second input from a user to a first sub-region of the at least two sub-regions.
In this embodiment of the present application, the second input is used to set the transparency of the image of the sub-region.
Further optionally, in an embodiment of the present application, the second input may specifically be: the touch input of the user to the display screen, or the voice instruction input by the user, or the specific gesture input by the user, or other feasibility inputs, may be specifically determined according to the actual use requirement, and the embodiment of the application is not limited.
The specific gesture can be any one of a single-click gesture, a sliding gesture, a dragging gesture, a pressure recognition gesture, a long-press gesture, an area change gesture, a double-press gesture and a double-click gesture; the click input may be a single click input, a double click input, or any number of click inputs, and may be a long press input or a short press input. For example, the second input may be: the user inputs a single click of a first sub-region displayed on the display screen.
In step 403, the video processing device, in response to the second input, decreases the image transparency of the first sub-region and increases the image transparency of the second sub-region.
In this embodiment of the present application, the second sub-area includes all sub-areas except the first sub-area of the at least two sub-areas.
In this embodiment of the present application, the video processing apparatus may divide the third video frame into at least two sub-regions so that the user can select a first sub-region among them; the apparatus then treats the first sub-region as the foreground region and fuses the N first video frames with the M second video frames to obtain the target video. In the target video, all sub-regions of each video frame except the first sub-region have the overlapping (ghost) effect.
In this way, the video processing apparatus can divide any one of the N first video frames or M second video frames into at least two sub-regions, so that with a single user input the apparatus decreases the image transparency of the first sub-region the user needs and increases the image transparency of the second sub-region, producing a target video with the special effect the user wants. This simplifies the user's operations in the process of shooting a satisfactory video file and reduces time consumption.
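A minimal sketch of steps 402 and 403, assuming the sub-regions are represented as a binary foreground mask and that lower image transparency corresponds to a higher blend weight (the specific alpha values are assumptions):

```python
import numpy as np

def region_weighted_fuse(frame_a, frame_b, fg_mask, fg_alpha=0.9, bg_alpha=0.4):
    """Fuse frame_a over frame_b with high opacity inside the first
    sub-region (fg_mask) and low opacity elsewhere, so only the second
    sub-region shows the overlap (ghost) effect."""
    alpha = np.where(fg_mask[..., None] > 0, fg_alpha, bg_alpha).astype(np.float32)
    fused = frame_a.astype(np.float32) * alpha + frame_b.astype(np.float32) * (1.0 - alpha)
    return fused.clip(0, 255).astype(np.uint8)
```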
Optionally, in another possible implementation manner of the embodiment of the present application, before step 103, the video processing method provided in the embodiment of the present application may further include a step 501 and a step 502 described below, and step 103 may be specifically implemented by step 103a described below.
In step 501, the video processing apparatus extracts the ith first sub-image from the ith first video frame in the N first video frames.
In the embodiment of the present application, i is a positive integer.
Step 502, the video processing apparatus extracts the ith second sub-image from the ith second video frame in the M second video frames.
In this embodiment of the present application, the image content of the i-th second sub-image matches that of the i-th first sub-image.
Further alternatively, in this embodiment of the present application, the video processing device may first perform image recognition on the ith first video frame to obtain a first image content, and perform image recognition on the ith second video frame to obtain a second image content, so that the video processing device may extract the ith first sub-image and the ith second sub-image according to the first image content and the second image content, respectively.
It should be noted that, the above "matching" can be understood as: the same, or a difference therebetween is less than or equal to a preset threshold.
Step 103a, the video processing device performs image fusion on the ith first sub-image and the ith second sub-image to obtain a target video.
It will be appreciated that the video processing apparatus may perform the above steps 501, 502 and 103a, respectively, for each of the N first video frames and each of the M second video frames, to obtain the target video.
In this embodiment of the present application, the video processing device may identify the parts of the scene showing the same video object (for example, an object, a building, a person, etc.) in two video frames, clip away the non-overlapping areas of the two frames, retain their overlapping areas (i.e., the first sub-image and the second sub-image), and then fuse the overlapping areas to obtain the target video. In the target video, the consistent scene content remains unchanged after fusion, and only the picture content that differs between the two frames carries the overlap special effect.
Therefore, the video processing device can extract, from the ith first video frame and the ith second video frame, the ith first sub-image and the ith second sub-image whose image contents match, and perform image fusion on them, which improves the diversity of the special effects of the target video obtained by shooting.
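To make steps 501, 502 and 103a concrete for one frame pair, a sketch could look like the following. The detect_object callable is a hypothetical stand-in for the image-recognition step, assumed to return an (x, y, w, h) bounding box of the same video object in a frame, and the equal fusion weights are likewise an assumption.

```python
import cv2

def extract_and_fuse(frame_i_a, frame_i_b, detect_object):
    # Steps 501/502: matte the ith first/second sub-image out of each frame.
    xa, ya, wa, ha = detect_object(frame_i_a)
    xb, yb, wb, hb = detect_object(frame_i_b)
    sub_a = frame_i_a[ya:ya + ha, xa:xa + wa]
    sub_b = frame_i_b[yb:yb + hb, xb:xb + wb]

    # Clip away the non-overlapping remainder by bringing both sub-images
    # to a common size before fusing.
    h = min(sub_a.shape[0], sub_b.shape[0])
    w = min(sub_a.shape[1], sub_b.shape[1])
    sub_a = cv2.resize(sub_a, (w, h))
    sub_b = cv2.resize(sub_b, (w, h))

    # Step 103a: fuse the matched sub-images (equal-weight double exposure).
    return cv2.addWeighted(sub_a, 0.5, sub_b, 0.5, 0)
```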
The following exemplifies video processing that includes both image fusion and video frame insertion.
It will be appreciated that a portion of the first input is used to select video frames for image fusion and another portion is used to select video frames for video frame insertion.
Optionally, in an embodiment of the present application, the first input further includes a user input on an intermediate video frame in the first video stream or in the second video stream. Specifically, the above step 103 may be implemented by the following step 103b.
In step 103b, the video processing device, in response to the first input, performs image fusion on all video frames of the N first video frames and the M second video frames except the intermediate video frame, and inserts the intermediate video frame into the video frames obtained by the image fusion to obtain the target video.
It will be appreciated that the intermediate video frame described above may be the video frame selected for video frame insertion by the other portion of the first input.
Further alternatively, in the embodiment of the present application, after the video processing device inserts the intermediate video frame into the video frames obtained by image fusion, the video processing device may further add special-effect material to the intermediate video frame according to user input.
Specifically, the special-effect material may include at least one of: photo effect material, sticker material, and the like.
Further alternatively, in the embodiment of the present application, the video processing device may insert the intermediate video frame before (or after) any video frame of the video frames obtained by image fusion, and then encapsulate the video frames after the insertion into a video file to obtain the target video. It will be appreciated that the intermediate video frame then becomes the video frame immediately before (or after) that video frame.
For example, assume the N first video frames are 30 frames, the M second video frames are 31 frames, and the intermediate video frame is the first of the M second video frames. The video processing apparatus may fuse the first of the 30 first video frames with the second of the 31 second video frames to obtain the first fused video frame, fuse the second first video frame with the third second video frame to obtain the second fused video frame, and so on, obtaining 30 fused video frames. The video processing apparatus may then insert the intermediate video frame into the 30 fused video frames, for example before the fifteenth fused video frame, to obtain the target video; the intermediate video frame is then the fifteenth video frame of the target video.
In this embodiment of the present application, since the user may want some video frames in the first video stream (or the second video stream) to be displayed on their own (i.e., without image fusion), the video processing apparatus may perform image fusion on all video frames of the N first video frames and the M second video frames except the intermediate video frame, and insert the intermediate video frame into the video frames obtained by the image fusion to obtain the target video.
Therefore, the video processing device can perform image fusion on one part of the N first video frames and the M second video frames and insert another part into the video frames obtained by the image fusion; that is, different video processing modes can be applied to different video frames, which improves the diversity of the special effects of the target video obtained by shooting.
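Following the 30/31-frame example above, a minimal sketch of step 103b is given below. The helper name fuse_and_insert, the equal fusion weights, and the assumption that all frames share one size and type are illustrative, not prescribed by the embodiment.

```python
import cv2

def fuse_and_insert(first_frames, second_frames, intermediate, insert_at):
    # first_frames:  the N first video frames (e.g. 30 frames).
    # second_frames: the M second video frames with the user-selected
    #                intermediate frame already removed (e.g. 30 of 31 frames).
    # insert_at:     target index of the intermediate frame; 14 reproduces the
    #                example above, making it the fifteenth target video frame.
    fused = [cv2.addWeighted(a, 0.5, b, 0.5, 0)
             for a, b in zip(first_frames, second_frames)]
    fused.insert(insert_at, intermediate)  # displayed on its own, unfused
    return fused  # encapsulating these frames into a file yields the target video
```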
In this embodiment of the present application, the video processing interface further includes at least one control, where each control is used to select video frames in one video stream, so that the user can operate the at least one control and the video processing apparatus can select the N first video frames and the M second video frames.
Optionally, in this embodiment of the present application, the video processing interface further includes a first control and a second control; the first control is used to select video frames in the first video stream, and the second control is used to select video frames in the second video stream. Specifically, as shown in fig. 9 in conjunction with fig. 1, the above step 102 may be implemented by the following step 102a; before "image fusion is performed on N first video frames and M second video frames selected by the first input" in the above step 103, the video processing method provided in the embodiment of the present application may further include the following steps 601 to 603; and the above step 103 may be implemented by the following step 103c.
In step 102a, the video processing device receives a first input by the user on the first control and the second control.
Further alternatively, in an embodiment of the present application, the first input may include a first sub-input and a second sub-input. The first sub-input is the user's input on the first control and is used to select the video frames for image fusion in the first video stream; the second sub-input is the user's input on the second control and is used to select the video frames for image fusion in the second video stream.
Specifically, the first sub-input may specifically be a drag input by which the user drags the first control to the first position. The second sub-input may specifically be a drag input by which the user drags the second control to the second position.
In step 601, the video processing apparatus, in response to the first input, determines a first start video frame and a first end video frame in the first video stream and a second start video frame and a second end video frame in the second video stream.
Further alternatively, in an embodiment of the present application, the video processing apparatus may determine the first start video frame and the first end video frame according to the first position, and the second start video frame and the second end video frame according to the second position.
Specifically, the video processing apparatus may determine the video frames corresponding to the thumbnails at the first position to be the first start video frame and the first end video frame, and the video frames corresponding to the thumbnails at the second position to be the second start video frame and the second end video frame.
For example, in conjunction with fig. 5, as shown in fig. 10, the interface 26 of the mobile phone further includes a first control 35 and a second control 36. The user may perform a first sub-input on the first control 35, so that the mobile phone determines a first start video frame 37 and a first end video frame 38 according to the first position, and perform a second sub-input on the second control 36, so that the mobile phone determines a second start video frame 39 and a second end video frame 40 according to the second position.
In step 602, the video processing apparatus determines N first video frames between a first start video frame and a first end video frame from the first video stream.
It will be appreciated that the N first video frames include a first starting video frame, a first ending video frame, and all first video frames between the first starting video frame and the first ending video frame.
In step 603, the video processing device determines M second video frames between the second start video frame and the second end video frame from the second video stream.
It is understood that the M second video frames include a second start video frame, a second end video frame, and all second video frames between the second start video frame and the second end video frame.
Note that the execution order of the above steps 602 and 603 is not limited in this embodiment. In one possible implementation, the video processing apparatus may perform step 602 first and then step 603; in another possible implementation, it may perform step 603 first and then step 602; in yet another possible implementation, it may perform steps 602 and 603 at the same time.
In step 103c, the video processing device performs image fusion on the N first video frames and the M second video frames to obtain the target video.
Therefore, the user can operate the first control and the second control so that the video processing device can determine the N first video frames and the M second video frames directly, without multiple inputs. This simplifies user operation in the process of shooting a video file satisfactory to the user and reduces the time consumed.
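A minimal sketch of steps 601 to 603 follows, assuming each control's drag position has already been resolved to start and end thumbnail indices; the inclusive slice mirrors the note that the selection contains both the start video frame and the end video frame. The helper name and index variables are hypothetical.

```python
def select_frames(stream, start_idx, end_idx):
    # Inclusive of both the start and the end video frame, per steps 602/603.
    return stream[start_idx:end_idx + 1]

# Hypothetical usage, with indices resolved from the first and second controls:
# n_first_frames  = select_frames(first_stream, first_start_idx, first_end_idx)
# m_second_frames = select_frames(second_stream, second_start_idx, second_end_idx)
```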
It should be noted that, for how the video processing apparatus selects the intermediate video frame, reference may be made to the above description of selecting the N first video frames and the M second video frames; the details are not repeated here.
It should be noted that, in the video processing method provided in the embodiment of the present application, the execution subject may be a video processing apparatus, or a control module in the video processing apparatus for executing the video processing method. In the embodiment of the present application, a video processing method performed by a video processing device is taken as an example, and the video processing device provided in the embodiment of the present application is described.
Fig. 11 shows a schematic diagram of one possible configuration of a video processing apparatus involved in an embodiment of the present application. As shown in fig. 11, the video processing apparatus 60 may include: the display module 61 is configured to display the first video stream and the second video stream. The receiving module 62 is configured to receive a first input from a user. The processing module 63 is configured to synthesize N first video frames and M second video frames selected by the first input in response to the first input received by the receiving module 62, so as to obtain a target video; wherein the N first video frames are video frames in the first video stream, the M second video frames are video frames in the second video stream, and N, M is a positive integer.
In a possible implementation manner, the processing module 63 is further configured to divide a third video frame into at least two sub-areas, where the third video frame is any one of the N first video frames or the M second video frames. The receiving module 62 is further configured to receive a second input from a user on a first sub-area of the at least two sub-areas divided by the processing module 63. The processing module 63 is further configured to, in response to the second input received by the receiving module 62, decrease the image transparency of the first sub-area and increase the image transparency of the second sub-area, where the second sub-area includes all sub-areas of the at least two sub-areas except the first sub-area.
In a possible implementation manner, the processing module 63 is specifically configured to divide the third video frame into at least two sub-areas according to an object type of the video object in the third video frame; or, in case that a sliding input of the user to the third video frame is received, dividing the third video frame into at least two sub-areas based on a sliding track of the sliding input.
In a possible implementation manner, the video processing interface further includes a first control and a second control; the first control is for selecting video frames in a first video stream and the second control is for selecting video frames in a second video stream. The receiving module 62 is specifically configured to receive a first input from a user to the first control and the second control. The processing module 63 is further configured to determine a first start video frame and a first end video frame in the first video stream and a second start video frame and a second end video frame in the second video stream according to the first input received by the receiving module 62; determining N first video frames between the first starting video frame and the first ending video frame from the first video stream; and determining M second video frames from the second video stream between the second start video frame and the second end video frame.
In one possible implementation, the first input includes a user input of an intermediate video frame in the first video stream or in the second video stream. The processing module 63 is specifically configured to perform image fusion on all video frames except the intermediate video frame in the N first video frames and the M second video frames, and insert the intermediate video frame into the video frames obtained by the image fusion.
In a possible implementation manner, the processing module 63 is further configured to extract an ith first sub-image from an ith first video frame in the N first video frames, where i is a positive integer, and to extract an ith second sub-image from an ith second video frame in the M second video frames, the ith second sub-image matching the image content of the ith first sub-image. The processing module 63 is specifically configured to perform image fusion on the ith first sub-image and the ith second sub-image to obtain the target video.
In a possible implementation manner, the processing module 63 is further configured to record the first video stream and the second video stream sequentially through one camera; or, the first video stream and the second video stream are recorded in parallel through two cameras.
In a possible implementation manner, the display module 61 is specifically configured to display, in a video processing interface, video frames in the first video stream and the second video stream in the form of thumbnails, respectively. The receiving module 62 is further configured to receive a third input from a user on a first thumbnail, where the first thumbnail is a thumbnail of any video frame in a target video stream, and the target video stream is: the first video stream or the second video stream. The display module 61 is further configured to display, in response to the third input received by the receiving module 62, a first video frame corresponding to the first thumbnail in the video processing interface.
According to the video processing device provided by the embodiment of the present application, the video processing device can display the first video frames of the first video stream and the second video frames of the second video stream, so that the user can make a single input on the first video frames and the second video frames as needed, and the video processing device can perform image fusion on the N first video frames and M second video frames the user wants, obtaining a target video with the desired special effect. That is, the user can customize the video frames to be fused, and the video processing device can obtain a video with the desired double-exposure special effect without multiple user operations. This simplifies user operation in the process of shooting a video file satisfactory to the user, reduces the time consumed, and improves the efficiency with which the video processing device shoots video files.
The video processing device in the embodiment of the present application may be a device, or may be a component, an integrated circuit, or a chip in a terminal. The device may be a mobile electronic device or a non-mobile electronic device. By way of example, the mobile electronic device may be a cell phone, tablet computer, notebook computer, palm computer, vehicle-mounted electronic device, wearable device, ultra-mobile personal computer (ultra-mobile personal computer, UMPC), netbook or personal digital assistant (personal digital assistant, PDA), etc., and the non-mobile electronic device may be a server, network attached storage (network attached storage, NAS), personal computer (personal computer, PC), television (TV), teller machine or self-service machine, etc., and the embodiments of the present application are not limited in particular.
The video processing device in the embodiment of the present application may be a device having an operating system. The operating system may be an Android operating system, an iOS operating system, or other possible operating systems, which are not specifically limited in the embodiments of the present application.
The video processing apparatus provided in the embodiments of the present application can implement each process implemented by the embodiments of the methods of fig. 1 to 10, and in order to avoid repetition, a description is omitted here.
Optionally, in the embodiment of the present application, as shown in fig. 12, the embodiment of the present application further provides an electronic device 70, including a processor 71, a memory 72, and a program or an instruction stored in the memory 72 and capable of running on the processor 71, where the program or the instruction implements each process of the embodiment of the video processing method when executed by the processor 71, and the process can achieve the same technical effect, and is not repeated herein.
The electronic device in the embodiment of the application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 13 is a schematic hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 100 includes, but is not limited to: radio frequency unit 101, network module 102, audio output unit 103, input unit 104, sensor 105, display unit 106, user input unit 107, interface unit 108, memory 109, and processor 110.
Those skilled in the art will appreciate that the electronic device 100 may further include a power source (e.g., a battery) for powering the various components, and that the power source may be logically coupled to the processor 110 via a power management system to perform functions such as managing charging, discharging, and power consumption via the power management system. The electronic device structure shown in fig. 13 does not constitute a limitation of the electronic device, and the electronic device may include more or less components than shown, or may combine certain components, or may be arranged in different components, which are not described in detail herein.
Wherein, the display unit 106 is configured to display the first video stream and the second video stream.
A user input unit 107 for receiving a first input of a user.
The processor 110 is configured to synthesize, in response to the first input, N first video frames and M second video frames selected by the first input, to obtain a target video.
Wherein the N first video frames are video frames in the first video stream, the M second video frames are video frames in the second video stream, and N, M is a positive integer.
According to the electronic device provided by the embodiment of the present application, the electronic device can display the first video frames of the first video stream and the second video frames of the second video stream, so that the user can make a single input on the first video frames and the second video frames as needed, and the electronic device can perform image fusion on the N first video frames and M second video frames the user wants, obtaining a target video with the desired special effect. That is, the user can customize the video frames to be fused, and the electronic device can obtain a video with the desired double-exposure special effect without multiple user operations. This simplifies user operation in the process of shooting a video file satisfactory to the user, reduces the time consumed, and improves the efficiency with which the electronic device shoots video files.
Optionally, in an embodiment of the present application, the processor 110 is further configured to divide a third video frame into at least two sub-areas, where the third video frame is any one of the N first video frames or the M second video frames.
The user input unit 107 is further configured to receive a second input from a user to a first sub-area of the at least two sub-areas.
The processor 110 is further configured to, in response to the second input, decrease the image transparency of the first sub-region and increase the image transparency of the second sub-region.
Wherein the second sub-area includes all sub-areas except the first sub-area of the at least two sub-areas.
Therefore, the electronic device can divide any video frame of the N first video frames or the M second video frames into at least two sub-areas, so that with a single input the user can have the device decrease the image transparency of the first sub-area the user wants and increase the image transparency of the second sub-area, obtaining a target video with the special effect the user wants. This simplifies user operation in the process of shooting a video file satisfactory to the user and reduces the time consumed.
Optionally, in the embodiment of the present application, the processor 110 is specifically configured to divide the third video frame into at least two sub-areas according to an object type of the video object in the third video frame; or, in case that a sliding input of the user to the third video frame is received, dividing the third video frame into at least two sub-areas based on a sliding track of the sliding input.
Therefore, the electronic device can divide the third video frame into the region where the static object is located and the region where the dynamic object is located, so that the user can set the image fusion modes of different sub-regions according to the requirements, and the flexibility of setting the image fusion modes of the video frame can be improved.
Therefore, the electronic device can divide the third video frame into at least two sub-areas according to the sliding input of the user on the third video frame, so that the user can set the image fusion modes of different sub-areas according to the requirements, and the flexibility of setting the image fusion modes of the video frame can be improved.
Optionally, in this embodiment of the present application, the video processing interface further includes a first control and a second control; the first control is for selecting video frames in a first video stream and the second control is for selecting video frames in a second video stream.
The user input unit 107 is specifically configured to receive a first input of a first control and a second control by a user.
The processor 110 is further configured to determine a first start video frame and a first end video frame in the first video stream and a second start video frame and a second end video frame in the second video stream according to the first input; determining N first video frames between the first starting video frame and the first ending video frame from the first video stream; and determining M second video frames from the second video stream between the second start video frame and the second end video frame.
Therefore, the user can operate the first control and the second control so that the electronic device can determine the N first video frames and the M second video frames directly, without multiple inputs. This simplifies user operation in the process of shooting a video file satisfactory to the user and reduces the time consumed.
Optionally, in an embodiment of the present application, the first input includes a user input of an intermediate video frame in the first video stream or in the second video stream.
The processor 110 is specifically configured to perform image fusion on all video frames except the intermediate video frame in the N first video frames and the M second video frames, and insert the intermediate video frame into the video frames obtained by the image fusion.
Therefore, the electronic device can perform image fusion on a part of video frames in the N first video frames and a part of video frames in the M second video frames, and insert another part of video frames into the video frames obtained by image fusion, namely, the video frames can be processed by adopting different video processing modes, so that the diversity of special effects of the target video obtained by shooting can be improved.
Optionally, in the embodiment of the present application, the processor 110 is further configured to extract an ith first sub-image from an ith first video frame in the N first video frames, where i is a positive integer, and to extract an ith second sub-image from an ith second video frame in the M second video frames, the ith second sub-image matching the image content of the ith first sub-image.
The processor 110 is specifically configured to perform image fusion on the ith first sub-image and the ith second sub-image to obtain the target video.
Therefore, the electronic device can extract, from the ith first video frame and the ith second video frame, the ith first sub-image and the ith second sub-image whose image contents match, and perform image fusion on them, which improves the diversity of the special effects of the target video obtained by shooting.
Optionally, in the embodiment of the present application, the processor 110 is further configured to record the first video stream and the second video stream sequentially through one camera; or, the first video stream and the second video stream are recorded in parallel through two cameras.
Therefore, the electronic device can record the first video stream and the second video stream respectively through different recording modes so as to obtain the first video stream and the second video stream with different effects, and therefore the diversity of special effects of the target video obtained through shooting can be improved.
Optionally, in the embodiment of the present application, the display unit 106 is specifically configured to display, in a thumbnail form, video frames in the first video stream and the second video stream in the video processing interface.
The user input unit 107 is further configured to receive a third input from a user on a first thumbnail, where the first thumbnail is a thumbnail of any video frame in a target video stream, and the target video stream is: a first video stream or a second video stream.
The display unit 106 is further configured to display, in response to the third input, a first video frame corresponding to the first thumbnail in the video processing interface.
Therefore, the electronic device can display the thumbnails of the video frames of the first video stream and the second video stream, and display any video frame in response to the user's input on its thumbnail, so that the user can quickly check whether that video frame is the one the user wants.
It should be appreciated that in embodiments of the present application, the input unit 104 may include a graphics processor (graphics processing unit, GPU) 1041 and a microphone 1042, the graphics processor 1041 processing image data of still pictures or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The display unit 106 may include a display panel 1061, and the display panel 1061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 107 includes a touch panel 1071 and other input devices 1072. The touch panel 1071 is also referred to as a touch screen. The touch panel 1071 may include two parts of a touch detection device and a touch controller. Other input devices 1072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and so forth, which are not described in detail herein. Memory 109 may be used to store software programs as well as various data including, but not limited to, application programs and an operating system. The processor 110 may integrate an application processor that primarily handles operating systems, user interfaces, applications, etc., with a modem processor that primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 110.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored, and when the program or the instruction is executed by a processor, the program or the instruction realizes each process of the embodiment of the video processing method, and the same technical effect can be achieved, so that repetition is avoided, and no redundant description is provided herein.
Wherein the processor is a processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium such as a read-only memory (ROM), a random access memory (random access memory, RAM), a magnetic disk or an optical disk, and the like.
The embodiment of the application further provides a chip, the chip includes a processor and a communication interface, the communication interface is coupled with the processor, and the processor is used for running a program or an instruction, so as to implement each process of the embodiment of the video processing method, and achieve the same technical effect, so that repetition is avoided, and no redundant description is provided here.
It should be understood that the chips referred to in the embodiments of the present application may also be referred to as system-on-chip chips, chip systems, or system-on-chip chips, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Furthermore, it should be noted that the scope of the methods and apparatus in the embodiments of the present application is not limited to performing the functions in the order shown or discussed, but may also include performing the functions in a substantially simultaneous manner or in an opposite order depending on the functions involved, e.g., the described methods may be performed in an order different from that described, and various steps may also be added, omitted, or combined. Additionally, features described with reference to certain examples may be combined in other examples.
From the above description of the embodiments, it will be clear to those skilled in the art that the above-described embodiment method may be implemented by means of software plus a necessary general hardware platform, but of course may also be implemented by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solutions of the present application may be embodied essentially or in a part contributing to the prior art in the form of a computer software product stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk), comprising several instructions for causing a terminal (which may be a mobile phone, a computer, a server, or a network device, etc.) to perform the methods described in the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above-described embodiments, which are merely illustrative and not restrictive, and many forms may be made by those of ordinary skill in the art without departing from the spirit of the present application and the scope of the claims, which are also within the protection of the present application.
Claims (7)
1. A method of video processing, the method comprising:
displaying the first video stream and the second video stream;
receiving a first input of a user, wherein the first input is used for selecting video frames for video processing, the video processing comprises image fusion, or the video processing comprises image fusion and video frame insertion;
responding to the first input, and carrying out image fusion on N first video frames and M second video frames selected by the first input to obtain a target video;
wherein the N first video frames are video frames in the first video stream, the M second video frames are video frames in the second video stream, and N, M are positive integers;
before the image fusion is performed on the N first video frames and the M second video frames selected by the first input, the method further includes:
the ith first sub-image is extracted from the ith first video frame in the N first video frames, wherein i is a positive integer;
the ith second sub-image is extracted from the ith second video frame in the M second video frames, and the ith second sub-image matches the image content of the ith first sub-image;
The image fusion of the N first video frames and the M second video frames selected by the first input includes:
and carrying out image fusion on the ith first sub-image and the ith second sub-image, wherein the ith first sub-image and the ith second sub-image are the areas where the two video frames overlap.
2. The method of claim 1, wherein the video processing interface further comprises a first control and a second control; the first control is used for selecting video frames in the first video stream, and the second control is used for selecting video frames in the second video stream;
the receiving a first input from a user includes:
receiving first input of a user to the first control and the second control;
before the image fusion is performed on the N first video frames and the M second video frames selected by the first input, the method further includes:
determining a first starting video frame and a first ending video frame in the first video stream and a second starting video frame and a second ending video frame in the second video stream according to the first input;
determining, from the first video stream, the N first video frames between the first starting video frame and the first ending video frame;
From the second video stream, the M second video frames between the second starting video frame and the second ending video frame are determined.
3. The method of claim 1, wherein the first input comprises user input of an intermediate video frame in the first video stream or in a second video stream;
the image fusion is performed on the N first video frames and the M second video frames selected by the first input to obtain a target video, including:
and performing image fusion on all video frames except the intermediate video frames in the N first video frames and the M second video frames, and inserting the intermediate video frames into the video frames obtained by image fusion.
4. The method of claim 1, wherein prior to said displaying the first video stream and the second video stream, the method further comprises:
sequentially recording the first video stream and the second video stream through a camera;
or, the first video stream and the second video stream are recorded in parallel through two cameras.
5. The method of claim 1, wherein displaying the first video stream and the second video stream comprises:
Displaying video frames in the first video stream and the second video stream respectively in the form of thumbnails in a video processing interface;
after the displaying the first video stream and the second video stream, the method further comprises:
receiving a third input of a user to a first thumbnail, wherein the first thumbnail is a thumbnail of any video frame in a target video stream, and the target video stream is: the first video stream or the second video stream;
and responding to the third input, and displaying a first video frame corresponding to the first thumbnail in the video processing interface.
6. A video processing apparatus, the video processing apparatus comprising:
the display module is used for displaying the first video stream and the second video stream;
the receiving module is used for receiving a first input of a user, wherein the first input is used for selecting video frames for video processing, and the video processing comprises image fusion, or the video processing comprises image fusion and video frame insertion;
the processing module is used for responding to the first input received by the receiving module, and carrying out image fusion on N first video frames and M second video frames selected by the first input to obtain a target video;
Wherein the N first video frames are video frames in the first video stream, the M second video frames are video frames in the second video stream, and N, M are positive integers;
the processing module is further configured to extract an ith first sub-image from an ith first video frame in the N first video frames, where i is a positive integer, and to extract an ith second sub-image from an ith second video frame in the M second video frames, the ith second sub-image matching the image content of the ith first sub-image;
the processing module is specifically configured to perform image fusion on the ith first sub-image and the ith second sub-image, where the ith first sub-image and the ith second sub-image are the areas where the two video frames overlap.
7. An electronic device comprising a processor, a memory and a program or instruction stored on the memory and executable on the processor, which when executed by the processor, implements the steps of the video processing method of any one of claims 1 to 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210109250.3A CN114520875B (en) | 2022-01-28 | 2022-01-28 | Video processing method and device and electronic equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114520875A CN114520875A (en) | 2022-05-20 |
CN114520875B true CN114520875B (en) | 2024-04-02 |
Family
ID=81596623
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210109250.3A Active CN114520875B (en) | 2022-01-28 | 2022-01-28 | Video processing method and device and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114520875B (en) |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4973756B2 (en) * | 2010-03-30 | 2012-07-11 | カシオ計算機株式会社 | Image processing apparatus and program |
US9519984B2 (en) * | 2013-03-29 | 2016-12-13 | Rakuten, Inc. | Image processing device, image processing method, information storage medium, and program |
US9979921B2 (en) * | 2016-09-02 | 2018-05-22 | Russell Holmes | Systems and methods for providing real-time composite video from multiple source devices |
JP6543313B2 (en) * | 2017-10-02 | 2019-07-10 | 株式会社エイチアイ | Image generation record display device and program for mobile object |
2022-01-28: CN application CN202210109250.3A, patent CN114520875B, status Active
Patent Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2005244536A (en) * | 2004-02-26 | 2005-09-08 | Seiko Epson Corp | Image composition for generating composite image by overlapping image |
CN109104588A (en) * | 2018-07-24 | 2018-12-28 | 房梦琦 | A kind of video monitoring method, equipment, terminal and computer storage medium |
JP2018198463A (en) * | 2018-09-11 | 2018-12-13 | 日本電信電話株式会社 | Image processing device, image processing method, and computer program |
WO2020135055A1 (en) * | 2018-12-28 | 2020-07-02 | 广州市百果园信息技术有限公司 | Method, device and apparatus for adding video special effects and storage medium
JP2020154428A (en) * | 2019-03-18 | 2020-09-24 | 株式会社リコー | Image processing apparatus, image processing method, image processing program, electronic device and photographing apparatus |
CN112135049A (en) * | 2020-09-24 | 2020-12-25 | 维沃移动通信有限公司 | Image processing method and device and electronic equipment |
CN112995500A (en) * | 2020-12-30 | 2021-06-18 | 维沃移动通信(杭州)有限公司 | Shooting method, shooting device, electronic equipment and medium |
CN113810624A (en) * | 2021-09-18 | 2021-12-17 | 维沃移动通信有限公司 | Video generation method and device and electronic equipment |
CN113905175A (en) * | 2021-09-27 | 2022-01-07 | 维沃移动通信有限公司 | Video generation method and device, electronic equipment and readable storage medium |
Non-Patent Citations (2)
Title |
---|
Matching of video objects taken from different camera views by using multi-feature fusion and evolutionary learning methods; S. R. Kharabe et al.; IEEE; 2016-10-31; full text *
Application of a regional image fusion algorithm in infrared image analysis; Zhang Yong; Chen Dajian; Electro-Optic Technology Application; 2011-06-15 (03); full text *
Also Published As
Publication number | Publication date |
---|---|
CN114520875A (en) | 2022-05-20 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113093968B (en) | Shooting interface display method and device, electronic equipment and medium | |
CN112954210B (en) | Photographing method and device, electronic equipment and medium | |
CN112135046B (en) | Video shooting method, video shooting device and electronic equipment | |
CN112492209B (en) | Shooting method, shooting device and electronic equipment | |
CN111756995A (en) | Image processing method and device | |
CN112738402B (en) | Shooting method, shooting device, electronic equipment and medium | |
CN111857512A (en) | Image editing method and device and electronic equipment | |
CN112911147B (en) | Display control method, display control device and electronic equipment | |
CN113794829B (en) | Shooting method and device and electronic equipment | |
CN113794834B (en) | Image processing method and device and electronic equipment | |
CN112437232A (en) | Shooting method, shooting device, electronic equipment and readable storage medium | |
CN112672061A (en) | Video shooting method and device, electronic equipment and medium | |
CN113923350A (en) | Video shooting method and device, electronic equipment and readable storage medium | |
CN114302009A (en) | Video processing method, video processing device, electronic equipment and medium | |
CN113794831B (en) | Video shooting method, device, electronic equipment and medium | |
CN113194256B (en) | Shooting method, shooting device, electronic equipment and storage medium | |
CN113207038B (en) | Video processing method, video processing device and electronic equipment | |
CN112162805B (en) | Screenshot method and device and electronic equipment | |
CN111586305B (en) | Anti-shake method, anti-shake device and electronic equipment | |
CN112684963A (en) | Screenshot method and device and electronic equipment | |
CN112383709A (en) | Picture display method, device and equipment | |
CN114520875B (en) | Video processing method and device and electronic equipment | |
WO2022247766A1 (en) | Image processing method and apparatus, and electronic device | |
CN114143455B (en) | Shooting method and device and electronic equipment | |
CN112367467B (en) | Display control method, display control device, electronic apparatus, and medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||