CN115484423A - Transition special effect adding method and electronic equipment - Google Patents

Transition special effect adding method and electronic equipment

Info

Publication number
CN115484423A
CN115484423A (application No. CN202210056663.XA)
Authority
CN
China
Prior art keywords
interface
frame
video
transition
video data
Prior art date
Legal status
Pending
Application number
CN202210056663.XA
Other languages
Chinese (zh)
Inventor
牛思月 (Niu Siyue)
Current Assignee
Honor Device Co Ltd
Original Assignee
Honor Device Co Ltd
Priority date
Filing date
Publication date
Application filed by Honor Device Co Ltd
Publication of CN115484423A

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00 Details of television systems
    • H04N 5/76 Television signal recording
    • H04N 5/91 Television signal processing therefor
    • H04N 5/915 Television signal processing therefor for field- or frame-skip recording or reproducing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformation in the plane of the image
    • G06T 3/40 Scaling the whole image or part thereof
    • G06T 3/4038 Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N 21/44016 Processing of video elementary streams involving splicing one content stream with another content stream, e.g. for substituting a video clip
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N 21/83 Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N 21/845 Structuring of content, e.g. decomposing content into time segments
    • H04N 21/8456 Structuring of content by decomposing the content in the time domain, e.g. in time segments

Abstract

The present application provides a transition special effect adding method and an electronic device, relating to the field of terminal technologies, and addresses the low human-computer interaction efficiency of adding transition special effects. The specific scheme is as follows: displaying a first interface; receiving a first operation by a user on a first thumbnail; displaying a second interface in response to the first operation, where the second interface is a video editing interface for first video data; after receiving a second operation by the user on a one-key large-film control, displaying a third interface used to display second video data, where the second video data includes video frames of the first video data and a first transition special effect; when the first transition special effect is any one of a superposition transition, a fuzzy transition, or a melting transition, the second video data further includes a first freeze frame and a second freeze frame; the first freeze frame overlaps the second video frame, the second freeze frame overlaps the first video frame, and the first transition special effect is superimposed on the first video frame and the second video frame.

Description

Transition special effect adding method and electronic equipment
The present application claims priority to the Chinese patent application entitled "A method for user video creation based on story line mode and electronic device", filed with the China National Intellectual Property Administration on June 16, 2021, with application number 2021106709.3, the entire contents of which are incorporated herein by reference.
The present application further claims priority to the Chinese patent application entitled "A transition special effect adding method and electronic device", filed with the China National Intellectual Property Administration on November 29, 2021, with application number 202111434131.7, the entire contents of which are incorporated herein by reference.
Technical Field
The application relates to the technical field of terminals, in particular to a transition special effect adding method and electronic equipment.
Background
With the development of electronic technology, electronic devices such as mobile phones and tablet computers are generally configured with a plurality of cameras, such as a front camera, a rear camera, a wide-angle camera, and the like. The plurality of cameras facilitate shooting of video works by a user through the electronic equipment.
After the user finishes shooting a video with the electronic device, the video can be edited by adding special effects, configuring music, and the like, to obtain a video work of higher ornamental value. At present, manually editing videos still suffers from low human-computer interaction efficiency.
Disclosure of Invention
The embodiment of the application provides a transition special effect adding method and electronic equipment, which are used for improving the human-computer interaction efficiency of video editing.
To achieve the above objective, the following technical solutions are adopted in the present application:
In a first aspect, a transition special effect adding method provided in an embodiment of the present application is applied to an electronic device, and the method includes: the electronic device displays a first interface, where the first interface includes a first thumbnail of first video data; the electronic device receives a first operation by a user on the first thumbnail; the electronic device displays a second interface in response to the first operation, where the second interface is a video editing interface for the first video data and includes a one-key large-film control; after receiving a second operation by the user on the one-key large-film control, the electronic device displays a third interface, where the third interface is used to display second video data; the second video data includes video frames of the first video data and a first transition special effect; when the first transition special effect is any one of a superposition transition, a fuzzy transition, or a melting transition, the second video data further includes a first freeze frame and a second freeze frame; the first freeze frame is the same as a first video frame, the second freeze frame is the same as a second video frame, and the first video frame and the second video frame are two adjacent frames in the first video data; the first freeze frame overlaps the second video frame, the second freeze frame overlaps the first video frame, and the first transition special effect is superimposed over the first video frame and the second video frame.
That is, for saved video data such as the first video data, the electronic device may, in response to a user operation on the one-key large-film control, automatically edit and process the first video data to obtain the second video data. Processing the first video data involves adding a transition special effect, for example adding the first transition special effect at a first time point of the first video data.
When the first transition special effect is a superposition transition, a fuzzy transition, or a melting transition, the electronic device keeps the created second video data the same length as the first video data by inserting freeze frames before and after the first time point. This avoids shortening the video duration after such a transition is added, reduces the need for manual adjustment by the user, and improves the human-computer interaction efficiency of creating video data.
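The duration-preserving behavior described above can be sketched as follows. This is an illustrative Python model, not the claimed implementation: a video is a list of frame labels, `blend(x,y)` is a placeholder for pixel-level blending, and `overlap` is the number of frames the transition spans.

```python
def splice_with_transition(a, b, overlap):
    """Join segments a and b with an overlapping transition.

    Returns (naive, fixed): the naive splice shortens the result by
    `overlap` frames, while the freeze-frame scheme keeps full length.
    """
    # Naive overlapping transition: a's tail blends with b's head,
    # so the output loses `overlap` frames of duration.
    naive = (a[:-overlap]
             + [f"blend({x},{y})" for x, y in zip(a[-overlap:], b[:overlap])]
             + b[overlap:])
    # Freeze-frame scheme: blend a's tail against a freeze of b's first
    # frame, and a freeze of a's last frame against b's head, so no
    # original frames are consumed by the transition.
    fixed = (a[:-overlap]
             + [f"blend({x},{b[0]})" for x in a[-overlap:]]
             + [f"blend({a[-1]},{y})" for y in b[:overlap]]
             + b[overlap:])
    return naive, fixed
```

With two 10-frame segments and `overlap = 3`, the naive splice yields 17 frames while the freeze-frame splice keeps all 20, which is the effect attributed above to inserting freeze frames around the transition point.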
In some possible embodiments, before the electronic device displays the first interface, the method further includes: the electronic device displays a fourth interface, where the fourth interface is a viewfinder preview interface provided by a camera application and includes a first control for indicating to start shooting a video; the electronic device receives a third operation by a user on the first control; the electronic device displays a fifth interface and starts recording the first video data in response to the third operation, where the fifth interface is a video recording interface and includes a second control indicating that shooting is paused; when the first time point of the first video data is recorded, the electronic device displays a sixth interface in response to an operation by the user on the second control, where the sixth interface is an interface for pausing shooting and includes a third control for indicating to continue shooting; the electronic device receives a fourth operation by the user on the third control; the electronic device displays the fifth interface again in response to the fourth operation, and determines the first video frame and the second video frame located before and after the first time point in the first video data, where the fifth interface includes a fourth control indicating to stop shooting; the electronic device receives a fifth operation by the user on the fourth control; and that the electronic device displays the first interface includes: the electronic device displays the first interface in response to the fifth operation, where the first interface is also a viewfinder preview interface provided by the camera application.
In the above embodiment, the first video data may be a video shot by the electronic device in the normal mode. During shooting, when the electronic device receives an operation indicating that shooting is paused at the first time point, it pauses recording of the first video data, and the user can adjust the viewfinder picture during the pause. When later editing the first video data, the electronic device can therefore add the first transition special effect at the first time point to join the video segments collected before and after the pause. This improves the watchability of the created second video data, automates the processing, and improves human-computer interaction efficiency.
In some possible embodiments, the fifth interface is a video recording interface in a first lens mode, and the fifth interface includes a fifth control that instructs switching of the lens mode; before the electronic device receives the fifth operation, the method further includes: when the second time point of the first video data is recorded, the electronic device freezes a third video frame in response to a sixth operation by the user on the fifth control, to obtain multiple frames of first substitute frames, where the third video frame is the last video frame collected before the sixth operation is received, and the sixth operation instructs switching to a second lens mode; after the electronic device displays a seventh interface, the freeze of the third video frame is cancelled, where the seventh interface is a video recording interface in the second lens mode; and after the electronic device receives the fifth operation, the method further includes: the electronic device superimposes a second transition special effect on the first substitute frames.
In the above embodiment, the first video data may be a video shot by the electronic device in the normal mode. During shooting, when the electronic device receives an operation indicating a lens mode switch at the second time point, it can switch the lens mode directly, so that video frames are collected with different framing modes. In addition, the freeze frames mitigate the impact of the lens mode switch on video frame quality, improving the watchability of the second video data and the human-computer interaction efficiency of video editing.
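A minimal sketch of the freeze behavior during a lens mode switch (hypothetical Python model; the class and method names are illustrative, not from the patent): while the switch is in progress, the recorder repeats the last captured frame as first substitute frames, and the freeze is cancelled once the new mode's recording interface is displayed.

```python
class FreezeRecorder:
    """Toy recorder that freezes the last frame while the lens mode switches."""

    def __init__(self):
        self.frames = []
        self.frozen = None  # last good frame, held during the mode switch

    def capture(self, frame):
        # During the switch the camera pipeline yields unusable frames;
        # record a copy of the frozen frame (a "first substitute frame").
        self.frames.append(frame if self.frozen is None else self.frozen)

    def begin_mode_switch(self):
        # Freeze the third video frame: the last frame collected before
        # the switch operation was received.
        self.frozen = self.frames[-1]

    def end_mode_switch(self):
        # Cancel the freeze after the new mode's recording interface shows.
        self.frozen = None
```

Capturing frames 1–3, switching modes across two capture ticks, then capturing frame 6 yields [1, 2, 3, 3, 3, 6]: the two repeated 3s are the substitute frames onto which the second transition special effect is later superimposed.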
In some possible embodiments, the second video data further includes: second substitute frames, first music, and a third transition special effect; the third transition special effect corresponds to the first music; the third transition special effect is superimposed on the second substitute frames, and the second substitute frames are used to replace the first substitute frames in the first video data.
In the above embodiment, while editing the first video data, the second transition special effect added to the first video data may be replaced with a third transition special effect that matches the first music, so that the music in the resulting second video data is adapted to the transition special effect. This improves the watchability of the second video data and the human-computer interaction efficiency of editing the video.
In some possible embodiments, before the electronic device displays the third interface, the method further includes: the electronic device deletes the first substitute frames, where the first substitute frames are located between the third video frame and a fourth video frame, and the fourth video frame is the first video frame collected after the electronic device displays the seventh interface; the electronic device determines multiple frames of second substitute frames, where the second substitute frames include frames identical to the third video frame and frames identical to the fourth video frame, and the number of second substitute frames is not less than the number of first substitute frames; and the electronic device superimposes the third transition special effect on the second substitute frames to obtain the second video data.
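The substitution step can be sketched as follows (illustrative Python; the function and parameter names are assumptions, not the patent's): the first substitute frames are dropped, an at-least-equal number of second substitute frames is built from copies of the bounding third and fourth video frames, and the music-matched third transition special effect is overlaid on them.

```python
def replace_substitute_frames(video, start, count, transition):
    """Replace `count` first-substitute frames at index `start` with
    second-substitute frames carrying the music-matched transition."""
    third = video[start - 1]       # last frame captured before the switch
    fourth = video[start + count]  # first frame captured after the switch
    n = max(count, 1)              # not fewer second substitutes than first
    half = n // 2
    # Copies of the third video frame, then copies of the fourth.
    second = [third] * half + [fourth] * (n - half)
    second = [transition(f) for f in second]  # overlay the transition effect
    return video[:start] + second + video[start + count:]
```

For a 10-frame video whose substitutes sit at indices 4–6, the result keeps 10 frames, with transition-bearing copies of the bounding frames in place of the substitutes.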
In some possible embodiments, before the electronic device displays the first interface, the method further includes: the electronic device displays a main interface, where the main interface includes an icon of a gallery application; the electronic device receives a seventh operation by the user on the icon of the gallery application; and that the electronic device displays the first interface includes: the electronic device displays the first interface in response to the seventh operation, where the first interface is an application interface provided by the gallery application.
In some possible embodiments, before the electronic device displays the third interface, the method further includes: the electronic device determines, in response to the second operation, a first effect template from a plurality of preconfigured effect templates, where the first effect template includes first music; the electronic device determines the first transition special effect corresponding to the first music; when the first transition special effect is any one of a superposition transition, a fuzzy transition, or a melting transition, the electronic device overlaps the first freeze frame on the second video frame and overlays the second freeze frame below the first video frame; and the electronic device superimposes the first transition special effect on the first video frame and the second video frame.
In some possible embodiments, the first effect template corresponds to a first style; determining the first effect template from a plurality of preconfigured effect templates includes: the electronic device determines, using a preset artificial intelligence model, that the first video data matches the first style, and determines the first effect template from the effect templates belonging to the first style; or, the electronic device randomly determines the first effect template from the plurality of preconfigured effect templates.
In the above embodiment, by randomly determining the first effect template, the electronic device can produce varied second video data, avoiding uniform video works and improving the human-computer interaction efficiency of producing videos.
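The two selection paths above (style matching via an AI model, or a random pick) can be sketched as follows. This is hypothetical Python; `style_model` and the template fields are illustrative stand-ins, not part of the patent.

```python
import random

def pick_effect_template(video, templates, style_model=None):
    """Select the first effect template, by style match or at random."""
    if style_model is not None:
        # Path 1: classify the video's style with the preset AI model,
        # then choose among templates belonging to that style
        # (fall back to all templates if none match).
        style = style_model(video)
        candidates = [t for t in templates if t["style"] == style] or templates
    else:
        # Path 2: uniformly random choice over all preconfigured templates.
        candidates = templates
    return random.choice(candidates)
```

Either path returns one template; the random path is what keeps the produced videos from all looking alike.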
In some possible embodiments, the electronic device determining the first transition special effect corresponding to the first music includes: the electronic device determines, from a plurality of preset transition special effects, a first transition special effect that carries an identifier associated with the first music; or, the electronic device determines the first transition special effect from the plurality of preset transition special effects based on matching weights, where each preset transition special effect corresponds to one matching weight, and the matching weight is a quantized parameter of the degree of adaptation between the first music and the preset transition special effect.
In the above embodiment, while ensuring that the transition special effect in the second video data is adapted to the music, the randomness of the selected transition special effect is increased, improving the watchability of the produced video work.
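Both selection strategies for the transition can be sketched as follows (illustrative Python; the `music_id` and `weight` fields are assumed stand-ins for the associated identifier and the matching weight):

```python
import random

def pick_transition(music, transitions):
    """Pick the first transition special effect for the given music."""
    # Strategy 1: prefer a transition whose associated identifier
    # names this music track.
    tagged = [t for t in transitions if t.get("music_id") == music]
    if tagged:
        return random.choice(tagged)
    # Strategy 2: weighted random draw, where each matching weight
    # quantifies how well the transition fits the music.
    weights = [t["weight"] for t in transitions]
    return random.choices(transitions, weights=weights, k=1)[0]
```

The weighted draw keeps music-appropriate transitions likely while still allowing variety across runs.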
In a second aspect, a transition special effect adding method provided by an embodiment of the present application is applied to an electronic device, and the method includes: the electronic device displays an eighth interface, where the eighth interface includes a first identifier indicating a first effect template, and the first effect template includes first music and a first transition special effect corresponding to the first music; the electronic device displays a ninth interface in response to an operation by the user on the first identifier, where the ninth interface is a viewfinder preview interface corresponding to the first effect template and includes a sixth control for indicating the start of shooting; the electronic device displays a tenth interface and starts recording third video data in response to an operation by the user on the sixth control, where the tenth interface is a video recording interface corresponding to the first effect template and includes a second control indicating that shooting is paused; when the first time point of the third video data is recorded, the electronic device displays an eleventh interface in response to an operation by the user on the second control, where the eleventh interface is an interface for pausing shooting and includes a third control indicating to continue shooting; the electronic device displays the tenth interface again in response to an operation by the user on the third control, and determines a fifth video frame and a sixth video frame located before and after the first time point in the third video data; after the third video data is recorded, a twelfth interface is displayed, where the twelfth interface is used to display fourth video data; the fourth video data includes video frames of the third video data, the first music, and the first transition special effect; when the first transition special effect is any one of a superposition transition, a fuzzy transition, or a melting transition, the fourth video data further includes a third freeze frame and a fourth freeze frame; the third freeze frame is the same as the fifth video frame, the fourth freeze frame is the same as the sixth video frame, the third freeze frame overlaps the sixth video frame, the fourth freeze frame overlaps the fifth video frame, and the first transition special effect is superimposed on the fifth video frame and the sixth video frame.
In a third aspect, an electronic device provided in an embodiment of the present application includes one or more processors and a memory; the memory is coupled to the processors and stores computer program code, the computer program code including computer instructions that, when executed by the one or more processors, cause the one or more processors to perform the following operations:
displaying a first interface, where the first interface includes a first thumbnail of first video data; receiving a first operation by a user on the first thumbnail; displaying a second interface in response to the first operation, where the second interface is a video editing interface for the first video data and includes a one-key large-film control; after receiving a second operation by the user on the one-key large-film control, displaying a third interface used to display second video data; the second video data includes video frames of the first video data and a first transition special effect; when the first transition special effect is any one of a superposition transition, a fuzzy transition, or a melting transition, the second video data further includes a first freeze frame and a second freeze frame; the first freeze frame is the same as a first video frame, the second freeze frame is the same as a second video frame, and the first video frame and the second video frame are two adjacent frames in the first video data; the first freeze frame overlaps the second video frame, the second freeze frame overlaps the first video frame, and the first transition special effect is superimposed over the first video frame and the second video frame.
In some possible embodiments, the one or more processors are further configured to perform: displaying a fourth interface, where the fourth interface is a viewfinder preview interface provided by a camera application and includes a first control for indicating to start shooting a video; receiving a third operation by a user on the first control; displaying a fifth interface and starting to record the first video data in response to the third operation, where the fifth interface is a video recording interface and includes a second control indicating that shooting is paused; when the first time point of the first video data is recorded, displaying a sixth interface in response to an operation by the user on the second control, where the sixth interface is an interface for pausing shooting and includes a third control for indicating to continue shooting; receiving a fourth operation by the user on the third control; displaying the fifth interface again in response to the fourth operation, and determining the first video frame and the second video frame located before and after the first time point in the first video data, where the fifth interface includes a fourth control indicating to stop shooting; receiving a fifth operation by the user on the fourth control; and displaying the first interface in response to the fifth operation, where the first interface is also a viewfinder preview interface provided by the camera application.
In some possible embodiments, the fifth interface is a video recording interface in a first lens mode, the fifth interface including a fifth control to instruct switching of lens modes; the one or more processors further to:
when the second time point of the first video data is recorded, freezing a third video frame in response to a sixth operation by the user on the fifth control, to obtain multiple frames of first substitute frames, where the third video frame is the last video frame collected before the sixth operation is received, and the sixth operation instructs switching to a second lens mode; after displaying a seventh interface, cancelling the freeze of the third video frame, where the seventh interface is a video recording interface in the second lens mode; and superimposing a second transition special effect on the first substitute frames.
In some possible embodiments, the second video data further comprises: a second substitute frame, a first music and a third transition special effect; the third transition special effect corresponds to the first music; the third transition special effect is superimposed on the second replacement frame, the second replacement frame being used to replace the first replacement frame in the first video data.
In some possible embodiments, the one or more processors are further configured to perform: deleting the first substitute frames, where the first substitute frames are located between the third video frame and a fourth video frame, and the fourth video frame is the first video frame collected after the electronic device displays the seventh interface; determining multiple frames of second substitute frames, where the second substitute frames include frames identical to the third video frame and frames identical to the fourth video frame, and the number of second substitute frames is not less than the number of first substitute frames; and superimposing the third transition special effect on the second substitute frames to obtain the second video data.
In some possible embodiments, the one or more processors are further configured to:
displaying a main interface; the main interface comprises an icon of a gallery application; receiving a seventh operation of the icon of the gallery application by the user; and responding to the seventh operation, and displaying the first interface, wherein the first interface is an application interface provided by the gallery application.
In some possible embodiments, the one or more processors are further configured to:
determining, in response to the second operation, a first effect template from a plurality of preconfigured effect templates, where the first effect template includes first music; determining the first transition special effect corresponding to the first music; when the first transition special effect is any one of a superposition transition, a fuzzy transition, or a melting transition, overlapping the first freeze frame on the second video frame and overlaying the second freeze frame below the first video frame; and superimposing the first transition special effect on the first video frame and the second video frame.
In some possible embodiments, the first effect template corresponds to a first style; the one or more processors further to:
determining that the first video data matches the first style using a preset artificial intelligence model; determining the first effect template from the effect templates belonging to the first style; or, randomly determining the first effect template from a plurality of preset effect templates.
In some possible embodiments, the one or more processors are further configured to perform: determining, from a plurality of preset transition special effects, the first transition special effect that carries an identifier associated with the first music; or, determining the first transition special effect from the plurality of preset transition special effects based on matching weights, where each preset transition special effect corresponds to one matching weight, and the matching weight is a quantized parameter of the degree of adaptation between the first music and the preset transition special effect.
In a fourth aspect, an embodiment of the present application provides an electronic device, which includes one or more processors and a memory; the memory is coupled to the processors and stores computer program code, the computer program code including computer instructions that, when executed by the one or more processors, cause the one or more processors to perform the following operations: displaying an eighth interface, where the eighth interface includes a first identifier indicating a first effect template, and the first effect template includes first music and a first transition special effect corresponding to the first music; displaying a ninth interface in response to an operation by the user on the first identifier, where the ninth interface is a viewfinder preview interface corresponding to the first effect template and includes a sixth control for indicating the start of shooting; displaying a tenth interface and starting to record third video data in response to an operation by the user on the sixth control, where the tenth interface is a video recording interface corresponding to the first effect template and includes a second control indicating that shooting is paused; when the first time point of the third video data is recorded, displaying an eleventh interface in response to an operation by the user on the second control, where the eleventh interface is an interface for pausing shooting and includes a third control indicating to continue shooting; displaying the tenth interface again in response to an operation by the user on the third control, and determining a fifth video frame and a sixth video frame located before and after the first time point in the third video data; and displaying a twelfth interface after the third video data is recorded, where the twelfth interface is used to display fourth video data; the fourth video data includes video frames of the third video data, the first music, and the first transition special effect; when the first transition special effect is any one of a superposition transition, a fuzzy transition, or a melting transition, the fourth video data further includes a third freeze frame and a fourth freeze frame; the third freeze frame is the same as the fifth video frame, the fourth freeze frame is the same as the sixth video frame, the third freeze frame overlaps the sixth video frame, the fourth freeze frame overlaps the fifth video frame, and the first transition special effect is superimposed on the fifth video frame and the sixth video frame.
In a fifth aspect, a computer storage medium provided in an embodiment of the present application includes computer instructions that, when run on an electronic device, cause the electronic device to perform the method described in the first aspect, the second aspect, and the possible embodiments thereof.
In a sixth aspect, the present application provides a computer program product, which, when run on the above electronic device, causes the electronic device to perform the methods described in the above first aspect, second aspect and possible embodiments thereof.
It can be understood that the electronic device, the computer storage medium, and the computer program product provided in the foregoing aspects are all applied to the corresponding methods provided above. Therefore, for the beneficial effects they can achieve, reference may be made to the beneficial effects of the corresponding methods, which are not described herein again.
Drawings
Fig. 1 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure;
FIG. 2 is an exemplary diagram of a display interface provided by an embodiment of the application;
FIG. 3 is a second exemplary diagram of a display interface provided in an embodiment of the present application;
fig. 4 conceptually illustrates an exemplary diagram of the influence on the video data 1 after the lens mode is switched from the front-rear mode to the picture-in-picture mode;
fig. 5 conceptually illustrates an exemplary diagram of processing the video data 1 after the lens mode is switched from the front-rear mode to the picture-in-picture mode;
FIG. 6 is a third exemplary diagram of a display interface provided in an embodiment of the present application;
fig. 7A conceptually illustrates one of exemplary diagrams of processing the video data 1 in a scene in which the lens mode is switched from the front-rear mode to the rear-rear mode;
fig. 7B conceptually illustrates a second exemplary diagram of processing the video data 1 in a scene in which the lens mode is switched from the front-rear mode to the rear-rear mode;
FIG. 7C is a fourth illustration of an example display interface provided by an embodiment of the present application;
fig. 7D conceptually illustrates an exemplary diagram of an influence of a pause shooting event on the video data 1;
FIG. 8 is a fifth illustration of an example display interface provided by an embodiment of the present application;
FIG. 9 is a sixth illustration of an example display interface provided by an embodiment of the present application;
FIG. 10 is a seventh illustration of an example display interface provided by an embodiment of the present application;
fig. 11 is a flowchart illustrating steps of processing video data 1 by using an effect template according to an embodiment of the present application;
fig. 12 conceptually illustrates an exemplary diagram of replacing the transition special effect that was originally in the video data 1;
fig. 13 conceptually illustrates an exemplary diagram of adding an overlay transition in the video data 1;
FIG. 14 is an eighth illustration of an exemplary display interface provided by an embodiment of the present application;
FIG. 15 is a ninth illustration of an example display interface provided by an embodiment of the present application;
FIG. 16 is a tenth illustration of an example display interface provided by an embodiment of the present application;
fig. 17 is a schematic composition diagram of a chip system according to an embodiment of the present application.
Detailed Description
In the following, the terms "first", "second" are used for descriptive purposes only and are not to be understood as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the present embodiment, the meaning of "a plurality" is two or more unless otherwise specified.
Embodiments of the present embodiment will be described in detail below with reference to the accompanying drawings.
The embodiment of the application provides a transition special effect adding method, which can be applied to an electronic device with multiple cameras. With the method provided by the embodiment of the application, the electronic device can automatically process video data in response to user operations, such as adding transition special effects and configuring video music. This ensures that adding a transition special effect does not affect the video duration, avoids rework, and improves the human-computer interaction efficiency of video creation.
For example, the electronic device in the embodiment of the present application may be a mobile phone, a tablet computer, a smart watch, a desktop computer, a laptop, a handheld computer, a notebook, an ultra-mobile personal computer (UMPC), a netbook, a cellular phone, a personal digital assistant (PDA), an augmented reality (AR)/virtual reality (VR) device, or another device including multiple cameras; the embodiment of the present application does not particularly limit the specific form of the electronic device.
Embodiments of the present application will be described in detail below with reference to the accompanying drawings. Please refer to fig. 1, which is a schematic structural diagram of an electronic device 100 according to an embodiment of the present disclosure. As shown in fig. 1, the electronic device 100 may include: a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, a button 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a subscriber identity module (SIM) card interface 195, and the like.
The sensor module 180 may include a pressure sensor, a gyroscope sensor, an air pressure sensor, a magnetic sensor, an acceleration sensor, a distance sensor, a proximity light sensor, a fingerprint sensor, a temperature sensor, a touch sensor, an ambient light sensor, a bone conduction sensor, and the like.
It is to be understood that the illustrated structure of the present embodiment does not constitute a specific limitation to the electronic apparatus 100. In other embodiments, electronic device 100 may include more or fewer components than shown, or combine certain components, or split certain components, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processor 110 may include one or more processing units, such as: the processor 110 may include an Application Processor (AP), a modem processor, a Graphics Processor (GPU), an Image Signal Processor (ISP), a controller, a memory, a video codec, a Digital Signal Processor (DSP), a baseband processor, and/or a neural-Network Processing Unit (NPU), among others. The different processing units may be separate devices or may be integrated into one or more processors.
The controller may be a neural center and a command center of the electronic device 100. The controller can generate an operation control signal according to the instruction operation code and the timing signal to complete the control of instruction fetching and instruction execution.
A memory may also be provided in processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that have just been used or recycled by the processor 110. If the processor 110 needs to reuse the instruction or data, it can be called directly from the memory. Avoiding repeated accesses reduces the latency of the processor 110, thereby increasing the efficiency of the system.
In some embodiments, processor 110 may include one or more interfaces. The interface may include an integrated circuit (I2C) interface, an integrated circuit built-in audio (I2S) interface, a Pulse Code Modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a Mobile Industry Processor Interface (MIPI), a general-purpose input/output (GPIO) interface, a Subscriber Identity Module (SIM) interface, and/or a Universal Serial Bus (USB) interface, etc.
It should be understood that the interface connection relationship between the modules illustrated in the present embodiment is only an exemplary illustration, and does not limit the structure of the electronic device 100. In other embodiments, the electronic device 100 may also adopt different interface connection manners or a combination of multiple interface connection manners in the above embodiments.
The electronic device 100 implements display functions via the GPU, the display screen 194, and the application processor. The GPU is a microprocessor for image processing, connected to the display screen 194 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
The display screen 194 is used to display images, video, and the like. The display screen 194 includes a display panel. The display panel may adopt a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), and the like.
The electronic device 100 may implement a shooting function through the ISP, the camera 193, the video codec, the GPU, the display 194, the application processor, and the like.
The ISP is used to process the data fed back by the camera 193. For example, when a photo is taken, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electrical signal, and the camera photosensitive element transmits the electrical signal to the ISP for processing and conversion into an image visible to the naked eye. The ISP can also perform algorithm optimization on the noise, brightness, and skin color of the image, and can optimize parameters such as exposure and color temperature of a shooting scene. In some embodiments, the ISP may be provided in the camera 193.
The camera 193 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image to the photosensitive element. The photosensitive element may be a Charge Coupled Device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The light sensing element converts the optical signal into an electrical signal, which is then passed to the ISP where it is converted into a digital image signal. And the ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard RGB, YUV and other formats. In some embodiments, electronic device 100 may include N cameras 193, N being a positive integer greater than 1.
For example, the N cameras 193 may include: one or more front cameras and one or more rear cameras. For example, the electronic device 100 is a mobile phone. The mobile phone comprises at least one front camera. The front camera is disposed on the front side of the mobile phone, such as the front camera 201 shown in fig. 2 (a). In addition, the mobile phone comprises at least one rear camera. The rear camera is arranged on the back side of the mobile phone. Thus, the front camera and the rear camera face different directions.
In some embodiments, the electronic device may enable at least one of the N cameras 193 to take a picture and generate a corresponding photo or video. For example, one front camera of the electronic apparatus 100 is used alone for shooting. As another example, a rear camera of the electronic apparatus 100 is used alone for shooting. For another example, two front cameras are simultaneously started to shoot. For another example, two rear cameras are simultaneously started to shoot. For another example, a front camera and a rear camera are simultaneously started to shoot and the like.
It is understood that when one camera 193 is activated for shooting alone, this may be referred to as a single-shot mode, such as a front shot mode (also referred to as a single-front mode) or a rear shot mode (also referred to as a single-rear mode). When multiple cameras 193 are simultaneously activated for shooting, this may be collectively referred to as a multi-shot mode, such as a front-front mode, a front-rear mode, a rear-rear mode, and a picture-in-picture mode.
For example, a front camera and a rear camera are simultaneously enabled. After the front camera and the rear camera are started to shoot, the electronic device can render and merge the image frames acquired by the two cameras. The rendering and merging can be splicing of the image frames acquired by the different cameras. For example, after vertical-screen shooting in the front-rear mode, the image frames collected by the different cameras can be spliced up and down. For another example, after horizontal-screen shooting in the rear-rear mode, the image frames collected by the different cameras can be spliced left and right. For another example, after shooting in the picture-in-picture mode, the image frame collected by one camera can be embedded in the image frame collected by the other camera. The merged frames are then encoded to generate the picture.
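The three splicing variants just described can be sketched as follows. This is a hypothetical illustration that treats each frame as a list of pixel rows; the function names are invented for illustration, not taken from the patent.

```python
def merge_top_bottom(front, rear):
    """Portrait front-rear mode: stack the two frames vertically."""
    return front + rear

def merge_left_right(a, b):
    """Landscape rear-rear mode: splice the two frames side by side."""
    return [row_a + row_b for row_a, row_b in zip(a, b)]

def merge_picture_in_picture(big, small, top=0, left=0):
    """Picture-in-picture mode: embed `small` into `big` at (top, left)."""
    out = [row[:] for row in big]  # copy rows so the original frame is untouched
    for r, row in enumerate(small):
        out[top + r][left:left + len(row)] = row
    return out

front = [[1, 1], [1, 1]]
rear = [[2, 2], [2, 2]]
merge_top_bottom(front, rear)   # → [[1, 1], [1, 1], [2, 2], [2, 2]]
merge_left_right(front, rear)   # → [[1, 1, 2, 2], [1, 1, 2, 2]]
merge_picture_in_picture([[0] * 4 for _ in range(4)], [[9]], top=1, left=1)
```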
In addition, after a front camera and a rear camera are started to shoot video, the front camera collects one video stream and caches it, and the rear camera collects another video stream and caches it. Then, the electronic device 100 renders and merges the two cached video streams frame by frame, that is, it renders and merges the video frames with the same or matching acquisition time points in the two streams. The merged stream is then encoded to generate a video file.
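The frame-by-frame matching of the two cached streams might look like the following hypothetical sketch, which pairs frames whose capture timestamps are the same or closest within a tolerance. The names and the tolerance value are assumptions for illustration only.

```python
def merge_streams(stream_a, stream_b, tolerance_ms=20):
    """Pair frames from two cached streams by capture timestamp.

    Each stream is a list of (timestamp_ms, frame) tuples. For every frame
    in stream_a, pick the stream_b frame with the closest capture time;
    the resulting pairs would then be rendered/merged and encoded.
    """
    pairs = []
    for t_a, frame_a in stream_a:
        t_b, frame_b = min(stream_b, key=lambda fb: abs(fb[0] - t_a))
        if abs(t_b - t_a) <= tolerance_ms:  # only accept close-enough matches
            pairs.append((frame_a, frame_b))
    return pairs

a = [(0, "a0"), (33, "a1"), (66, "a2")]   # ~30 fps front-camera stream
b = [(1, "b0"), (34, "b1"), (70, "b2")]   # rear-camera stream, slightly offset
merge_streams(a, b)  # → [('a0', 'b0'), ('a1', 'b1'), ('a2', 'b2')]
```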
The digital signal processor is used for processing digital signals, and can process other digital signals besides digital image signals. For example, when the electronic device 100 selects a frequency bin, the digital signal processor is used to perform fourier transform or the like on the frequency bin energy.
Video codecs are used to compress or decompress digital video. The electronic device 100 may support one or more video codecs. In this way, the electronic device 100 may play or record video in a variety of encoding formats, such as: moving Picture Experts Group (MPEG) 1, MPEG2, MPEG3, MPEG4, and the like.
The NPU is a neural-network (NN) computing processor, which processes input information quickly by referring to a biological neural network structure, for example, by referring to a transfer mode between neurons of a human brain, and can also learn by itself continuously. Applications such as intelligent recognition of the electronic device 100 can be implemented by the NPU, for example: image recognition, face recognition, speech recognition, text understanding, and the like.
The audio module 170 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. The audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be disposed in the processor 110, or some functional modules of the audio module 170 may be disposed in the processor 110. The speaker 170A, also called a "horn", is used to convert the audio electrical signal into an acoustic signal. In this way, the electronic device 100 may play audio data, such as video music, and the like.
The pressure sensor is used for sensing a pressure signal and converting it into an electrical signal. In some embodiments, the pressure sensor may be disposed on the display screen 194. The gyro sensor may be used to determine the motion posture of the electronic device 100; it can detect the magnitude and direction of gravity when the electronic device 100 is stationary, and can also be used to identify the posture of the electronic device 100 for applications such as landscape/portrait switching. The touch sensor, also known as a "touch panel", may be disposed on the display screen 194; the touch sensor and the display screen 194 form a touch screen, also called a "touchscreen". The touch sensor is used to detect a touch operation applied to or near it, and can pass the detected touch operation to the application processor to determine the touch event type.
For clarity and conciseness of the following description of various embodiments, a brief introduction to related concepts or technologies is first given:
after shooting a video with the electronic device, the user can edit the shot video by operating the electronic device, such as configuring video music, adding animation special effects, and adding transition special effects. In this way, the secondarily created video is more vivid and rich and matches the user's creative intent. Adding a transition special effect makes the transition of the video content more natural and the content presented by the video richer.
However, in the related art, on one hand, if a transition special effect already exists in the video and the user manually changes it, the operation process is complicated; if the user does not change the original transition special effect, it may not match the video music style selected by the user. On the other hand, after some transition special effects (such as a superposition transition) are added to the video, the video length is shortened, which affects the video quality as well as the user experience.
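The duration problem described above can be shown with a toy calculation; the clip lengths are hypothetical numbers chosen for illustration.

```python
# A conventional superposition (dissolve) transition reuses footage from
# both clips during the overlap, so the result is shorter than the sum of
# the clips. Rendering the transition over inserted freeze frames instead
# (as in this application) keeps the full duration. Times in seconds.

clip1, clip2, overlap = 10.0, 8.0, 1.0

conventional = clip1 + clip2 - overlap  # overlap consumes real footage
freeze_based = clip1 + clip2            # freeze frames supply the overlap
```

With these numbers the conventional result is 17.0 s while the freeze-frame result keeps the full 18.0 s.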
To solve the problem, the embodiment of the present application provides a transition special effect adding method. The method according to the embodiment of the present application will be described below by taking the electronic device 100 as a mobile phone as an example.
In some embodiments, the above-described method may be used to process video data taken without selecting an effect template. It can be understood that, in a scene where an effect template is not selected for shooting, the mobile phone may directly store the shot original video after the shooting is finished.
In other embodiments, the above method may also be used to process video data taken with an effect template selected. It can be understood that in the scene of selecting the effect template for shooting, after the mobile phone finishes shooting the video, the video can be edited according to the effect template to obtain the video works with the similar style to the effect template, and then the video works are stored.
It will be appreciated that using the above effect templates helps to simplify creation when producing a video work with soundtracks, filters, and special effects. Illustratively, an effect template includes configuration items such as filters, stickers, and a plurality of selectable transition special effects. Meanwhile, each effect template also corresponds to video music.
The following description takes as an example a scenario in which no effect template is selected, the mobile phone shoots video data 1, and the video data 1 is then processed by using the above method:
as shown in fig. 2 (a), the handset includes a main interface. The main interface is also referred to as a desktop 202. The main interface can be a user interface displayed after the mobile phone is unlocked. The home interface may include an icon of an installed Application (APP), such as an icon 203 of a camera APP.
In some embodiments, the mobile phone may receive an operation of the user on the main interface and start the APP indicated by the operation. Illustratively, as shown in (a) of fig. 2, the mobile phone may receive an operation of the icon 203 by the user, such as a click operation, and in response to the operation, start the camera APP. After the camera APP starts, an application interface provided by the camera APP may be displayed. For example, a framing interface for realizing the photographing function, that is, an interface 204 as shown in (b) in fig. 2 is displayed.
In some embodiments, the user may switch between different functional modes of the camera APP in the interface 204, such as a portrait functional mode, a photographing functional mode, a video recording functional mode, a multi-mirror video recording functional mode, and the like. That is, the handset may receive operation 1 of the user in the interface 204, the operation 1 being used to instruct the camera APP to switch different functional modes.
Illustratively, the interface 204 includes controls corresponding to a plurality of functional modes of the camera APP, such as a portrait control, a photographing control, a video recording control, a multi-mirror video recording control, and the like. The shooting control is in a selected state and used for prompting a user that the current viewing interface is used for realizing a shooting function.
During the period of displaying the interface 204, the mobile phone may receive an operation of a user on any one of the portrait control, the video recording control, and the multi-lens video recording control, and determine a switched functional mode based on the operation, and in addition, the mobile phone may also display a view finding interface for implementing the functional mode.
For example, when the mobile phone receives an operation of the portrait control by the user, such as a click operation, a viewing interface for implementing a portrait shooting function may be displayed. When the mobile phone receives an operation of the video recording control by the user, such as a click operation, a viewing interface for realizing a video recording function can be displayed. In addition, as shown in fig. 2 (b), when the mobile phone receives an operation, such as a click operation, of the multi-lens video recording control by the user, a viewing interface for implementing the multi-lens video recording function may be displayed, that is, the interface 205 shown in fig. 2 (c), which is also referred to as a fourth interface.
Both the video recording function and the multi-lens video recording function can record video data; the difference between the two lies in the lens modes enabled at the start of recording. Under the video recording function, the mobile phone can respond to a user operation and start a single-shot lens mode, such as the single-front mode or the single-rear mode, to shoot video. Under the multi-lens video recording function, the mobile phone can respond to a user operation and start a multi-camera lens mode, such as the front-rear mode, the rear-rear mode, or the picture-in-picture mode, to shoot video. The method provided by the embodiment of the present application is applicable not only to video data shot under the video recording function but also to video data shot under the multi-lens video recording function, and the implementation principles are the same. In the following embodiments, the method provided in the embodiments of the present application is mainly described by taking the multi-lens video recording function as an example.
Under the multi-lens video recording function, as shown in (c) of fig. 2, the viewfinder interface (i.e., the interface 205) displayed by the mobile phone includes a plurality of view frames, such as the view frame 206 and the view frame 207. The arrangement of the view frame 206 and the view frame 207 is related to the posture of the mobile phone. For example, in a scene in which the gyro sensor of the mobile phone recognizes that the mobile phone is in the portrait state, the view frame 206 and the view frame 207 are arranged up and down; in a scene in which the gyro sensor recognizes that the mobile phone is in the landscape state, they are arranged left and right.
In addition, the view frames 206 and 207 correspond to cameras, respectively. For example, the viewfinder 206 corresponds to the camera 1 (e.g., the rear camera a), so that the viewfinder 206 can be used to display the video stream uploaded by the camera 1. The viewfinder 207 corresponds to the camera 2 (e.g., a front camera), so that the viewfinder 207 can be used to display the video stream uploaded by the camera 2. It is understood that the camera corresponding to each frame (e.g., the frame 206 and the frame 207) can be adjusted according to the operation of the user. After the camera corresponding to each view finder is changed, it means that the lens mode used by the mobile phone is changed accordingly, and several scenes for switching the lens mode will be introduced in the subsequent embodiments, which are not described herein again.
In addition, in the interface 205, the mobile phone may receive an operation 2 of the user, where the operation 2 is used to trigger the mobile phone to directly start video shooting without selecting any effect template. Illustratively, a control 208, referred to as a first control, is included in the interface 205 for instructing the start of shooting. When the mobile phone receives a third operation, such as a click operation, on the control 208 by the user, a recording interface, such as a fifth interface, for example, the interface 209 shown in fig. 2 (d), may be displayed. The interface 209 also includes the view frame 206 and the view frame 207. In this way, the interface 209 of the mobile phone can display the video streams collected by the front camera and the rear camera a in real time. Meanwhile, the mobile phone can also render and merge the video streams collected by the front camera and the rear camera a, then encode them to generate video data, and store the video data. During the shooting process, the video frames of the video data gradually increase.
In addition, during video shooting, the mobile phone may receive an operation 3 of the user in the interface 209, where the operation 3 may be an operation instructing to switch the lens mode. The mobile phone can respond to the operation 3, and start different cameras or different camera combinations to collect video streams, so that users can create videos with various scenes and rich contents.
Illustratively, a control for instructing to switch the lens mode, also called a fifth control, such as the control 301 shown in fig. 3 (a), may be included in the interface 209. The icon of the control 301 is used to indicate the currently enabled lens mode. After the mobile phone receives the user's operation on the control 301, as shown in fig. 3 (b), a lens mode selection window, e.g., the window 302, may be displayed in the interface 209. The window 302 lists a plurality of selectable lens modes, such as the front-rear mode, the rear-rear mode, the picture-in-picture mode, the single-front mode, and the single-rear mode. In the window 302, the front-rear mode is in a selected state. In this scenario, the mobile phone may receive a user's selection operation on the rear-rear mode, the picture-in-picture mode, the single-front mode, or the single-rear mode, and switch the used lens mode in response to the selection operation.
The operation of switching the lens mode by operating the fifth control may be referred to as a sixth operation. The time at which the sixth operation is received, at its corresponding position in the video data 1, may be referred to as a second time point. The mode before the lens mode is switched may be referred to as a first lens mode, and the mode after the switch may be referred to as a second lens mode. The video recording interface before the lens mode switch may be referred to as a fifth interface, and the recording interface after the switch may be referred to as a seventh interface.
For example, when the mobile phone receives a selection operation of the user on the picture-in-picture mode, the mobile phone may switch the lens mode to the picture-in-picture mode. That is, as in fig. 3 (c), the cellular phone can switch the display interface 303. Also included in the interface 303 are a view frame 206 and a view frame 207. Therein, the viewfinder 206 will continue to be used for displaying the video stream uploaded by the rear camera a, and the viewfinder 207 will also continue to be used for displaying the video stream uploaded by the front camera.
In addition, during the switching, the view frame 207 shrinks, the view frame 206 enlarges, and the view frame 207 is superimposed on the view frame 206. In this process, the camera parameters of the front camera and the rear camera a are also adjusted. While the camera parameters are being adjusted, the video streams collected by the front camera and the rear camera a may not be uploaded in time, which causes a pause section to appear in the captured video data 1. That is, as shown in fig. 3 (c), the view frame 206 and the view frame 207 of the interface 303 may briefly show a black screen. Of course, after the camera parameter adjustment is completed, as shown in fig. 3 (d), the view frame 207 of the interface 303 displays the video stream uploaded by the front camera in real time, and the view frame 206 displays the video stream uploaded by the rear camera a in real time.
Referring to fig. 4, fig. 4 conceptually illustrates the effect on the video data 1 after the lens mode is switched (from the front-rear mode to the picture-in-picture mode) during the process of shooting the video by the mobile phone. The relative time axis is a time axis created based on the video data 1; time 00:00 on the relative time axis corresponds to the capture time of the first video frame (also called the first frame) of the video data 1.
The switching operation may momentarily affect the video stream return of the front camera and the rear camera a, so that video frames are missing for a short period after the switch; once the video streams are received again, recording continues normally.
In the embodiment of the present application, as shown in fig. 5, the mobile phone may insert multiple frames of a substitute frame 1, also referred to as a first substitute frame, into the period in which video frames are missing.
In some examples, after the handset determines that the user indicates a switch to picture-in-picture mode, the handset may freeze video frame 401 (the last frame in video clip 1) resulting in substitute frame 1. This video frame 401 may be referred to as a third video frame. In this scenario, substitute frame 1 displays the same picture content as video frame 401. After the video stream returned by each camera is received again, the mobile phone cancels the frame freeze for the video frame 401.
In other examples, after determining that the user indicates switching to the picture-in-picture mode, the mobile phone may insert pre-configured image frames, such as black image frames or white image frames, after the video frame 401 until the video stream returned by each camera is received again, and then stop the insertion. Understandably, the inserted image frames may also be collectively referred to as substitute frame 1.
In other examples, the handset may mark video frame 401 after determining that the user indicates a switch to picture-in-picture mode. Thus, after the capture of the video data 1 is completed, the handset automatically inserts the multi-frame substitute frame 1 after the video frame 401. The substitute frame 1 may be the same image frame as the video frame 401, or may be a preconfigured white image frame or black image frame.
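The substitute-frame strategies above share the same shape: fill the gap after the last received frame until the streams resume. A hypothetical sketch, with frames as simple labels and all names invented for illustration:

```python
BLACK = "black"  # stands in for a pre-configured black image frame

def fill_gap(frames_before, n_missing, strategy="freeze"):
    """Insert substitute frame 1 after the last received frame (e.g. video
    frame 401) to cover `n_missing` missing frames."""
    last = frames_before[-1]
    if strategy == "freeze":
        subs = [last] * n_missing    # freeze the last frame
    else:
        subs = [BLACK] * n_missing   # insert pre-configured image frames
    return frames_before + subs

fill_gap(["f400", "f401"], 3)
# → ['f400', 'f401', 'f401', 'f401', 'f401']
```

The same routine could run either live (inserting while waiting for the streams to resume) or after capture finishes (inserting after a marked frame), matching the variants described above.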
As another example, in the case where the window 302 is included in the interface 209, as shown in (a) of fig. 6, the mobile phone may receive the user's selection of the rear-rear mode. The mobile phone can then respond to the selection operation and switch the lens mode in use to the rear-rear mode. That is, as shown in (b) of fig. 6, the mobile phone switches to displaying the interface 601. The interface 601 also includes a viewfinder 206 and a viewfinder 207. The viewfinder 206 is used for displaying the video stream uploaded by the rear camera a, and the viewfinder 207 is used for displaying the video stream uploaded by the rear camera b.
In addition, after the rear-rear mode is selected in the window 302, while the interface 209 is displayed, that is, while the mobile phone is shooting video in the front-rear mode, the mobile phone may receive an operation by the user on the control 602, for example a click operation, and likewise switch the lens mode to the rear-rear mode.
In the process of switching to the rear-rear mode, the relative position and size of the viewfinder 206 and the viewfinder 207 do not change; the mobile phone starts the rear camera b, closes the front camera, and associates the rear camera b with the viewfinder 207. In this way, the viewfinder 206 continues to display the video stream uploaded by the rear camera a, and the viewfinder 207 displays the video stream uploaded by the rear camera b.
In this process, the video stream returned by the rear camera a is not affected, as shown in (b) of fig. 6, so the image displayed in the viewfinder 206 is unaffected during the lens switch. However, due to hardware reaction delay, there is a time gap between the front camera turning off and the rear camera b normally uploading its video stream. In this time gap, the display in the viewfinder 207 may be affected; as shown in (b) of fig. 6, the viewfinder 207 briefly shows a black screen. Of course, once the rear camera b can normally upload its video stream, as shown in (c) of fig. 6, the viewfinder 207 of the interface 601 displays the video stream uploaded by the rear camera b in real time, and the viewfinder 206 displays the video stream uploaded by the rear camera a in real time.
Referring to fig. 7A, fig. 7A conceptually shows an exemplary diagram of the generation of video data 1 in a case where a lens mode switch (from the front-rear mode to the rear-rear mode) occurs while the mobile phone is shooting the video.
As shown in (a) of fig. 7A, on the relative time axis of video data 1, there are multiple video frames with picture abnormalities in the interval affected by the switch. These abnormal video frames are caused by the lens mode switch.
In some embodiments, the handset may mark (dot) the captured video frames when instructing activation of the rear camera b, and stop marking after determining that the rear camera b has normally returned its video stream. In this way, as shown in (b) of fig. 7A, the handset can remove the video frames corresponding to the marked positions, obtaining video clip 1 and video clip 2. Then, in order to ensure the continuity of the video data, as shown in (c) of fig. 7A, the handset may further add substitute frames 1 between video clip 1 and video clip 2, thereby obtaining video data 1.
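This mark-and-remove flow can be sketched as below. The names are hypothetical; frames are plain values, and the marked indices are assumed to be one contiguous run.

```python
def bridge_marked_gap(frames, marked):
    """Remove the frames dotted during the lens switch, then bridge video
    clip 1 and video clip 2 with an equal number of substitute frames 1
    (freezes of clip 1's last frame), keeping the total length unchanged."""
    first, last = min(marked), max(marked)
    clip1, clip2 = frames[:first], frames[last + 1:]
    substitutes = [clip1[-1]] * len(marked)   # substitute frame 1
    return clip1 + substitutes + clip2
```

Using a freeze of clip 1's last frame (rather than solid black/white frames) keeps the picture continuous with the end of clip 1, which is what the transition special effect is later superimposed on.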
In other embodiments, after the mobile phone determines that a new camera needs to be activated (i.e., the rear camera b), the mobile phone may freeze the last acquired video frame; for example, the video frame 701 in fig. 7B is frame-frozen to obtain substitute frame 1. After it is determined that the rear camera b is normally returning its video stream, the frame freeze on video frame 701 is cancelled, and video data 1 is generated through normal encoding of the video streams uploaded by the rear camera a and the rear camera b.
It can be seen that the addition of substitute frame 1 mentioned in the above embodiments may occur after the video capture is completed, or during the video capture.
In addition, switching the lens mode does not interrupt the normal shooting of the video. For example, after the mobile phone switches to the rear-rear mode, it may display the interface 601 and continue to shoot video. Thus, the number of video frames in video data 1 continues to increase.
The above examples enumerate switching the lens mode from the front-rear mode to the picture-in-picture mode, and from the front-rear mode to the rear-rear mode. In practical applications, similar problems also exist when switching between other lens modes, and they can likewise be solved by inserting substitute frame 1 in the manner described in the foregoing examples; details are not repeated here.
In addition, it is understood that, where substitute frame 1 is added to video data 1, the picture appears frozen at that point during playback. In the embodiment of the present application, while adding the substitute frames to video data 1, the mobile phone may therefore also superimpose a transition special effect, referred to as transition special effect 1 or the second transition special effect, on the multiple frames of substitute frame 1.
In some examples, transition special effect 1 may be any type of transition special effect pre-specified in the mobile phone, such as one of a left transition, right transition, rotation transition, superimposition transition, blur transition, melt transition, black-field transition, white-field transition, zoom-in transition, zoom-out transition, up transition, and down transition. Among these, the left and right transitions are only applicable to video shot in vertical-screen (portrait) orientation, and the up and down transitions are only applicable to video shot in horizontal-screen (landscape) orientation. In other examples, transition special effect 1 may be a transition randomly selected by the mobile phone from the above transition special effects.
In this way, transition special effect 1 can better link the video clips before and after substitute frame 1, that is, video clip 1 and video clip 2 transition into each other more smoothly, which improves the user's viewing experience and the quality of the shot video.
In addition, in the area where transition special effect 1 is added, the mobile phone may also mark the substitute frames 1 on which the transition special effect is actually superimposed, for example with mark 1, so that the mobile phone can later identify the position where the transition was added.
In other embodiments, during video capture, the handset may also receive an operation instructing it to pause capturing, and may pause the capture in response. Illustratively, as shown in (a) of fig. 7C, while the mobile phone displays the interface 209, that is, while the mobile phone is capturing video, it may receive an operation by the user, such as a click operation, on a pause control (e.g., the control 701), that is, the second control. The handset may pause encoding and storing video frames in response to this operation.
At this time, while shooting is paused, as shown in (b) of fig. 7C, the mobile phone may display an interface 702, referred to as the sixth interface. The shooting-duration timer in the interface 702 stops counting. In addition, the interface 702 also includes the viewfinder 206 and the viewfinder 207, which continue to display the video streams uploaded by the corresponding cameras, so that the user can preview the framing effect.
In other embodiments, the mobile phone may also receive an operation by the user instructing it to resume shooting and, in response, display the interface 209 again. Illustratively, as shown in (b) of fig. 7C, the mobile phone receives a fourth operation by the user on the control 703 (i.e., the third control) in the interface 702, and, as shown in (c) of fig. 7C, may display the interface 209 again and continue shooting the video.
Like switching the lens mode, pausing shooting has some effect on the captured video. However, unlike switching the lens mode, pausing shooting neither affects the continuity of the resulting video data 1 nor requires inserting substitute frames. In addition, the time point at which the instruction to pause shooting is received is referred to as the first time point.
Referring to fig. 7D, fig. 7D conceptually shows an exemplary diagram of the generation of video data 1 in a case where a pause-shooting event occurs while the mobile phone is shooting the video.
As shown in fig. 7D, starting from time 00:00, the mobile phone shoots normally, during which it obtains video clip 1. Then, at the first time point, the mobile phone receives the user's operation instructing it to pause shooting and pauses shooting. While shooting is paused, the mobile phone pauses encoding and storing video frames, and also pauses accumulating the shooting duration. After the mobile phone receives the user's operation instructing it to resume shooting, it resumes shooting, that is, resumes encoding and storing video frames, thereby obtaining video clip 2. In this way, video clip 1 and video clip 2 constitute video data 1. Of course, the picture difference between the last frame of video clip 1 and the first frame of video clip 2 can be large. In this scenario, the handset does not need to insert a substitute frame between video clip 1 and video clip 2, but can mark the last frame of video clip 1 and the first frame of video clip 2, for example with mark 2.
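A toy model of this pause/resume flow is sketched below (the class and method names are illustrative assumptions): encoding stops while paused, and the frames on either side of the pause receive mark 2.

```python
class Recorder:
    def __init__(self):
        self.frames = []          # list of (frame, has_mark2) pairs
        self.paused = False
        self.mark_next = False

    def on_frame(self, frame):
        if self.paused:           # previewed only, not encoded or stored
            return
        self.frames.append((frame, self.mark_next))
        self.mark_next = False

    def pause(self):              # mark 2 on the last captured frame
        self.paused = True
        if self.frames:
            frame, _ = self.frames[-1]
            self.frames[-1] = (frame, True)

    def resume(self):             # mark 2 on the first frame after resuming
        self.paused = False
        self.mark_next = True
```

Video clip 1 is everything up to and including the first frame carrying mark 2, and video clip 2 starts at the next frame carrying mark 2; no substitute frames are inserted between them.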
In addition, in some embodiments, the mobile phone may also receive an operation 4 by the user instructing it to stop shooting. The handset can then stop shooting the video in response to operation 4.
Illustratively, while the interface 209 is displayed, that is, while the mobile phone is shooting video data, as shown in (a) of fig. 8, the interface 209 further includes a control for instructing it to stop shooting, also referred to as the fourth control, such as the control 801. The mobile phone may receive a fifth operation, such as a click operation, by the user on the control 801, and in response stop shooting the video and save the shot video data 1, also referred to as the first video data.
In addition, as shown in fig. 8 (b), after the mobile phone stops capturing the video, the interface 205 may also be displayed again.
As another example, the mobile phone may stop shooting and save the shot video data 1 in response to an operation by the user instructing it to exit the camera APP, such as a swipe-up operation on the interface 601. In addition, the mobile phone can display the main interface again.
After video data 1 is saved, the mobile phone can display the shot video data 1 in response to an operation 5 by the user, so that the user can conveniently edit video data 1.
Illustratively, in a scene where shooting is stopped and the cell phone displays the interface 205, a thumbnail of the video data 1, that is, a first thumbnail, such as the icon 802, is included in the interface 205. The interface 205 may also be an example of a first interface. In this example, the first interface may be an interface provided by a camera application. The mobile phone may receive a first operation of the icon 802 by the user during the display of the interface 205, and in response to the operation, display a video editing interface, also referred to as a second interface, such as the interface 803 shown in fig. 8 (c). The interface 803 is used to display the video data 1.
Also exemplarily, in a scene where the shooting is stopped and the cell phone displays the interface 205, the cell phone may again display the main interface of the cell phone, that is, the desktop 202, according to an operation of the user instructing to exit the camera APP, such as a swipe operation. Also included in the desktop 202 are icons of a gallery APP. Thus, as shown in fig. 9 (a), during displaying the desktop 202, the mobile phone may receive a seventh operation, e.g., a click operation, for the icon 901 of the gallery APP, and in response to the operation, display an application interface provided by the gallery APP, e.g., the interface 902 shown in fig. 9 (b). Of course, in a scene where shooting is stopped and the mobile phone displays the main interface, the mobile phone may directly receive a click operation for the icon 901 of the gallery APP, and in response to the click operation, display an application interface provided by the gallery APP, such as the interface 902 shown in (b) of fig. 9.
In some embodiments, the interface 902 may display thumbnails of various picture and video assets, and the interface 902 may be an example of a first interface. In this example, the first interface may be an application interface provided by a gallery APP. The picture resources or video resources may be captured and stored by a mobile phone, for example, a thumbnail 903 of the video data 1, or may also be thumbnails of images, videos and the like downloaded from the internet, or may also be thumbnails of images, videos and the like synchronized to the cloud.
In some embodiments, the cell phone may receive a user selection operation of a thumbnail of any video in the interface 902, and in response to the user selection operation of the thumbnail of any video, the cell phone may display a corresponding video editing interface. For example, in response to a user clicking on the thumbnail 903 in the interface 902, the mobile phone may display an interface 803 shown in (c) in fig. 9, where the interface 803 is a video editing interface corresponding to the video data 1.
In other embodiments, the mobile phone may automatically perform secondary creation on video data 1 according to the user's operation on the video editing interface, for example configuring video music, adding transition special effects, and the like. Illustratively, as shown in fig. 10, the interface 803 further includes a control for instructing editing of video data 1, such as the one-touch movie control 1001. When the mobile phone receives a second operation, such as a click operation, by the user on the one-touch movie control 1001, the mobile phone may automatically edit video data 1.
In some embodiments, as shown in fig. 11, the mobile phone may automatically edit the video data 1, which may include the following steps:
S101: the mobile phone determines an effect template matching video data 1.
In some embodiments, effect templates of multiple styles can be configured in the mobile phone in advance, each corresponding to one piece of video music. In addition, each effect template is also configured with a corresponding filter, special effect, transition, and sticker.
In some embodiments, the mobile phone may analyze the picture content of the video data 1 by using an artificial intelligence model, and determine an effect template matching the video data 1, which is also referred to as the effect template 1 or the first effect template. Wherein the first effect template corresponds to a first style, e.g., a relaxing style.
For example, the artificial intelligence model of the mobile phone may search for similar videos according to the picture content of video data 1 and acquire the video music of those similar videos; the corresponding effect template 1 is then determined according to the acquired video music. As another example, the artificial intelligence model searches for similar videos according to the picture content of video data 1, determines a plurality of effect templates belonging to the same style according to the style names of the similar videos, and then randomly selects effect template 1 from the determined effect templates. In other embodiments, the mobile phone may simply select an effect template at random as effect template 1.
S102: the mobile phone processes video data 1 according to effect template 1.
For example, the mobile phone may adjust the volume of the original audio track of video data 1 to zero and add the video music (i.e., the first music) of effect template 1 to video data 1, so that the video music matches the video pictures of video data 1. In other embodiments, the original track volume may also be adjusted to another decibel value according to a user operation.
As another example, the mobile phone may add a filter, a special effect, a transition, and a sticker corresponding to the effect template 1 to the video data 1.
During the process of adding the transition effect, the mobile phone can replace the original transition effect in the video data 1 (e.g., the transition effect 1 superimposed on the substitute frame 1).
It can be understood that the degree of adaptation between an effect template and the various transition special effects differs. Generally, a transition special effect with a higher adaptation degree suits the style of the effect template and its video music better, while one with a lower adaptation degree suits them less well. In this embodiment, replacing transition special effect 1 avoids the problem that the original transition special effect 1 clashes with the style of the processed video after the video data is processed with the effect template.
In some embodiments, as shown in fig. 12, the mobile phone may first identify whether transition special effect 1 exists in video data 1, for example by detecting whether video data 1 contains video frames carrying mark 1. Where the mark is found, the video frames carrying mark 1 (that is, the substitute frames 1 on which transition special effect 1 is superimposed) are deleted, dividing video data 1 into video clip 1 and video clip 2. Then, the handset generates multiple substitute frames 2 and multiple substitute frames 3, which may be collectively referred to as second substitute frames. Here, substitute frame 2 may be the same as the last frame (i.e., the third video frame) of video clip 1, and substitute frame 3 the same as the first frame (i.e., the fourth video frame) of video clip 2. In addition, the total number of substitute frames 2 and 3 is not less than the number of deleted video frames, ensuring that the length of the final video data 1 is not affected. Then, the mobile phone determines a transition special effect 2 (i.e., the third transition special effect) according to the adaptation degree between effect template 1 and each transition special effect, and superimposes transition special effect 2 onto substitute frames 2 and 3, thereby joining video clip 1 and video clip 2.
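The replacement step can be sketched as below. This is a simplified model: frames are (image, tag) pairs, the mark-1 run is assumed contiguous, and the function and tag names are assumptions for illustration.

```python
def replace_transition_1(frames, transition_2):
    """Delete the frames tagged 'mark1' (substitute frames 1 carrying
    transition special effect 1), regenerate the same number of second
    substitute frames, and tag them with the new transition special effect 2."""
    idx = [i for i, (_, tag) in enumerate(frames) if tag == "mark1"]
    clip1, clip2 = frames[:min(idx)], frames[max(idx) + 1:]
    n = len(idx)
    half = (n + 1) // 2
    sub2 = [(clip1[-1][0], transition_2)] * half        # freeze of clip 1's last frame
    sub3 = [(clip2[0][0], transition_2)] * (n - half)   # freeze of clip 2's first frame
    return clip1 + sub2 + sub3 + clip2
```

Inserting exactly as many second substitute frames as were deleted keeps the length of video data 1 unchanged, matching the constraint stated above.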
In some embodiments, the degree of adaptation between an effect template and a transition special effect may be quantized as a matching weight. Thus, when determining transition special effect 2, the mobile phone can randomly select a transition special effect matching effect template 1 from the various transition special effects, weighted by the matching weights between effect template 1 and each transition special effect. Understandably, a transition special effect with a higher matching weight is relatively more likely to be selected as transition special effect 2, and one with a lower matching weight relatively less likely.
In some examples, the matching weights of the effect template and the transition special effect may be preconfigured. Illustratively, as shown in table 1 below:
TABLE 1
Effect template: hello summer
Superimposition transition: 50%    Blur transition: 0%     Melt transition: 0%
Up transition: 50%                 Down transition: 50%    Left transition: 50%
Right transition: 50%              Black-field transition: 90%
White-field transition: 90%        Zoom-in transition: 90%
Zoom-out transition: 90%           Rotation transition: 30%
Table 1 above illustrates the correspondence between different effect templates and their video music, style, and matching weights for the different transition special effects. The percentage value for each transition special effect in the table is the matching weight between that transition special effect and the effect template.
Take the hello summer effect template recorded in table 1 as an example. The matching weight between this effect template and the superimposition transition is 50%, i.e., the superimposition transition has a 50% probability of being selected as the matching transition special effect. The matching weights with the blur transition and the melt transition are both 0%, so neither can be selected as the matching transition special effect. The matching weights with the up transition and the down transition are both 50%, so when the video data 1 to be processed was shot in horizontal-screen orientation, each has a 50% probability of being selected as the matching transition special effect. Likewise, the matching weights with the left transition and the right transition are both 50%, so when the video data 1 to be processed was shot in vertical-screen orientation, each has a 50% probability of being selected. The matching weights with the black-field transition, the white-field transition, the zoom-in transition, and the zoom-out transition are all 90%, giving each a 90% probability of being selected as the matching transition special effect. Finally, the matching weight with the rotation transition is 30%, giving it a 30% probability of being selected.
That is, the mobile phone can randomly select the transition special effect 2 that replaces transition special effect 1 using the matching weight of each transition special effect. This selection mode is highly flexible while still ensuring a strong association between the selected transition special effect 2 and the effect template.
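This weighted random selection can be sketched with `random.choices`. The weight table below reproduces the hello summer values described above; the function name and orientation handling are assumptions for illustration.

```python
import random

# Matching weights of the hello summer effect template (from Table 1).
HELLO_SUMMER = {
    "superimposition": 50, "blur": 0, "melt": 0,
    "up": 50, "down": 50, "left": 50, "right": 50,
    "black-field": 90, "white-field": 90,
    "zoom-in": 90, "zoom-out": 90, "rotation": 30,
}

def pick_transition(weights, portrait, rng=random):
    """Zero-weight transitions can never be chosen; left/right apply only
    to vertical-screen video and up/down only to horizontal-screen video."""
    banned = {"up", "down"} if portrait else {"left", "right"}
    pool = {t: w for t, w in weights.items() if w > 0 and t not in banned}
    names, ws = zip(*pool.items())
    return rng.choices(names, weights=ws, k=1)[0]
```

Note that the weights need not sum to 100: `random.choices` normalizes them, so a 90-weight transition is simply nine times more likely than a 10-weight one.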
In other embodiments, the video music in an effect template may also carry an association identifier with at least one transition special effect, and transition special effects carrying the association identifier may be used preferentially.
In some embodiments, in the process of adding transition effects, the mobile phone may also add a new transition special effect to video data 1. The position of the new transition special effect (i.e., the video frames to which it needs to be added) may be a position in video data 1 where the picture changes greatly, for example a position where shooting was paused during capture.
As described in the foregoing embodiment, when an operation instructing it to pause shooting is received while the mobile phone is shooting the video, mark 2 may be applied to the last video frame captured at that time (the first video frame). After the mobile phone receives an operation (for example, called the fourth operation) instructing it to resume shooting, mark 2 is also applied to the first video frame collected thereafter (the second video frame).
Thus, as one implementation, the mobile phone can determine whether there is a position in video data 1 where a transition needs to be added by means of mark 2. For example, the handset may check whether video data 1 includes video frames carrying mark 2; after identifying them, it can add a transition special effect between each pair of adjacent video frames carrying mark 2.
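Locating those positions is a simple scan; a minimal sketch follows, with the hypothetical representation of each frame as an (image, has_mark2) pair.

```python
def transition_positions(frames):
    """Return the indices i where frame i and frame i+1 both carry mark 2,
    i.e. the pause boundaries between which a transition should be added."""
    return [i for i in range(len(frames) - 1)
            if frames[i][1] and frames[i + 1][1]]
```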
As another implementation, the mobile phone may use an artificial intelligence model to identify, in video data 1, adjacent video frames whose picture-content similarity is lower than a preset value, and add a transition special effect between them.
The added transition special effect may also be referred to as transition special effect 3 or the first transition special effect. Transition special effect 3 may likewise be a matching transition special effect randomly selected by the mobile phone, according to the matching weights between the various transition special effects and effect template 1, from the left, right, rotation, superimposition, blur, melt, black-field, white-field, zoom-in, zoom-out, up, and down transitions.
The process of adding different types of transition special effect 3 is described below, taking as an example an adjacent video frame 1 and video frame 2 that both carry mark 2.
When the matching transition special effect 3 is any one of the left, right, rotation, black-field, white-field, zoom-in, zoom-out, up, or down transitions, the mobile phone can directly superimpose transition special effect 3 on video frame 1 and video frame 2 without additional processing; for the specific implementation, reference may be made to the related art, which is not described again here.
When the matching transition special effect 3 is any one of the superimposition, blur, or melt transitions, the mobile phone needs to add video frames to video data 1 before superimposing transition special effect 3 on video frame 1 and video frame 2, so that the duration of video data 1 is not shortened after the special effect is superimposed.
That is, in some embodiments, after the handset determines that the transition special effect 3 to be added between video frame 1 (the first video frame) and video frame 2 (the second video frame) is any one of the superimposition, blur, or melt transitions, the handset may add a substitute frame 4 (the first freeze frame) and a substitute frame 5 (the second freeze frame) between video frame 1 and video frame 2. Substitute frame 4 is a freeze frame of video frame 1, that is, the picture content of substitute frame 4 is the same as that of video frame 1; substitute frame 5 is a freeze frame of video frame 2, that is, the picture content of substitute frame 5 is the same as that of video frame 2.
As shown in fig. 13, when transition special effect 3 is the superimposition special effect, the mobile phone may overlap substitute frame 5 with video frame 1, with video frame 1 placed on top. At the same time, the handset may also overlap substitute frame 4 with video frame 2, with substitute frame 4 on top. After the overlapping is completed, a transparency-change special effect is added to video frame 1 and substitute frame 4: for example, the transparency of video frame 1 is driven from 0% to 50%, and the transparency of substitute frame 4 from 50% to 100%.
In addition, when transition special effect 3 is the blur special effect or the melt special effect, the mobile phone not only performs the same overlapping and controls the transparency change of video frame 1 and substitute frame 4 as above, but also superimposes the blur special effect or the melt special effect on the overlapped substitute frame 5, video frame 1, substitute frame 4, and video frame 2.
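The transparency ramp described above amounts to a cross-dissolve from video frame 1 to video frame 2. A minimal sketch follows, with frames modeled as lists of pixel intensities; the step count and the blend helper are illustrative assumptions.

```python
def blend(top, bottom, transparency):
    """Composite one frame over another; transparency is the top layer's
    transparency in [0, 1] (0 = fully opaque, 1 = fully transparent)."""
    return [t * (1 - transparency) + b * transparency
            for t, b in zip(top, bottom)]

def superimposition_transition(frame_1, frame_2, steps=4):
    """First half: video frame 1 over substitute frame 5 (a freeze of
    frame 2), its transparency ramping 0% -> 50%. Second half: substitute
    frame 4 (a freeze of frame 1) over video frame 2, ramping 50% -> 100%."""
    out = [blend(frame_1, frame_2, 0.5 * i / (steps - 1))
           for i in range(steps)]
    out += [blend(frame_1, frame_2, 0.5 + 0.5 * i / (steps - 1))
            for i in range(steps)]
    return out
```

Because substitute frames 4 and 5 are freezes of the two boundary frames, both halves reduce to blending frame 1 with frame 2 at an increasing mix ratio; a blur or melt effect would additionally filter each blended frame.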
In addition, in the foregoing embodiment, in the case of replacing transition special effect 1 in video data 1 with transition special effect 2, the mobile phone can check the length of video data 1 after the replacement. If it is the same as the length of the original video data 1, no additional processing is performed. If it is shorter than the original video data 1, the length of video data 1 can be kept unchanged by inserting substitute frames 2 before transition special effect 2 and/or substitute frames 3 after transition special effect 2.
In other embodiments, after processing video data 1 with the effect template, the mobile phone may further change the effect template used for processing video data 1, or change the video music alone, according to the user's operation.
Illustratively, as shown in (a) of fig. 14, after the cell phone processes video data 1 using the effect template, it may display a preview interface, referred to as the third interface, such as the interface 1401. The interface 1401 displays video data 2, also called the second video data. Video data 2 is the video obtained by processing video data 1 with effect template 1. Obviously, video data 2 includes the video frames of video data 1, and also includes transition special effect 3 and transition special effect 2.
The handset may receive an operation 6 by the user on the interface 1401, such as clicking the style control 1402, and in response, as shown in (b) of fig. 14, display an interface 1403. Interface 1403 is a guide interface that directs the user to select an effect template. It includes a plurality of template windows indicating different effect templates, such as window 1404, window 1405, window 1406, and window 1407. Window 1404 indicates an effect template named hello summer, window 1405 one named sunny, window 1406 one named HAPPY, and window 1407 one named xiao mei. Among these, the template window of the hello summer effect template is in the selected state, indicating that effect template 1 is the hello summer effect template. The mobile phone can therefore determine the effect template 2 selected by the user from the user's operation on another template window. For example, after the cell phone receives a click on window 1405, the preview window 1408 in the interface 1403 may display a sample of the sunny effect template. Then, if the mobile phone receives an operation by the user on the control 1409 in the interface 1403, it can determine that the sunny effect template is the selected effect template 2. The mobile phone may then process the original video data 1 with effect template 2 to obtain video data 1 conforming to the style of effect template 2 and, as shown in (c) of fig. 14, display the interface 1401 again, now containing the video data 1 processed based on effect template 2.
In addition, if, during the display of the interface 1403, the mobile phone receives an operation on the control 1409 without having received any operation indicating selection of another effect template, the mobile phone may determine that the user has instructed it to reprocess the original video data 1 using the effect template 1. During the reprocessing, the transition special effect 2 and the transition special effect 3 can again be randomly determined using the matching weight between each transition special effect and the effect template 1, and the newly determined transition special effects are used to reprocess the original video data 1.
It can be understood that the randomly re-determined transition special effect 2 and transition special effect 3 may differ from the transition special effects determined for the effect template 1 the first time. The visual effect of the reprocessed video data 1 therefore also differs, which improves the diversity of one-key film production.
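The weighted random determination described above can be sketched as follows. This is a minimal illustration, not the patented implementation: the transition names and weight values are hypothetical, assuming only that each transition special effect carries a matching weight for the current effect template.

```python
import random

# Hypothetical matching weights between transition special effects and
# effect template 1; a larger weight makes that transition more likely.
MATCHING_WEIGHTS = {
    "superposition": 5,
    "fuzzy": 3,
    "melting": 2,
    "rotation": 1,
}

def pick_transition(weights):
    """Randomly pick one transition special effect, biased by matching weight."""
    names = list(weights)
    return random.choices(names, weights=[weights[n] for n in names], k=1)[0]

# Each reprocessing re-draws transition special effects 2 and 3, so two
# runs over the same video data 1 may yield different visual effects.
transition_2 = pick_transition(MATCHING_WEIGHTS)
transition_3 = pick_transition(MATCHING_WEIGHTS)
```

Because the draw is random on every reprocessing pass, repeated one-key runs over the same source video naturally diversify the output, which is the behavior the paragraph above describes.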
As another example, the mobile phone may receive an operation 7 from the user on the interface 1401, such as a click operation on the music control 1410 in the interface 1401, and in response to the operation 7 replace the video music with different music. The replacement music may be music of the same style as the effect template 2, or a random piece of music; this is not limited here.
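The two replacement cases just mentioned (same-style music or a random piece) can be sketched as below. The music library structure and the `style` field are assumptions introduced purely for illustration.

```python
import random

def pick_replacement_music(current, library, prefer_same_style=True):
    """Pick replacement video music: prefer a different song with the same
    style as the current music; otherwise fall back to any other song."""
    candidates = [m for m in library
                  if m["name"] != current["name"]
                  and (not prefer_same_style or m["style"] == current["style"])]
    others = [m for m in library if m["name"] != current["name"]]
    return random.choice(candidates or others)

# Hypothetical music library; names follow the effect templates in the text.
library = [
    {"name": "hello summer", "style": "fresh"},
    {"name": "sunny", "style": "fresh"},
    {"name": "HAPPY", "style": "upbeat"},
]
new_music = pick_replacement_music({"name": "hello summer", "style": "fresh"}, library)
```

With `prefer_same_style=False` the choice degenerates to a random pick from the remaining songs, matching the "random piece of music" case.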
Additionally, the interface 1401 includes a control 1411 indicating confirmation. As shown in (c) of fig. 14, after receiving an operation on the control 1411 from the user, such as a click operation, the mobile phone saves the processed video data 1; the video data 1 processed based on the effect template is also referred to as video data 2. The mobile phone may further display a video editing interface corresponding to the video data 2, such as the interface 1412 shown in (d) of fig. 14. The interface 1412 may display the video data 2, and the mobile phone can play the video data 2 according to the user's operations in the interface 1412.
In a possible embodiment, the interface 1401 may also include a control indicating cancellation of the effect template, such as the control 1413 shown in (a) of fig. 14. The mobile phone may receive a click operation on the control 1413 from the user, delete the video data 1 processed based on the effect template, and display the interface 803 again. When the mobile phone displays the interface 803 again, the interface 803 still includes the one-key large-film control 1001. If the mobile phone again receives an operation on the one-key large-film control 1001, it may determine a matching effect template anew and process the video data 1 again with the newly determined effect template; for details, refer to the foregoing embodiments, which are not repeated here. Moreover, for the same piece of video data 1, the mobile phone can ensure that the effect templates determined by two adjacent one-key large-film operations differ, improving the diversity of secondary creation of video data.
In another scenario, if the effect template is selected before the video data 1 is captured, the process in which the mobile phone captures the video data 1 differs from the foregoing embodiments, as does the timing that triggers the mobile phone to process the video data 1 with the effect template. The manner in which the effect template is used to add transition special effects to the video data 1, however, is the same. The following briefly introduces the process in which the mobile phone captures the video data 1 with an effect template a already selected and then processes the video data 1 using the effect template a:
as shown in fig. 15 (a), the mobile phone displays the interface 205, that is, the viewfinder interface for multi-lens recording. The interface 205 also includes a micro-movie control 1501, which is used to start the micro-movie function. With the micro-movie function enabled, the user can conveniently create a video work with music, filters, and special effects on the mobile phone. Other viewfinder interfaces for recording video (e.g., the viewfinder interface for single-lens recording) may also include a micro-movie control with the same function.
In some embodiments, when an operation on the micro-movie control 1501, such as a click operation, is received while the mobile phone displays the interface 205, the mobile phone may display the interface 1502 shown in (b) of fig. 15, that is, an eighth interface. The interface 1502 is a guide interface for guiding the user to select an effect template. The interface 1502 includes a plurality of template windows indicating different effect templates, such as the window 1503, the window 1504, the window 1505, and the window 1506. The window 1503 indicates an effect template named hello summer, the window 1504 indicates an effect template named sunny, the window 1505 indicates an effect template named HAPPY, and the window 1506 indicates an effect template named xiao mei.
Of course, besides different video music, different effect templates may also have different filters, stickers, atmosphere special effects, and transition special effects. Under the combined influence of different video music, filters, stickers, atmosphere special effects, and transition special effects, the styles of the produced videos differ. That is, the user can produce video works of different styles by selecting different effect templates. The effect template selected by the user may be referred to as a first effect template, and the template window corresponding to the first effect template is also referred to as a first identifier.
In some embodiments, the user may select effect templates of different video styles while the mobile phone displays the interface 1502. That is, the mobile phone may receive an operation from the user, such as a click operation, on the first identifier of a template window in the interface 1502 and determine the effect template selected by the user, that is, the first effect template. For example, when the mobile phone receives a user click on the window 1504, it may determine that the user has selected the effect template named sunny.
In addition, in other embodiments, a default template may be preset in the mobile phone. For example, the hello summer effect template may be configured in advance as the default template. Thus, when the mobile phone switches from the interface 205 to the interface 1502, the hello summer effect template is in the selected state. If the mobile phone then receives no selection operation for another effect template, it determines that the user has selected the hello summer effect template. If the mobile phone receives a selection operation for another effect template (e.g., the xiao mei effect template), it determines that the user has selected that effect template.
In some embodiments, each effect template corresponds to a sample. The sample is a video created in advance based on the effect template. When an effect template is selected by the user, its sample may be displayed in the preview window 1507, so the user can preview the style of the effect template, which facilitates selection. For example, when the hello summer effect template is selected, the sample of hello summer is played in the preview window 1507.
Additionally, while the mobile phone displays the interface 1502, the user can change the selected effect template by selecting a different template window.
Of course, the mobile phone can also receive a user operation on the control 1508 while displaying the interface 1502. After receiving the operation on the control 1508, the mobile phone may determine the effect template currently in the selected state as the actually selected effect template. For convenience of description, the actually selected effect template is referred to as the effect template a.
After determining the effect template a, the handset may also switch to displaying an interface 1509, also referred to as a ninth interface, as shown in fig. 15 (c). The interface 1509 is a template viewing interface corresponding to the effect template a.
The interface 1509 also includes one or more viewfinder frames. When the mobile phone switches from the interface 1502 to the interface 1509, the number of viewfinder frames in the interface 1509 is related to the effect template a, and the video stream displayed in each viewfinder frame is also related to the effect template a.
In some embodiments, the effect template may also correspond to a default lens mode. The lens modes may include a single-front mode, a single-rear mode, a top-front-bottom-rear mode, a top-rear-bottom-front mode, a top-rear (near) bottom-rear (far) mode, a top-rear (far) bottom-rear (near) mode, a picture-in-picture mode, and the like.
Illustratively, when the single-front mode is enabled, the interface 1509 includes a viewfinder frame for previewing the video stream captured by the front-facing camera.
Also illustratively, when the single-rear mode is enabled, the interface 1509 includes a viewfinder frame for previewing the video stream captured by the rear-facing camera.
When there are a plurality of front cameras and rear cameras, one of the front cameras is the main camera, referred to as the front camera a, and one of the rear cameras is the main camera, referred to as the rear camera a. In the single-front mode, the viewfinder frame displays the video stream captured by the front camera a; in the single-rear mode, it displays the video stream captured by the rear camera a.
Also illustratively, when the top-front-bottom-rear mode is enabled, the interface 1509 includes two viewfinder frames, e.g., the viewfinder frame 1 and the viewfinder frame 2, arranged one above the other in the interface 1509. The upper viewfinder frame 1 displays the video stream captured by a front camera, and the lower viewfinder frame 2 displays the video stream captured by a rear camera. For example, the viewfinder frame 1 may display the video stream captured by the front camera a and the viewfinder frame 2 the video stream captured by the rear camera a; as another example, they may display the video streams captured by other front and rear cameras, respectively. The top-rear-bottom-front mode is similar, except that the viewfinder frame 1 displays the video stream captured by a rear camera and the viewfinder frame 2 the video stream captured by a front camera.
Also illustratively, when the mobile phone includes multiple rear cameras and the top-rear (near) bottom-rear (far) mode is enabled, the interface 1509 includes two viewfinder frames, e.g., the viewfinder frame 1 and the viewfinder frame 2, arranged one above the other in the interface 1509. The viewfinder frame 1 and the viewfinder frame 2 display the video streams captured by two rear cameras, respectively.
It can be understood that the types of the multiple rear cameras installed in the mobile phone may differ; for example, the rear cameras may be any one of, or a combination of, a main camera, a telephoto camera, a wide-angle camera, an ultra-wide-angle camera, a macro camera, and the like. In some examples, different rear cameras correspond to different focal lengths, so the distances they can shoot differ.
In the above example, the upper viewfinder frame 1 may display the video stream of the rear camera with the relatively long focal length, and the lower viewfinder frame 2 that of the rear camera with the relatively short focal length. For example, when the viewfinder frames 1 and 2 display the video streams captured by a rear camera b (telephoto camera) and a rear camera c (wide-angle camera), respectively, the focal length of the telephoto camera is longer than that of the wide-angle camera, so the viewfinder frame 1 displays the video stream captured by the rear camera b and the viewfinder frame 2 the video stream captured by the rear camera c.
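The frame-to-camera assignment for these dual-rear modes can be sketched as follows. The camera names and focal-length values are hypothetical; the rule is the one stated above, with the longer focal length going to the upper viewfinder frame in this mode and the shorter one in the top-rear (far) bottom-rear (near) mode.

```python
# Hypothetical equivalent focal lengths (mm) for the rear cameras above.
REAR_CAMERAS = {"rear_b_telephoto": 80.0, "rear_c_wide_angle": 27.0}

def assign_viewfinder_frames(cameras, long_focal_on_top=True):
    """Return (frame_1_camera, frame_2_camera); frame 1 is the upper frame.

    long_focal_on_top=True matches the top-rear (near) bottom-rear (far)
    example above; False matches the top-rear (far) bottom-rear (near) mode.
    """
    ordered = sorted(cameras, key=cameras.get, reverse=True)  # longest focal first
    long_cam, short_cam = ordered[0], ordered[-1]
    return (long_cam, short_cam) if long_focal_on_top else (short_cam, long_cam)
```

Sorting by focal length rather than hard-coding camera names lets the same rule cover any telephoto/wide/ultra-wide pairing mentioned in the text.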
Of course, the viewfinder frames 1 and 2 can also display other combinations of rear cameras, e.g., main and telephoto, main and wide-angle, main and ultra-wide-angle, main and macro, telephoto and ultra-wide-angle, telephoto and macro, or wide-angle and macro.
Also illustratively, when the mobile phone includes multiple rear cameras and the top-rear (far) bottom-rear (near) mode is enabled, the interface 1509 includes two viewfinder frames, e.g., the viewfinder frame 1 and the viewfinder frame 2, arranged one above the other in the interface 1509. The upper viewfinder frame 1 may display the rear camera with the relatively short focal length, and the lower viewfinder frame 2 the rear camera with the relatively long focal length.
Still further illustratively, when the picture-in-picture mode is enabled, the interface 1509 includes two viewfinder frames, e.g., the viewfinder frame 1 and the viewfinder frame 2. The viewfinder frame 2 is smaller than the viewfinder frame 1 and may be embedded in the viewfinder frame 1. The viewfinder frame 1 may display the video stream captured by the rear camera and the viewfinder frame 2 that captured by the front camera, or vice versa. The viewfinder frame 1 may also display a rear camera with a relatively long focal length and the viewfinder frame 2 a rear camera with a relatively short focal length. That is, the cameras corresponding to the viewfinder frames 1 and 2 in the picture-in-picture mode can be determined by the user. In some examples, by default, when the picture-in-picture mode is enabled, the viewfinder frame 1 displays the video stream captured by the rear camera and the viewfinder frame 2 the video stream captured by the front camera.
In the foregoing examples, the lens modes described apply when the mobile phone is in the portrait state. When the mobile phone is in the landscape state, the lens modes may further include a left-front-right-rear mode, a left-rear-right-front mode, a left-rear (near) right-rear (far) mode, a left-rear (far) right-rear (near) mode, and the like.
The left-front-right-rear, left-rear-right-front, left-rear (near) right-rear (far), and left-rear (far) right-rear (near) modes are similar to the top-front-bottom-rear, top-rear-bottom-front, top-rear (near) bottom-rear (far), and top-rear (far) bottom-rear (near) modes described above, except that the two corresponding viewfinder frames, e.g., the viewfinder frame 3 and the viewfinder frame 4, are arranged side by side in the interface 1509. The viewfinder frame 3 corresponds to the viewfinder frame 1, and the viewfinder frame 4 corresponds to the viewfinder frame 2. For example, the left-front-right-rear mode is similar to the top-front-bottom-rear mode: the viewfinder frame 3 displays the video stream of the front camera, and the viewfinder frame 4 displays the video stream of the rear camera.
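The portrait-to-landscape correspondence just described can be sketched as a simple name mapping. The mode identifiers are hypothetical; the point is only that each landscape mode mirrors a portrait mode, with viewfinder frames 3 and 4 taking the roles of frames 1 and 2.

```python
def to_landscape(portrait_mode):
    """Map a portrait lens-mode name to its landscape counterpart, e.g.
    "top_front_bottom_rear" -> "left_front_right_rear"; viewfinder frame 3
    then plays the role of frame 1, and frame 4 the role of frame 2."""
    return portrait_mode.replace("top", "left").replace("bottom", "right")

# The dual-frame portrait modes listed in the text, as hypothetical identifiers.
PORTRAIT_MODES = [
    "top_front_bottom_rear",
    "top_rear_bottom_front",
    "top_rear_near_bottom_rear_far",
    "top_rear_far_bottom_rear_near",
]
LANDSCAPE_MODES = [to_landscape(m) for m in PORTRAIT_MODES]
```

Deriving the landscape set from the portrait set keeps the two lists in lockstep, which matches the text's "similar to the above examples" framing.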
As an example, when the lens mode corresponding to the hello summer effect template is top-rear-bottom-front and the hello summer effect template is determined to be the effect template a, as shown in (c) of fig. 15, the interface 1509 displayed by the mobile phone includes the viewfinder frame 206 and the viewfinder frame 207, with the viewfinder frame 206 above the viewfinder frame 207. The viewfinder frame 206 displays the video stream captured by the rear camera a, and the viewfinder frame 207 displays the video stream captured by the front camera a.
In some embodiments, the interface 1509 also includes a lens-switching control, such as the control 1510. The lens-switching control assists the user in replacing the default lens mode of the effect template a with another lens mode.
In addition, during the display of the interface 1509, the mobile phone can also play the video music corresponding to the effect template a. For example, the video music corresponding to the hello summer effect template is the song hello summer; in a scenario where the mobile phone determines that the hello summer effect template is the effect template a, it can play the song hello summer while displaying the interface 1509. As shown in fig. 15 (c), the duration of the song hello summer is 15s; after the song has played for 15s, the mobile phone can play it in a loop.
In some embodiments, the interface 1509 also includes a music-switching control, such as the control 1511. The music-switching control assists the user in replacing the video music corresponding to the effect template a with alternative music. After the mobile phone replaces the video music corresponding to the effect template a in response to the user's instruction, it plays the replacement music. In some examples, the alternative music corresponding to different effect templates may differ; the alternative music and the video music corresponding to the effect template may be songs with similar melodies, the same beat, or the same style. In other examples, the alternative music corresponding to different effect templates may be the same; for example, it may be all music stored in the mobile phone, or all music the mobile phone can search for.
While the interface 1509 is displayed, the mobile phone has not yet actually started capturing the video data 1. With the video music playing, however, the user can preview the viewing effect of the current shot through the interface 1509.
In some embodiments, the interface 1509 also includes a one-key filming control, also referred to as a sixth control, such as the control 1512. After the mobile phone receives an operation on the control 1512 from the user, such as a click operation, it displays a recording interface, also referred to as a tenth interface, such as the interface 1513 shown in fig. 15 (d), and starts formally capturing the video data 1, also referred to as third video data. The duration of the captured video data 1 does not exceed the set duration of the effect template a. For example, the set duration may equal the duration of the video music corresponding to the effect template a, or may be slightly shorter than the duration of the video music.
In addition, the tenth interface also includes a second control indicating that shooting is paused. Illustratively, the electronic device receives a user operation on the second control at a first time point while recording the third video data and, in response, displays an eleventh interface. The eleventh interface is an interface for pausing shooting and includes a third control indicating that shooting is to continue. In response to the user's operation on the third control, the electronic device displays the tenth interface again and determines a fifth video frame and a sixth video frame located before and after the first time point in the third video data. The fifth video frame is the last frame captured before the pause instruction is received, and the sixth video frame is the first frame captured after the instruction to resume shooting is received.
Illustratively, the duration of hello summer is 15s, and the set duration of the hello summer effect template is also 15s. When shooting the video data 1, once the shooting time reaches 15s, the mobile phone can automatically stop shooting to obtain the video data 1 and trigger itself to process the video data 1 using the effect template a.
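The automatic stop at the set duration can be sketched as a simple frame-count cap. The frame rate and the tie between the set duration and the 15s music length are assumptions made only for illustration.

```python
def record_clip(frame_source, set_duration_s=15, fps=30):
    """Capture frames until the shooting time reaches the effect template's
    set duration (here the 15 s length of the video music), then stop
    automatically; the caller then processes the clip with the template."""
    max_frames = int(set_duration_s * fps)
    clip = []
    for frame in frame_source:
        if len(clip) >= max_frames:
            break  # shooting time reached the set duration: stop capturing
        clip.append(frame)
    return clip
```

If the user stops early (e.g., via the pause control described below 15s), the loop simply ends with fewer frames, and processing is triggered the same way.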
Also illustratively, during recording, the mobile phone displays a recording interface, such as the interface 1513 shown in fig. 15 (d). The interface 1513 includes a control indicating that shooting is paused, such as the control 1514. If the duration of shooting the video data 1 has not reached 15s, the mobile phone may receive an operation on the control 1514 from the user, such as a long-press operation. After receiving the operation on the control 1514, the mobile phone stops capturing, thereby obtaining the video data 1, and triggers itself to process the video data 1 using the effect template a.
For example, as shown in fig. 16, after the shooting duration of the video data 1 reaches the set duration, the mobile phone may process the video data 1 according to the effect template a and display a preview interface for playing the video work, which is also referred to as a twelfth interface, e.g., interface 1601.
In some embodiments, after the recording of the third video data is completed, a twelfth interface is displayed. The twelfth interface displays fourth video data, which includes the video frames of the third video data, the first music, and the first transition special effect. The fourth video data further includes a third freeze frame and a fourth freeze frame, where the third freeze frame is the same as the fifth video frame and the fourth freeze frame is the same as the sixth video frame.
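A minimal sketch of the freeze-frame construction described above, with frames modeled as plain list items: the third freeze frame duplicates the fifth video frame (the last one before the pause) and the fourth freeze frame duplicates the sixth video frame (the first one after resuming), so the transition special effect can be superimposed at the pause boundary. The list representation is an assumption for illustration only.

```python
def insert_pause_freeze_frames(frames, pause_idx):
    """frames[pause_idx] is the fifth video frame and frames[pause_idx + 1]
    the sixth video frame; return a new list with the third and fourth
    freeze frames (identical copies of those two) inserted at the pause
    boundary, leaving every original frame in place."""
    fifth, sixth = frames[pause_idx], frames[pause_idx + 1]
    third_freeze, fourth_freeze = fifth, sixth  # freeze frames duplicate their sources
    return frames[:pause_idx + 1] + [third_freeze, fourth_freeze] + frames[pause_idx + 1:]
```

Because only duplicated frames are inserted, the original captured frames survive unchanged, which is consistent with the text's point that adding transitions does not discard recorded content.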
In some embodiments, reference may be made to the foregoing embodiments for a process of processing video data 1 by using the effect template a, which is not described herein again.
Therefore, whether the effect template is selected before or after shooting, the mobile phone can automatically process the video data so that the processed video work matches the chosen effect template. In addition, while processing the video with the effect template, the mobile phone can automatically replace the original transition special effects in the video and flexibly add transition special effects. Adding a transition special effect does not affect the video length, which improves the quality and watchability of the processed video, reduces the human-computer interaction cost of video editing, and improves the user experience.
An embodiment of the present application further provides an electronic device, which may include a memory and one or more processors. The memory is coupled to the processor and stores computer program code comprising computer instructions. When the computer instructions are executed by the processor, the electronic device performs the steps performed by the mobile phone in the foregoing embodiments. Of course, the electronic device includes, but is not limited to, the above memory and one or more processors. For example, for the structure of the electronic device, refer to the structure of the mobile phone shown in fig. 1.
An embodiment of the present application further provides a chip system, which can be applied to the electronic device in the foregoing embodiments. As shown in fig. 17, the chip system includes at least one processor 2201 and at least one interface circuit 2202. The processor 2201 may be a processor in the electronic device described above. The processor 2201 and the interface circuit 2202 may be interconnected by wires. The processor 2201 may receive computer instructions from the memory of the electronic device via the interface circuit 2202 and execute them. When the computer instructions are executed by the processor 2201, the electronic device performs the steps performed by the mobile phone in the foregoing embodiments. Of course, the chip system may further include other discrete devices, which is not specifically limited in this embodiment of the present application.
It is clear to those skilled in the art from the foregoing description that, for convenience and simplicity of description, the above division into functional modules is merely an example; in practical applications, the functions may be allocated to different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to perform all or some of the functions described above. For the specific working processes of the system, apparatus, and units described above, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.
Each functional unit in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of the embodiments of the present application, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, or a network device) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes: flash memory, a removable hard drive, a read-only memory, a random access memory, a magnetic disk, an optical disk, and the like.
The above description is only a specific implementation of the embodiments of the present application, but the scope of the embodiments of the present application is not limited thereto, and any changes or substitutions within the technical scope disclosed in the embodiments of the present application should be covered within the scope of the embodiments of the present application. Therefore, the protection scope of the embodiments of the present application shall be subject to the protection scope of the claims.

Claims (12)

1. A transition special effect adding method is applied to electronic equipment and comprises the following steps:
the electronic equipment displays a first interface; wherein the first interface comprises a first thumbnail of first video data;
the electronic equipment receives a first operation of a user on the first thumbnail;
the electronic equipment responds to the first operation and displays a second interface; the second interface is a video editing interface of the first video data; the second interface comprises a one-key large-film control;
after receiving a second operation of the one-key large-film control by the user, displaying a third interface by the electronic equipment; the third interface is used for displaying second video data; the second video data comprises a video frame of the first video data and a first transition special effect;
when the first transition special effect is any one of a superposition transition, a fuzzy transition, or a melting transition, the second video data further comprises a first freeze frame and a second freeze frame; the first freeze frame is the same as a first video frame, the second freeze frame is the same as a second video frame, and the first video frame and the second video frame are two adjacent frames in the first video data; the first freeze frame overlaps the second video frame, the second freeze frame overlaps the first video frame, and the first transition special effect is superimposed over the first video frame and the second video frame.
2. The method of claim 1,
before the electronic device displays the first interface, the method further comprises:
the electronic equipment displays a fourth interface; the fourth interface is a view preview interface provided by a camera application; the fourth interface comprises a first control for indicating starting to shoot the video;
the electronic equipment receives a third operation of the first control by a user;
the electronic equipment responds to the third operation, displays a fifth interface and starts to record the first video data; wherein the fifth interface is a video recording interface; the fifth interface comprises a second control indicating that shooting is suspended;
when the first time point of the first video data is recorded, the electronic equipment responds to the operation of the user on the second control and displays a sixth interface; wherein the sixth interface is an interface for pausing shooting; the sixth interface comprises a third control for indicating to continue shooting;
the electronic equipment receives a fourth operation of the user on the third control;
the electronic equipment responds to the fourth operation, displays the fifth interface again, and determines the first video frame and the second video frame which are positioned before and after the first time point in the first video data; wherein the fifth interface includes a fourth control indicating to stop shooting;
the electronic equipment receives a fifth operation of the user on the fourth control;
the electronic device displays a first interface, comprising: the electronic equipment responds to the fifth operation and displays the first interface; the first interface is also a viewing preview interface provided by the camera application.
3. The method of claim 2, wherein the fifth interface is a video recording interface in a first shot mode, the fifth interface comprising a fifth control indicating a switch shot mode;
before the electronic device receives the fifth operation, the method further includes:
when the second time point of the first video data is recorded, the electronic equipment responds to a sixth operation of the user on the fifth control, and freezes a third video frame to obtain a multi-frame first substitute frame; wherein the third video frame is a last frame of video frame collected before the sixth operation is received; the sixth operation is an operation of instructing switching to a second lens mode;
after the electronic device displays a seventh interface, canceling the freeze frame for the third video frame; wherein the seventh interface is a video recording interface in the second lens mode;
after the electronic device receives the fifth operation, the method further comprises: the electronic device superimposes a second transition special effect on the first replacement frame.
4. The method of claim 3, wherein the second video data further comprises: a second substitute frame, a first music and a third transition special effect; the third transition special effect corresponds to the first music; the third transition special effect is superimposed on the second replacement frame, the second replacement frame being used to replace the first replacement frame in the first video data.
5. The method of claim 4, wherein prior to the electronic device displaying the third interface, the method further comprises:
the electronic device deletes the first substitute frames; wherein the first substitute frames are located between the third video frame and a fourth video frame; the fourth video frame is the first video frame captured after the electronic device displays the seventh interface;
the electronic device determines a plurality of second substitute frames; wherein the second substitute frames include frames identical to the third video frame and frames identical to the fourth video frame; the number of second substitute frames is not less than the number of first substitute frames;
and the electronic device superimposes the third transition special effect on the second substitute frames to obtain the second video data.
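The substitute-frame replacement described in claims 4-5 above can be sketched as follows. This is purely an illustrative sketch, not the claimed implementation: the function name, the list-of-frames representation, and the half-and-half split between cloned frames are all hypothetical; the claims only require that the replacements be clones of the frames bracketing the lens switch and at least as numerous as the frames they replace.

```python
def replace_substitute_frames(frames, switch_start, switch_end):
    """Replace the padding frames recorded during a lens-mode switch
    (indices switch_start..switch_end-1, the "first substitute frames")
    with freeze frames cloned from the last frame before the switch
    (the "third video frame") and the first frame after it (the
    "fourth video frame")."""
    third_frame = frames[switch_start - 1]   # last frame before the switch
    fourth_frame = frames[switch_end]        # first frame after the switch
    num_removed = switch_end - switch_start
    # At least as many replacements as removed frames (claim 5),
    # half cloned from each side of the switch (hypothetical split).
    half = (num_removed + 1) // 2
    second_substitutes = [third_frame] * half + [fourth_frame] * half
    return frames[:switch_start] + second_substitutes + frames[switch_end:]
```

A transition special effect would then be superimposed on the returned substitute span to obtain the second video data.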
6. The method of claim 1, wherein
before the electronic device displays the first interface, the method further comprises: the electronic device displays a main interface; the main interface comprises an icon of a gallery application; the electronic device receives a seventh operation of the user on the icon of the gallery application;
the electronic device displaying a first interface comprises: the electronic device, in response to the seventh operation, displays the first interface, the first interface being an application interface provided by the gallery application.
7. The method of any of claims 1-6, wherein prior to the electronic device displaying the third interface, the method further comprises:
the electronic device, in response to the second operation, determines a first effect template from a plurality of preconfigured effect templates; the first effect template comprises the first music;
the electronic device determines the first transition special effect corresponding to the first music;
when the first transition special effect is any one of an overlay transition, a blur transition, or a dissolve transition, the electronic device superimposes the first freeze frame on the second video frame, and superimposes the second freeze frame under the first video frame;
the electronic device superimposes the first transition special effect on the first video frame and the second video frame.
8. The method of claim 7, wherein the first effect template corresponds to a first style; and the determining a first effect template from a plurality of preconfigured effect templates comprises:
the electronic device determines, using a preset artificial intelligence model, that the first video data matches the first style, and determines the first effect template from effect templates belonging to the first style;
or, the electronic device randomly determines the first effect template from the plurality of preconfigured effect templates.
9. The method of claim 7, wherein the electronic device determining the first transition special effect corresponding to the first music comprises:
the electronic device determines, from a plurality of preset transition special effects, a first transition special effect that has an association identifier with the first music;
or, the electronic device determines the first transition special effect from the plurality of preset transition special effects based on matching weights;
wherein each preset transition special effect corresponds to one matching weight, and the matching weight is a quantized parameter of the degree of adaptation between the first music and that preset transition special effect.
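The two selection branches of claim 9 above (an association identifier takes priority; otherwise a weighted draw over preset transitions) can be sketched as below. This is an illustrative sketch only: the dictionary keys, the function name, and the use of a proportional random draw to realize the "matching weight" are hypothetical assumptions, not the claimed implementation.

```python
import random

def pick_transition(music, transitions):
    """Select a transition for the given music.

    Each transition is a dict with a hypothetical 'associated_music'
    identifier and a 'weight' quantifying how well it fits the music.
    A transition explicitly associated with the music wins outright;
    otherwise one is drawn with probability proportional to its weight.
    """
    tagged = [t for t in transitions if t.get("associated_music") == music]
    if tagged:
        return tagged[0]
    weights = [t.get("weight", 1.0) for t in transitions]
    return random.choices(transitions, weights=weights, k=1)[0]
```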
10. A transition special effect adding method is applied to electronic equipment and comprises the following steps:
the electronic device displays an eighth interface; wherein the eighth interface comprises a first identifier indicating a first effect template; the first effect template comprises first music and a first transition special effect corresponding to the first music;
the electronic device displays a ninth interface in response to an operation of the user on the first identifier; the ninth interface is a viewfinder preview interface corresponding to the first effect template; the ninth interface comprises a sixth control indicating to start shooting;
the electronic device, in response to an operation of the user on the sixth control, displays a tenth interface and starts to record third video data; the tenth interface is a video recording interface corresponding to the first effect template; the tenth interface comprises a second control indicating to pause shooting;
when a first time point of the third video data is being recorded, the electronic device displays an eleventh interface in response to an operation of the user on the second control; wherein the eleventh interface is an interface in which shooting is paused; the eleventh interface comprises a third control indicating to continue shooting;
the electronic device, in response to an operation of the user on the third control, displays the tenth interface again and determines a fifth video frame and a sixth video frame located before and after the first time point in the third video data;
after recording of the third video data ends, the electronic device displays a twelfth interface; the twelfth interface is used to display fourth video data; the fourth video data comprises video frames of the third video data, the first music, and the first transition special effect;
when the first transition special effect is any one of an overlay transition, a blur transition, or a dissolve transition, the fourth video data further comprises a third freeze frame and a fourth freeze frame; the third freeze frame is identical to the fifth video frame, the fourth freeze frame is identical to the sixth video frame, the third freeze frame overlaps the sixth video frame, the fourth freeze frame overlaps the fifth video frame, and the first transition special effect is superimposed on the fifth video frame and the sixth video frame.
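The overlay-type transition of claim 10 above, in which a freeze frame cloned from the pre-pause frame is layered over the post-pause frame and the two are blended, can be sketched as a simple cross-fade. This is an illustrative sketch only: the flat grayscale-pixel representation, the function name, and the linear alpha ramp are hypothetical; a real implementation would operate on decoded image buffers and may use a different blending curve.

```python
def dissolve_transition(fifth_frame, sixth_frame, steps=8):
    """Cross-fade from the freeze frame of the pre-pause frame
    (fifth_frame) to the post-pause frame (sixth_frame).

    Frames are flat lists of grayscale pixel values; each output
    frame blends the two inputs, with the incoming frame's weight
    (alpha) rising linearly across the transition.
    """
    blended = []
    for i in range(1, steps + 1):
        alpha = i / (steps + 1)  # weight of the incoming (sixth) frame
        blended.append([
            round((1 - alpha) * a + alpha * b)
            for a, b in zip(fifth_frame, sixth_frame)
        ])
    return blended
```

The blended frames would be spliced between the fifth and sixth video frames so the pause point plays as a smooth dissolve rather than a hard cut.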
11. An electronic device, characterized in that the electronic device comprises one or more processors and memory; the memory is coupled to the processor, the memory for storing computer program code, the computer program code comprising computer instructions, which when executed by the one or more processors, cause the one or more processors to perform the method of any of claims 1-10.
12. A computer storage medium comprising computer instructions that, when executed on an electronic device, cause the electronic device to perform the method of any of claims 1-10.
CN202210056663.XA 2021-06-16 2022-01-18 Transition special effect adding method and electronic equipment Pending CN115484423A (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN2021106767093 2021-06-16
CN202110676709 2021-06-16
CN2021114341317 2021-11-29
CN202111434131 2021-11-29

Publications (1)

Publication Number Publication Date
CN115484423A 2022-12-16

Family

ID=84420779

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210056663.XA Pending CN115484423A (en) 2021-06-16 2022-01-18 Transition special effect adding method and electronic equipment

Country Status (1)

Country Link
CN (1) CN115484423A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115996322A (en) * 2023-03-21 2023-04-21 深圳市安科讯实业有限公司 Image data management method for digital video shooting

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104144301A (en) * 2014-07-30 2014-11-12 厦门美图之家科技有限公司 Method for transition special effects on basis of mixed modes
CN105516618A (en) * 2014-09-27 2016-04-20 北京金山安全软件有限公司 Method and device for making video and communication terminal
WO2016095369A1 (en) * 2014-12-18 2016-06-23 中兴通讯股份有限公司 Screen recording method and device
WO2016124095A1 (en) * 2015-02-04 2016-08-11 腾讯科技(深圳)有限公司 Video generation method, apparatus and terminal
CN106210531A (en) * 2016-07-29 2016-12-07 广东欧珀移动通信有限公司 Video generation method, device and mobile terminal
CN108111752A (en) * 2017-12-12 2018-06-01 北京达佳互联信息技术有限公司 video capture method, device and mobile terminal
CN108495171A (en) * 2018-04-03 2018-09-04 优视科技有限公司 Method for processing video frequency and its device, storage medium, electronic product
WO2020078273A1 (en) * 2018-10-15 2020-04-23 华为技术有限公司 Photographing method, and electronic device
CN111541936A (en) * 2020-04-02 2020-08-14 腾讯科技(深圳)有限公司 Video and image processing method and device, electronic equipment and storage medium
CN111835986A (en) * 2020-07-09 2020-10-27 腾讯科技(深圳)有限公司 Video editing processing method and device and electronic equipment
CN111866404A (en) * 2019-04-25 2020-10-30 华为技术有限公司 Video editing method and electronic equipment
WO2021052292A1 (en) * 2019-09-18 2021-03-25 华为技术有限公司 Video acquisition method and electronic device


Similar Documents

Publication Publication Date Title
CN113475092B (en) Video processing method and mobile device
CN112532857A (en) Shooting method and equipment for delayed photography
CN115379112A (en) Image processing method and related device
CN113727017B (en) Shooting method, graphical interface and related device
CN113727015B (en) Video shooting method and electronic equipment
WO2021223500A1 (en) Photographing method and device
CN115914826A (en) Image content removing method and related device
WO2023160241A1 (en) Video processing method and related device
WO2023134583A1 (en) Video recording method and apparatus, and electronic device
CN113099146B (en) Video generation method and device and related equipment
CN115484423A (en) Transition special effect adding method and electronic equipment
WO2022262537A1 (en) Transition processing method for video data and electronic device
CN115484400B (en) Video data processing method and electronic equipment
CN115484387A (en) Prompting method and electronic equipment
CN115734032A (en) Video editing method, electronic device and storage medium
CN115442509A (en) Shooting method, user interface and electronic equipment
CN115225756A (en) Method for determining target object, shooting method and device
CN115484425A (en) Transition special effect determination method and electronic equipment
CN114615421B (en) Image processing method and electronic equipment
WO2023231696A1 (en) Photographing method and related device
WO2023231616A9 (en) Photographing method and electronic device
EP4277257A1 (en) Filming method and electronic device
CN115484390A (en) Video shooting method and electronic equipment
CN115811656A (en) Video shooting method and electronic equipment
CN115484392A (en) Video shooting method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination