CN115484425A - Transition special effect determination method and electronic equipment - Google Patents

Transition special effect determination method and electronic equipment

Info

Publication number
CN115484425A
Authority
CN
China
Prior art keywords
transition
special effect
video
video data
interface
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210057018.XA
Other languages
Chinese (zh)
Inventor
牛思月
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Honor Device Co Ltd
Original Assignee
Honor Device Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Honor Device Co Ltd filed Critical Honor Device Co Ltd
Publication of CN115484425A
Legal status: Pending

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00 Details of television systems
    • H04N 5/76 Television signal recording
    • H04N 5/91 Television signal processing therefor
    • H04N 5/915 Television signal processing therefor for field- or frame-skip recording or reproducing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformation in the plane of the image
    • G06T 3/40 Scaling the whole image or part thereof
    • G06T 3/4038 Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N 21/44016 Processing of video elementary streams involving splicing one content stream with another content stream, e.g. for substituting a video clip
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N 21/83 Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N 21/845 Structuring of content, e.g. decomposing content into time segments
    • H04N 21/8456 Structuring of content by decomposing the content in the time domain, e.g. in time segments

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

The present application provides a transition special effect determination method and an electronic device, and relates to the field of terminal technologies. It addresses the problem of low human-computer interaction efficiency in video editing. The specific scheme is as follows: displaying a first interface; receiving a selection operation by a user on a first identifier; displaying a second interface in response to the selection operation; receiving a first operation by the user on a first control; starting to record first video data in response to the first operation; and displaying a third interface after the first video data is recorded. The second video data includes: video frames of the first video data, first music, a first transition special effect, and a second transition special effect; the first transition special effect is superimposed on a video frame corresponding to a first time point in the first video data; the second transition special effect is superimposed on a video frame corresponding to a second time point in the first video data, the second time point being after the first time point; and the second transition special effect is one of a plurality of preset transition special effects.

Description

Transition special effect determination method and electronic equipment
The present application claims priority to the Chinese patent application entitled "A method for user video creation based on story line mode and electronic device", filed with the China National Intellectual Property Administration on June 16, 2021 under application number 2021106709.3, the entire contents of which are incorporated herein by reference.
The present application also claims priority to the Chinese patent application entitled "A transition special effect determination method and electronic device", filed with the China National Intellectual Property Administration on November 29, 2021 under application number 202111434610.9, the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the field of terminal technologies, and in particular, to a method for determining a transition special effect and an electronic device.
Background
With the development of electronic technology, electronic devices such as mobile phones and tablet computers are generally equipped with multiple cameras, such as a front camera, a rear camera, and a wide-angle camera. The multiple cameras make it convenient for users to shoot video works with the electronic device. After finishing shooting a video with the electronic device, the user can obtain a more watchable video work by adding special effects to the video. However, manually adding transition special effects to a video still suffers from low human-computer interaction efficiency.
Disclosure of Invention
The embodiment of the application provides a method for determining a transition special effect and electronic equipment, which are used for improving the man-machine interaction efficiency of adding the transition special effect.
To achieve the above objective, the following technical solutions are adopted:
In a first aspect, a transition special effect determination method provided in an embodiment of the present application is applied to an electronic device, and the method includes: the electronic device displays a first interface, where the first interface includes a first identifier indicating a first shooting template; the first shooting template includes first music, and the first music corresponds to a first transition special effect; the electronic device receives a selection operation by a user on the first identifier; the electronic device displays a second interface in response to the selection operation, where the second interface is a recording preview interface that includes a first control for indicating the start of shooting; the electronic device receives a first operation by the user on the first control; the electronic device starts recording first video data in response to the first operation; after the first video data is recorded, the electronic device displays a third interface, where the third interface is used to display second video data; the second video data includes: video frames of the first video data, the first music, the first transition special effect, and a second transition special effect; the first transition special effect is superimposed on a video frame corresponding to a first time point in the first video data; the second transition special effect is superimposed on a video frame corresponding to a second time point in the first video data, the second time point being after the first time point; and the second transition special effect is one of a plurality of preset transition special effects.
In the above embodiment, after a shooting template (e.g., the first shooting template) is selected, the first video data is recorded. Once the recording finishes, the second video data is created based on the first music and the transition special effect (the first transition special effect) corresponding to the first shooting template. The created second video data is configured with the first music and further includes the first transition special effect and the second transition special effect. The first transition special effect is fixedly associated with the first music, whereas the second transition special effect is only loosely associated with it and its type is random. In this way, the transition special effects match the first music while remaining diverse, which enriches the content of the created video data and reduces the likelihood that the user requests rework. Moreover, creating the second video data requires only simple user operations, which improves the human-computer interaction efficiency of video creation.
In some possible embodiments, before the electronic device displays the third interface, the method further includes: the electronic device determines the second transition special effect from a plurality of preset transition special effects based on matching weights; each preset transition special effect is assigned a matching weight, which is a quantified parameter of the degree of fit between the first music and that preset transition special effect; and the plurality of preset transition special effects include the first transition special effect.
In the above embodiment, the second transition special effect is drawn at random based on the matching weights, so that the result balances randomness against relevance to the first music. The second transition special effect therefore fits the first music configured in the second video data better, which also improves the quality of the video content.
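As a sketch of the weighted draw described above, the selection can be modeled as follows. The effect names and weight values here are hypothetical illustrations, not taken from the patent:

```python
import random

# Hypothetical matching weights: the quantified degree of fit between
# the first music and each preset transition effect (higher = better fit).
MATCHING_WEIGHTS = {
    "rotation": 3.0,       # fits the first music well, drawn often
    "blur": 2.0,
    "dissolve": 2.0,
    "fade_to_black": 1.0,
    "zoom_in": 0.5,        # fits poorly, drawn rarely
}

def pick_second_transition(weights):
    """Randomly pick one preset transition effect, biased by its
    matching weight so that effects fitting the music are favored."""
    effects = list(weights)
    return random.choices(effects, weights=[weights[e] for e in effects], k=1)[0]
```

A higher-weight effect is drawn more often, so the result is random yet still correlated with the first music, which is exactly the balance the embodiment describes.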
In some possible embodiments, the second video data further includes a third transition special effect corresponding to the first music, the third transition special effect being superimposed on a video frame corresponding to a third time point in the first video data, the third time point being located between the first time point and the second time point.
In some possible embodiments, the second video data further includes a fourth transition special effect, the fourth transition special effect is superimposed on a video frame corresponding to a fourth time point in the first video data, and the fourth time point is located after the second time point; the fourth transition effect is one of a plurality of preset transition effects, or one of the first transition effect, the second transition effect, and the third transition effect.
In some possible embodiments, the first time point, the second time point, the third time point, and the fourth time point are all video frame division points in the first video data.
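Since each time point coincides with a video frame division point, mapping a time point to the frame on which a transition effect is superimposed reduces to a frame-rate calculation. The following is a hypothetical sketch assuming a constant frame rate; the patent does not specify this mapping:

```python
def frame_index_at(time_point_s, fps=30.0):
    """Index of the video frame at which a transition effect is
    superimposed, assuming a constant frame rate (hypothetical)."""
    return round(time_point_s * fps)
```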
In some possible embodiments, the first music further corresponds to a maximum number of transition types; before the electronic device displays the third interface, the method further includes: the electronic device determines that the number of distinct types among the first transition special effect, the second transition special effect, and the third transition special effect is equal to the maximum number of transition types; the electronic device then determines a fourth transition special effect from among the first transition special effect, the second transition special effect, and the third transition special effect based on the matching weights; each preset transition special effect is assigned a matching weight, which is a quantified parameter of the degree of fit between the first music and that preset transition special effect; and the plurality of preset transition special effects include the first transition special effect and the third transition special effect.
In the above embodiment, the diversity of the transition special effects added to the second video data is ensured while avoiding adding so many different transition special effects that the content of the second video data becomes cluttered. This reduces the likelihood that the user manually changes the transition special effect type, thereby improving the human-computer interaction efficiency of video production.
In some possible embodiments, the first music further corresponds to a maximum number of transition types; before the electronic device displays the third interface, the method further includes: the electronic device determines that the number of distinct types among the first transition special effect, the second transition special effect, and the third transition special effect is less than the maximum number of transition types; the electronic device then determines a fourth transition special effect from the plurality of preset transition special effects based on the matching weights; each preset transition special effect is assigned a matching weight, which is a quantified parameter of the degree of fit between the first music and that preset transition special effect; and the plurality of preset transition special effects include the first transition special effect and the third transition special effect.
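The two branches described in the embodiments above (the distinct-type count at the cap versus below it) can be combined into a single sketch. The function, names, and weights are hypothetical illustrations, not the patent's implementation:

```python
import random

def pick_fourth_transition(used, presets, weights, max_types):
    """Pick the fourth transition effect.

    If the distinct types already used (the first, second, and third
    transition effects) have reached the maximum number of transition
    types for the music, reuse one of them; otherwise draw from the
    full preset pool. Both draws are biased by the matching weights.
    """
    pool = sorted(set(used)) if len(set(used)) >= max_types else list(presets)
    return random.choices(pool, weights=[weights[e] for e in pool], k=1)[0]
```

This keeps the added transitions diverse without letting the number of distinct effect types exceed the per-music cap.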
In some possible embodiments, at least two of the first transition special effect, the second transition special effect, and the third transition special effect belong to the same category.
In some possible embodiments, when the first video data is shot in landscape orientation, the plurality of preset transition special effects include: rotation transition, fold transition, blur transition, dissolve transition, fade-to-black transition, fade-to-white transition, zoom-in transition, zoom-out transition, slide-up transition, and slide-down transition; when the first video data is shot in portrait orientation, the plurality of preset transition special effects include: slide-left transition, slide-right transition, rotation transition, overlay transition, blur transition, dissolve transition, fade-to-black transition, fade-to-white transition, zoom-in transition, and zoom-out transition.
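A minimal sketch of selecting the preset pool by orientation follows. The two pools mirror the landscape and portrait lists above; the effect-name strings themselves are illustrative:

```python
# Hypothetical mapping from video orientation to the preset transition
# pool, following the landscape and portrait lists described above.
PRESETS_BY_ORIENTATION = {
    "landscape": ["rotation", "fold", "blur", "dissolve", "fade_to_black",
                  "fade_to_white", "zoom_in", "zoom_out", "slide_up", "slide_down"],
    "portrait": ["slide_left", "slide_right", "rotation", "overlay", "blur",
                 "dissolve", "fade_to_black", "fade_to_white", "zoom_in", "zoom_out"],
}

def preset_pool(frame_width, frame_height):
    """Pick the preset transition pool from the recorded frame dimensions."""
    orientation = "landscape" if frame_width >= frame_height else "portrait"
    return PRESETS_BY_ORIENTATION[orientation]
```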
In a second aspect, a transition special effect determination method provided in an embodiment of the present application is applied to an electronic device, and the method includes: the electronic device displays a fourth interface, where the fourth interface includes a thumbnail of third video data; the electronic device receives a selection operation by a user on the thumbnail of the third video data; the electronic device displays a fifth interface in response to the selection operation, where the fifth interface is a video editing interface of the third video data and includes a "one-tap blockbuster" control; the electronic device receives a second operation by the user on the "one-tap blockbuster" control; the electronic device determines, in response to the second operation, a first shooting template matching the third video data, where the first shooting template includes first music and the first music corresponds to a first transition special effect; after determining the first shooting template, the electronic device displays a sixth interface, where the sixth interface is used to display fourth video data; the fourth video data includes: video frames of the third video data, the first music, the first transition special effect, and a second transition special effect; the first transition special effect is superimposed on a video frame corresponding to a first time point in the third video data; the second transition special effect is superimposed on a video frame corresponding to a second time point in the third video data, the second time point being after the first time point; and the second transition special effect is one of a plurality of preset transition special effects.
In the above embodiment, for saved video data (e.g., the third video data), the user can trigger creation of the fourth video data on the basis of the third video data by selecting the first shooting template. The created fourth video data is configured with the first music and further includes the first transition special effect and the second transition special effect. In addition, creating the fourth video data requires only simple user operations, which improves the human-computer interaction efficiency of video creation.
In some possible embodiments, before the electronic device displays the sixth interface, the method further includes: the electronic device determines the second transition special effect from the plurality of preset transition special effects based on matching weights; each preset transition special effect is assigned a matching weight, which is a quantified parameter of the degree of fit between the first music and that preset transition special effect; and the plurality of preset transition special effects include the first transition special effect.
In some possible embodiments, the first shooting template corresponds to a first style; determining, by the electronic device, the first shooting template matching the third video data includes: the electronic device determines, using a preset artificial intelligence model, that the third video data matches the first style, and then determines that the first shooting template matches the third video data; or the electronic device randomly determines the first shooting template from a plurality of preset shooting templates.
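The two matching strategies above (AI style matching with a random fallback) can be sketched as follows. The template table, style labels, and field names are hypothetical, not taken from the patent:

```python
import random

# Hypothetical shooting templates: each pairs a style label with its
# music and the transition effect fixed for that music.
TEMPLATES = [
    {"name": "travel", "style": "upbeat", "music": "music_a", "transition": "zoom_in"},
    {"name": "family", "style": "warm", "music": "music_b", "transition": "dissolve"},
    {"name": "night", "style": "cinematic", "music": "music_c", "transition": "fade_to_black"},
]

def match_template(video_style=None):
    """Return the template whose style matches the label produced by a
    preset AI model for the video; fall back to a random template."""
    for t in TEMPLATES:
        if t["style"] == video_style:
            return t
    return random.choice(TEMPLATES)
```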
In a third aspect, an electronic device provided in an embodiment of the present application includes one or more processors and a memory; the memory is coupled to the processors and stores computer program code comprising computer instructions that, when executed by the one or more processors, cause the one or more processors to: display a first interface, where the first interface includes a first identifier indicating a first shooting template; the first shooting template includes first music, and the first music corresponds to a first transition special effect; receive a selection operation by a user on the first identifier; display a second interface in response to the selection operation, where the second interface is a recording preview interface that includes a first control for indicating the start of shooting; receive a first operation by the user on the first control; start recording first video data in response to the first operation; and display a third interface after the first video data is recorded, where the third interface is used to display second video data; the second video data includes: video frames of the first video data, the first music, the first transition special effect, and a second transition special effect; the first transition special effect is superimposed on a video frame corresponding to a first time point in the first video data; the second transition special effect is superimposed on a video frame corresponding to a second time point in the first video data, the second time point being after the first time point; and the second transition special effect is one of a plurality of preset transition special effects.
In some possible embodiments, before displaying the third interface, the one or more processors are further configured to: determine the second transition special effect from the plurality of preset transition special effects based on matching weights; each preset transition special effect corresponds to a matching weight, which is a quantified parameter of the degree of fit between the first music and that preset transition special effect; and the plurality of preset transition special effects include the first transition special effect.
In some possible embodiments, the second video data further includes a third transition special effect corresponding to the first music, the third transition special effect being superimposed on a video frame corresponding to a third time point in the first video data, the third time point being located between the first time point and the second time point.
In some possible embodiments, the second video data further includes a fourth transition special effect superimposed on a video frame in the first video data corresponding to a fourth time point, the fourth time point being subsequent to the second time point; the fourth transition special effect is one of the multiple preset transition special effects, or one of the first transition special effect, the second transition special effect and the third transition special effect.
In some possible embodiments, the first time point, the second time point, the third time point, and the fourth time point are all video frame division points in the first video data.
In some possible embodiments, the first music further corresponds to a maximum number of transition types; before displaying the third interface, the one or more processors are further configured to: determine that the number of distinct types among the first transition special effect, the second transition special effect, and the third transition special effect is equal to the maximum number of transition types; and determine a fourth transition special effect from among the first transition special effect, the second transition special effect, and the third transition special effect based on the matching weights; each preset transition special effect corresponds to a matching weight, which is a quantified parameter of the degree of fit between the first music and that preset transition special effect; the plurality of preset transition special effects include the first transition special effect and the third transition special effect.
In some possible embodiments, the first music further corresponds to a maximum number of transition types; before displaying the third interface, the one or more processors are further configured to: determine that the number of distinct types among the first transition special effect, the second transition special effect, and the third transition special effect is less than the maximum number of transition types; and determine a fourth transition special effect from the plurality of preset transition special effects based on the matching weights; each preset transition special effect corresponds to a matching weight, which is a quantified parameter of the degree of fit between the first music and that preset transition special effect; the plurality of preset transition special effects include the first transition special effect and the third transition special effect.
In some possible embodiments, at least two of the first transition special effect, the second transition special effect, and the third transition special effect belong to the same category.
In some possible embodiments, when the first video data is shot in landscape orientation, the plurality of preset transition special effects include: rotation transition, fold transition, blur transition, dissolve transition, fade-to-black transition, fade-to-white transition, zoom-in transition, zoom-out transition, slide-up transition, and slide-down transition; when the first video data is shot in portrait orientation, the plurality of preset transition special effects include: slide-left transition, slide-right transition, rotation transition, overlay transition, blur transition, dissolve transition, fade-to-black transition, fade-to-white transition, zoom-in transition, and zoom-out transition.
In a fourth aspect, an electronic device provided in an embodiment of the present application includes one or more processors and a memory; the memory is coupled to the processors and stores computer program code comprising computer instructions that, when executed by the one or more processors, cause the one or more processors to: display a fourth interface, where the fourth interface includes a thumbnail of third video data; receive a selection operation by a user on the thumbnail of the third video data; display a fifth interface in response to the selection operation, where the fifth interface is a video editing interface of the third video data and includes a "one-tap blockbuster" control; receive a second operation by the user on the "one-tap blockbuster" control; determine, in response to the second operation, a first shooting template matching the third video data, where the first shooting template includes first music and the first music corresponds to a first transition special effect; and after determining the first shooting template, display a sixth interface used to display fourth video data; the fourth video data includes: video frames of the third video data, the first music, the first transition special effect, and a second transition special effect; the first transition special effect is superimposed on a video frame corresponding to a first time point in the third video data; the second transition special effect is superimposed on a video frame corresponding to a second time point in the third video data, the second time point being after the first time point; and the second transition special effect is one of a plurality of preset transition special effects.
In some possible embodiments, before displaying the sixth interface, the one or more processors are further configured to: determine the second transition special effect from the plurality of preset transition special effects based on matching weights; each preset transition special effect corresponds to a matching weight, which is a quantified parameter of the degree of fit between the first music and that preset transition special effect; and the plurality of preset transition special effects include the first transition special effect.
In some possible embodiments, the first shooting template corresponds to a first style; the one or more processors are further configured to: determine, using a preset artificial intelligence model, that the third video data matches the first style, and determine that the first shooting template matches the third video data; or randomly determine the first shooting template from a plurality of preset shooting templates.
In a fifth aspect, an embodiment of the present application provides a computer storage medium including computer instructions that, when run on an electronic device, cause the electronic device to perform the method described in the first aspect and its possible embodiments, or the method described in the second aspect and its possible embodiments.
In a sixth aspect, the present application provides a computer program product that, when run on the above electronic device, causes the electronic device to perform the method described in the first aspect and its possible embodiments, or the method described in the second aspect and its possible embodiments.
It can be understood that the electronic device, the computer-readable storage medium, and the computer program product provided in the foregoing aspects are all used to perform the corresponding methods provided above. Therefore, for the beneficial effects they can achieve, reference may be made to the beneficial effects of the corresponding methods, which are not repeated here.
Drawings
Fig. 1 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
Fig. 2A is a flowchart of the steps of a transition special effect determination method according to an embodiment of the present application;
Fig. 2B is a first flowchart of the sub-steps of S101 according to an embodiment of the present application;
Fig. 3 is a first example diagram of a display interface according to an embodiment of the present application;
Fig. 4 is a second example diagram of a display interface according to an embodiment of the present application;
Fig. 5A is a second flowchart of the sub-steps of S101 according to an embodiment of the present application;
Fig. 5B is a third example diagram of a display interface according to an embodiment of the present application;
Fig. 6 is a fourth example diagram of a display interface according to an embodiment of the present application;
Fig. 7 is a fifth example diagram of a display interface according to an embodiment of the present application;
Fig. 8 is a sixth example diagram of a display interface according to an embodiment of the present application;
Fig. 9 is a seventh example diagram of a display interface according to an embodiment of the present application;
Fig. 10 is an eighth example diagram of a display interface according to an embodiment of the present application;
Fig. 11 is a first schematic diagram of the principle of adding transition special effects according to an embodiment of the present application;
Fig. 12 is a second schematic diagram of the principle of adding transition special effects according to an embodiment of the present application;
Fig. 13 is a third schematic diagram of the principle of adding transition special effects according to an embodiment of the present application;
Fig. 14 is a schematic composition diagram of a chip system according to an embodiment of the present application.
Detailed Description
In the following, the terms "first" and "second" are used for descriptive purposes only and shall not be understood as indicating or implying relative importance or implicitly indicating the number of technical features. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the embodiments of the present application, "a plurality" means two or more unless otherwise specified.
The embodiments of the present application are described in detail below with reference to the accompanying drawings.
Generally, after shooting a video with an electronic device, a user can edit the shot video by operating the device, for example by configuring video music, adding animation special effects, or adding transition special effects. The secondarily created video is thus more vivid and richer and better matches the user's creative intent. Adding transition special effects makes the transitions in the video content more natural and the presented content richer. In the related art, however, when adding a transition special effect, the user has to determine, while the video plays, the position at which the transition special effect should be inserted, that is, pick the video frame on which to superimpose it, and then superimpose the selected transition special effect. Moreover, after the user manually adds a transition special effect, the selected effect may not match the style of the video music. In that case the user has to reselect the transition special effect, so the rework rate is extremely high. This undoubtedly increases the complexity of adding transition special effects and reduces the human-computer interaction efficiency of video creation.
The embodiment of the present application provides a transition special effect determination method, which can be applied to an electronic device having a plurality of cameras. With the method provided by the embodiment of the present application, the electronic device can automatically determine a transition special effect matching the video data according to the style of the video music, and add the transition special effect to the video data. In this way, no user operation is needed, the matching degree between the transition special effect and the video music is improved, the rework rate of selecting transition special effects is reduced, and the human-computer interaction efficiency of video creation is improved.
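The style-based matching described above can be illustrated with a hypothetical sketch (the style tags, effect names, and library layout are illustrative assumptions, not the patent's actual implementation): transition effects are grouped by music style, and the device cycles through the effects that fit the detected style at each insertion point.

```python
# Hypothetical sketch of style-based transition selection; style tags and
# effect names are illustrative only.
TRANSITION_LIBRARY = {
    "soothing": ["dissolve", "fade_to_white"],
    "upbeat": ["wipe", "spin", "zoom_in"],
}

def pick_transitions(music_style, cut_points):
    """Return (timestamp, effect) pairs for each insertion point,
    cycling through the effects that fit the music style."""
    effects = TRANSITION_LIBRARY.get(music_style, ["dissolve"])  # fallback effect
    return [(t, effects[i % len(effects)]) for i, t in enumerate(cut_points)]
```

Cycling through a small per-style pool keeps consecutive transitions varied while every choice still matches the music style, which is the property the method above aims for.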
For example, the electronic device in the embodiment of the present application may be a mobile phone, a tablet computer, a smart watch, a desktop computer, a laptop computer, a handheld computer, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a cellular phone, a personal digital assistant (PDA), an augmented reality (AR)/virtual reality (VR) device, and other devices including multiple cameras. The embodiment of the present application does not particularly limit the specific form of the electronic device.
Embodiments of the present application will be described in detail below with reference to the accompanying drawings. Please refer to fig. 1, which is a schematic structural diagram of an electronic device 100 according to an embodiment of the present disclosure. As shown in fig. 1, the electronic device 100 may include: a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, a button 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a subscriber identity module (SIM) card interface 195, and the like.
The sensor module 180 may include a pressure sensor, a gyroscope sensor, an air pressure sensor, a magnetic sensor, an acceleration sensor, a distance sensor, a proximity light sensor, a fingerprint sensor, a temperature sensor, a touch sensor, an ambient light sensor, a bone conduction sensor, and the like.
It is to be understood that the illustrated structure of the present embodiment does not constitute a specific limitation to the electronic apparatus 100. In other embodiments, electronic device 100 may include more or fewer components than shown, or combine certain components, or split certain components, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processor 110 may include one or more processing units, such as: the processor 110 may include an Application Processor (AP), a modem processor, a Graphics Processor (GPU), an Image Signal Processor (ISP), a controller, a memory, a video codec, a Digital Signal Processor (DSP), a baseband processor, and/or a neural-Network Processing Unit (NPU), among others. The different processing units may be separate devices or may be integrated into one or more processors.
The controller may be a neural center and a command center of the electronic device 100. The controller can generate an operation control signal according to the instruction operation code and the time sequence signal to finish the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that have just been used or recycled by the processor 110. If the processor 110 needs to reuse the instruction or data, it can be called directly from memory. Avoiding repeated accesses reduces the latency of the processor 110, thereby increasing the efficiency of the system.
In some embodiments, processor 110 may include one or more interfaces. The interface may include an integrated circuit (I2C) interface, an integrated circuit built-in audio (I2S) interface, a Pulse Code Modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a Mobile Industry Processor Interface (MIPI), a general-purpose input/output (GPIO) interface, a Subscriber Identity Module (SIM) interface, and/or a Universal Serial Bus (USB) interface, etc.
It should be understood that the interface connection relationship between the modules illustrated in the present embodiment is only an exemplary illustration, and does not limit the structure of the electronic device 100. In other embodiments, the electronic device 100 may also adopt different interface connection manners or a combination of multiple interface connection manners in the above embodiments.
The electronic device 100 implements display functions via the GPU, the display screen 194, and the application processor. The GPU is a microprocessor for image processing, and is connected to the display screen 194 and an application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
The display screen 194 is used to display images, video, and the like. The display screen 194 includes a display panel. The display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a MiniLED, a MicroLED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like.
The electronic device 100 may implement a photographing function through the ISP, the camera 193, the video codec, the GPU, the display screen 194, and the application processor, etc.
The ISP is used to process the data fed back by the camera 193. For example, when a user takes a picture, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electrical signal, and the camera photosensitive element transmits the electrical signal to the ISP for processing and conversion into an image visible to the naked eye. The ISP can also perform algorithm optimization on the noise, brightness, and skin color of the image. The ISP can also optimize parameters such as the exposure and color temperature of a shooting scene. In some embodiments, the ISP may be provided in the camera 193.
The camera 193 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image to the photosensitive element. The photosensitive element may be a Charge Coupled Device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The light sensing element converts the optical signal into an electrical signal, which is then passed to the ISP where it is converted into a digital image signal. And the ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into image signal in standard RGB, YUV and other formats. In some embodiments, electronic device 100 may include N cameras 193, N being a positive integer greater than 1.
For example, the N cameras 193 may include: one or more front cameras and one or more rear cameras. For example, the electronic device 100 is a mobile phone. The mobile phone comprises at least one front camera. The front camera is disposed on the front side of the mobile phone, such as the front camera 301 shown in fig. 3 (a). In addition, the mobile phone comprises at least one rear camera. The rear camera is arranged on the back side of the mobile phone. Thus, the front camera and the rear camera face different directions.
In some embodiments, the electronic device may enable at least one of the N cameras 193 to take a photograph and generate a corresponding photograph or video. For example, one front camera of the electronic apparatus 100 is used alone for shooting. For another example, a rear camera of the electronic apparatus 100 is used alone for shooting. For another example, two front cameras are simultaneously started to shoot. For another example, two rear cameras are simultaneously started to shoot. For another example, a front camera and a rear camera are simultaneously started for shooting, and the like.
It is understood that enabling a single camera 193 for shooting may be referred to as enabling a single-shot mode, such as a front-shot mode or a rear-shot mode. Enabling multiple cameras 193 simultaneously for shooting may be collectively referred to as a multi-shot mode, such as a front-front mode, a front-back mode, a back-back mode, and a picture-in-picture mode.
For example, a front camera and a rear camera are enabled simultaneously. After one front camera and one rear camera are enabled for shooting, the electronic device can render and merge the image frames captured by the front camera and the rear camera. The rendering and merging may be splicing image frames captured by different cameras. For example, after shooting in portrait orientation in the front-back mode, the image frames captured by the different cameras can be spliced top and bottom. For another example, after shooting in landscape orientation in the back-back mode, the image frames captured by the different cameras can be spliced left and right. For another example, after shooting in the picture-in-picture mode, the image frames captured by one camera can be embedded into the image frames captured by another camera. Then, a photo is generated by encoding.
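The splicing described above can be illustrated with a small sketch (a minimal illustration assuming equal-size RGB frames held as NumPy arrays; the mode names and the crude stride-based downscale for picture-in-picture are assumptions, not the device's actual rendering pipeline):

```python
import numpy as np

def merge_frames(frame_a, frame_b, mode):
    """Illustrative merge of two equal-size RGB frames of shape (H, W, 3)."""
    if mode == "top_bottom":          # portrait front-back: stack vertically
        return np.vstack([frame_a, frame_b])
    if mode == "left_right":          # landscape back-back: stack horizontally
        return np.hstack([frame_a, frame_b])
    if mode == "picture_in_picture":  # embed a shrunken frame_b into frame_a
        out = frame_a.copy()
        small = frame_b[::4, ::4]     # crude 1/4 downscale by striding
        out[:small.shape[0], :small.shape[1]] = small
        return out
    raise ValueError(mode)
```

In practice the downscale and composition would be done by the GPU during the render-and-merge step, but the geometry of the three splice modes is the same as sketched here.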
In addition, after one front camera and one rear camera are enabled for video recording, the front camera captures one video stream and buffers it, and the rear camera captures another video stream and buffers it. Then, the electronic device 100 renders and merges the two buffered video streams frame by frame, that is, renders and merges the video frames whose capture time points are the same or matched in the two buffered video streams. Encoding is then performed to generate a video file.
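The frame-by-frame matching of capture time points might look like the following sketch (timestamps in seconds; the tolerance value and the stream layout are assumptions made for illustration):

```python
def pair_frames(front_stream, rear_stream, tolerance=0.02):
    """Pair frames from two buffered streams whose capture timestamps match
    within `tolerance` seconds. Each stream is a list of (timestamp, frame)
    tuples sorted by timestamp."""
    pairs, j = [], 0
    for t_front, f_front in front_stream:
        # skip rear frames that are too early to match this front frame
        while j < len(rear_stream) and rear_stream[j][0] < t_front - tolerance:
            j += 1
        if j < len(rear_stream) and abs(rear_stream[j][0] - t_front) <= tolerance:
            pairs.append((f_front, rear_stream[j][1]))
    return pairs
```

Because both streams are sorted by capture time, a single forward pass suffices; each matched pair is then what gets rendered, merged, and encoded into one output frame.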
The digital signal processor is used for processing digital signals, and can process digital image signals and other digital signals. For example, when the electronic device 100 selects a frequency bin, the digital signal processor is used to perform fourier transform or the like on the frequency bin energy.
Video codecs are used to compress or decompress digital video. The electronic device 100 may support one or more video codecs. In this way, the electronic device 100 may play or record video in a variety of encoding formats, such as: moving Picture Experts Group (MPEG) 1, MPEG2, MPEG3, MPEG4, and the like.
The NPU is a neural-network (NN) computing processor that processes input information quickly by using a biological neural network structure, for example, by using a transfer mode between neurons of a human brain, and can also learn by itself continuously. Applications such as intelligent recognition of the electronic device 100 can be realized through the NPU, for example: image recognition, face recognition, speech recognition, text understanding, and the like.
The audio module 170 is used to convert digital audio information into analog audio signals for output, and also used to convert analog audio inputs into digital audio signals. The audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be disposed in the processor 110, or some functional modules of the audio module 170 may be disposed in the processor 110. The speaker 170A, also called a "horn", is used to convert the audio electrical signal into an acoustic signal. In this way, the electronic device 100 may play audio data, such as video music, etc.
The pressure sensor is used for sensing a pressure signal and converting the pressure signal into an electric signal. In some embodiments, the pressure sensor may be disposed on the display screen 194. The gyro sensor may be used to determine the motion pose of the electronic device 100. The magnitude and direction of gravity can be detected when the electronic device 100 is stationary. The method can also be used for recognizing the posture of the electronic equipment 100 and applied to horizontal and vertical screen switching and other applications. Touch sensors, also known as "touch panels". The touch sensor may be disposed on the display screen 194, and the touch sensor and the display screen 194 form a touch screen, which is also called a "touch screen". The touch sensor is used to detect a touch operation applied thereto or nearby. The touch sensor can communicate the detected touch operation to the application processor to determine the touch event type.
The methods in the following embodiments may be implemented in the electronic device 100 having the above-described hardware structure. In the following embodiments, the method of the embodiments of the present application is described by taking the electronic device 100 as a mobile phone as an example.
The embodiment of the application provides a method for determining a transition special effect, which can be suitable for a process that a user creates a video work by using a mobile phone. The mobile phone can comprise a plurality of cameras.
In some embodiments, the process of creating a video work by the mobile phone includes: the method comprises the following steps of starting a stage of shooting a single-mirror video by a single camera and a stage of editing the single-mirror video (such as adding a transition special effect in the single-mirror video). The single-mirror video may be a video obtained according to a video stream collected by a single camera.
In another embodiment, the process of creating the video by the mobile phone includes: the method comprises the following steps of starting a stage of shooting multi-mirror videos by a plurality of cameras and a stage of editing the multi-mirror videos (such as adding transition special effects in the multi-mirror videos). The multi-mirror video may be a video obtained after the video streams collected by the multiple cameras are rendered and combined.
Or, in another embodiment, the process of creating a video work by the mobile phone includes: a stage of selecting photographed video data (e.g., single-mirror video or multi-mirror video) and a stage of editing the selected video.
It can be understood that the implementation principle of the transition special effect determination method is the same whether the transition special effect determination method is applied to single-mirror video or multi-mirror video. Illustratively, as shown in fig. 2A, the above method may include the steps of:
S101, video data 1 is acquired.
In some embodiments, the above-described video data 1 may be a video photographed based on a photographing template. In other embodiments, the video data 1 may be a video that is not photographed based on the photographing template.
When creating a video work with music, a filter, and special effects, a shooting template can be used to reduce the complexity of creation. Illustratively, the shooting template includes a filter, a sticker, a plurality of optional special effects (e.g., an atmosphere special effect, a transition special effect, a sticker, etc.), and corresponding video music. In this way, the user only needs to operate the mobile phone to shoot the video picture, that is, to shoot the video data 1, and the mobile phone can automatically process the video data 1 according to the shooting template, for example, automatically add a transition special effect and the like.
Taking the example that the video data 1 is a shot video based on a shooting template, as shown in fig. 2B, the above S101 may include:
S101-1, the mobile phone displays an interface 1. The interface 1 is a viewfinder interface for recording video.
The interface 1 may be an application interface provided by a camera APP in a mobile phone. In other embodiments, the interface 1 may also be an application interface provided by other APPs (e.g., short video APPs) in the mobile phone. During the display interface 1 of the mobile phone, the user can instruct the mobile phone to start video recording through operation.
Illustratively, as shown in fig. 3 (a), the interface 302 displayed by the mobile phone is a view interface provided by the camera application, and is also a view interface for implementing the dual-lens video recording function. In addition, the interface 302 includes controls corresponding to a plurality of functional modes of the camera application, such as a photographing control, a video recording control, a dual-lens video recording control, and the like. During display of the interface 302, the dual-lens video recording control is in a selected state. The user can switch to view interfaces implementing different functions by operating the controls corresponding to the functional modes. For example, when the mobile phone detects that the user operates the video recording control, for example, when the user clicks the video recording control, the mobile phone may switch to display a view interface for implementing the single-lens video recording function. The switched view interface also includes controls corresponding to the plurality of functional modes, and at this point the video recording control is in a selected state.
In some embodiments, a viewfinder 303 and a viewfinder 304 are included in the interface 302 when the dual-lens video recording control is selected. The arrangement of the viewfinder 303 and the viewfinder 304 is related to the posture of the mobile phone. For example, in a scene in which the gyroscope sensor of the mobile phone recognizes that the mobile phone is in the portrait state, the viewfinder 303 and the viewfinder 304 are arranged top and bottom. In a scene in which the gyroscope sensor of the mobile phone recognizes that the mobile phone is in the landscape state, the viewfinder 303 and the viewfinder 304 are arranged left and right.
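The orientation decision driving this layout could be sketched as follows (a minimal illustration assuming the gravity components along the screen's short and long axes are already available from the sensor stack; the function names are hypothetical):

```python
def classify_orientation(gx, gy):
    """Classify portrait vs. landscape from the gravity components along the
    device's x (short) and y (long) screen axes: when gravity lies mostly
    along the long axis, the device is held upright (portrait)."""
    return "portrait" if abs(gy) >= abs(gx) else "landscape"

def arrange_viewfinders(orientation):
    """Map the device orientation to the viewfinder arrangement."""
    return "top_bottom" if orientation == "portrait" else "left_right"
```

A real implementation would fuse gyroscope and accelerometer readings with hysteresis to avoid flickering between layouts near 45 degrees; the mapping from orientation to arrangement is the part the paragraph above describes.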
In addition, the finder frames 303 and 304 correspond to cameras, respectively. For example, the view frame 303 corresponds to the camera 1 (e.g., a rear camera), and thus the view frame 303 can be used to display a video frame captured by the camera 1. The view frame 304 corresponds to the camera 2 (e.g., a front camera), so that the view frame 304 can be used for displaying video frames captured by the camera 2. It is understood that the camera corresponding to each frame (e.g., the frame 303 and the frame 304) can be adjusted according to the operation of the user.
In interface 302, a micro-movie control 305 is also included, the micro-movie control 305 being used to initiate micro-movie functionality. Under the condition of starting the micro-movie function, a user can conveniently create a video work with music, a filter and a special effect through a mobile phone. In addition, in other viewing interfaces for recording videos (e.g., a viewing interface for implementing a single-lens recording function), a micro-movie control with the same function may be included.
And S101-2, the mobile phone responds to the operation of the user in the interface 1 and shoots the video data 1 under the micro-movie function.
In some embodiments, when a user operation, such as a click operation, on the micro-movie control 305 is received during the display of the interface 302 by the mobile phone, a first interface, such as the interface 306 shown in (b) of fig. 3, may be displayed. The interface 306 refers to a guidance interface for guiding the user to select the shooting template. The interface 306 includes a plurality of template windows indicating different shooting templates. Such as window 307, window 308, window 309, and window 310. The above window 307 is used to indicate a shooting template named hello summer, the window 308 is used to indicate a shooting template named sunny, the window 309 is used to indicate a shooting template named HAPPY, and the window 310 is used to indicate a shooting template named xiao mei.
Of course, in addition to different corresponding video music, different shooting templates may also have different filters, stickers, atmosphere special effects, and transition special effects. Obviously, with different video music, filters, stickers, atmosphere special effects, and transition special effects coordinated, the styles of the produced video works differ. That is, the user can produce video works of different styles by selecting different shooting templates.
In some embodiments, the user may select shooting templates of different video styles during display of the interface 306 by the mobile phone. That is, the mobile phone may receive an operation, such as a click operation, performed by the user on a template window in the interface 306, and determine the shooting template selected by the user. For example, upon receiving a click operation of the user on the window 308, the mobile phone may determine that the user has selected the shooting template named sunny. In addition, in other embodiments, a default template may be preset in the mobile phone. For example, the "hello summer" shooting template may be preconfigured as the default template. Thus, when the mobile phone switches from the interface 302 to displaying the interface 306, the "hello summer" shooting template is in a selected state. Then, if the mobile phone does not receive a selection operation for another shooting template, the mobile phone determines that the user has selected the "hello summer" shooting template. If the mobile phone receives a selection operation for another shooting template (e.g., the "xiao mei" shooting template), the mobile phone determines that the user has selected that shooting template.
In some embodiments, each shooting template corresponds to a sample. The sample is a video created in advance based on the shooting template. When a shooting template is selected by the user, the corresponding sample may be played in the preview window 311. In this way, the user can preview the style effect of the shooting template, which facilitates selection. For example, when the "hello summer" shooting template is selected, the sample of "hello summer" is played in the preview window 311.
In addition, during the cell phone display interface 306, the user may change the selected capture template by selecting a different template window.
Of course, the cell phone may also receive user manipulation of the control 312 during the cell phone display interface 306. After receiving the operation of the control 312, the mobile phone may determine the currently selected shooting template as the actually selected shooting template. For convenience of description, the shooting template actually selected may also be referred to as the shooting template 1 or as a first shooting template, and a template window corresponding to the first shooting template may also be referred to as a first identifier. In addition, the video music corresponding to the first shooting template is also called first music.
After determining the photographing template 1, the mobile phone may also switch to displaying a second interface, such as the interface 313 shown in fig. 3 (c). The interface 313 is a template viewing interface corresponding to the shooting template 1, and may also be referred to as a recording preview interface.
In the interface 313, a viewfinder frame is also included. When the mobile phone is switched from the interface 306 to the interface 313, the number of the view frames corresponding to the interface 313 is related to the shooting template 1. At the same time, the video stream displayed by the frame is also related to the shooting template 1.
In some embodiments, the capture template may also correspond to a default shot mode. The lens modes may include a single front mode, a single rear mode, a top-front-bottom-rear mode, a top-rear-bottom-front mode, a top-rear (near) bottom-rear (far) mode, a top-rear (far) bottom-rear (near) mode, a picture-in-picture mode, and the like.
Illustratively, when the single-front mode is enabled, the interface 313 includes a view box for previewing the video stream captured by the front camera.
Further illustratively, when the single rear mode is enabled, the interface 313 includes a view box for previewing a video stream captured by the rear camera.
When there are a plurality of front cameras and rear cameras, there is one main camera in the plurality of front cameras, which is referred to as a front camera a, and there is also one main camera in the plurality of rear cameras, which is referred to as a rear camera a. In the single-front mode, the viewfinder is used for displaying the video stream collected by the front camera a. In the single rear mode, the viewfinder is used to display the video stream collected by the rear camera a.
Also illustratively, when the top-front-bottom-rear mode is enabled, the interface 313 includes two viewfinders, e.g., viewfinder 1 and viewfinder 2, arranged top and bottom in the interface 313. The upper viewfinder 1 is used for displaying the video stream captured by the front camera, and the lower viewfinder 2 is used for displaying the video stream captured by the rear camera. For example, viewfinder 1 displays the video stream captured by the front camera a, and viewfinder 2 displays the video stream captured by the rear camera a. For another example, viewfinder 1 displays the video stream captured by another front camera, and viewfinder 2 displays the video stream captured by another rear camera. The top-rear-bottom-front mode is similar, except that viewfinder 1 is used for displaying the video stream captured by the rear camera, and viewfinder 2 is used for displaying the video stream captured by the front camera.
Also illustratively, where the mobile phone includes multiple rear cameras, when the top-rear (near) bottom-rear (far) mode is enabled, the interface 313 includes two viewfinders, e.g., viewfinder 1 and viewfinder 2, arranged top and bottom in the interface 313. Viewfinder 1 and viewfinder 2 are respectively used for displaying the video streams captured by two rear cameras.
It can be understood that the types of the plurality of rear cameras installed in the mobile phone may be different, for example, the rear camera of the mobile phone may be one of or a combination of a main camera, a telephoto camera, a wide-angle camera, an ultra-wide-angle camera, a macro camera, and the like. In some examples, the focal lengths corresponding to different rear cameras may be different, so that the distances that different rear cameras can shoot are different.
In the above example, the upper viewfinder 1 may be used to display the video stream of the rear camera having the relatively long focal length, and the lower viewfinder 2 may be used to display the video stream of the rear camera having the relatively short focal length. For example, when viewfinder 1 and viewfinder 2 are respectively used for displaying the video streams captured by the rear camera b (a telephoto camera) and the rear camera c (a wide-angle camera), since the focal length of the telephoto camera is longer than that of the wide-angle camera, viewfinder 1 displays the video stream captured by the rear camera b, and viewfinder 2 displays the video stream captured by the rear camera c.
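The focal-length-based assignment described here could be expressed as a small sketch (the camera names and focal-length values are illustrative assumptions):

```python
def assign_viewfinders(cameras):
    """Given a list of (name, focal_length_mm) pairs for two rear cameras,
    return (upper, lower): the longer focal length goes to the upper
    viewfinder, the shorter to the lower one."""
    ordered = sorted(cameras, key=lambda c: c[1], reverse=True)
    return ordered[0][0], ordered[1][0]
```

Sorting by focal length rather than hard-coding camera roles keeps the rule working for any pair of rear cameras (telephoto plus wide-angle, main plus macro, and so on).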
Of course, the viewfinders 1 and 2 can also respectively display other types of rear-facing camera combinations, for example, main and tele cameras, main and wide cameras, main and super wide cameras, main and macro cameras, tele and super wide cameras, tele and macro cameras, or wide and macro cameras, etc. respectively.
Further illustratively, where the mobile phone includes multiple rear cameras, when the top-rear (far) bottom-rear (near) mode is enabled, the interface 313 includes two viewfinders, e.g., viewfinder 1 and viewfinder 2, arranged top and bottom in the interface 313. The upper viewfinder 1 may be used to display the video stream of the rear camera having the relatively short focal length, and the lower viewfinder 2 may be used to display the video stream of the rear camera having the relatively long focal length.
Still further illustratively, when the picture-in-picture mode is enabled, the interface 313 includes two viewfinders, e.g., viewfinder 1 and viewfinder 2. Viewfinder 2 is smaller than viewfinder 1 and may be embedded in viewfinder 1. Viewfinder 1 may be used for displaying the video stream captured by the rear camera and viewfinder 2 for the video stream captured by the front camera, or the reverse. Where two rear cameras are used, viewfinder 1 may display the rear camera with the relatively long focal length, and viewfinder 2 the rear camera with the relatively short focal length. That is, the cameras corresponding to viewfinder 1 and viewfinder 2 in the picture-in-picture mode may be determined by the user. In some examples, by default, when the picture-in-picture mode is enabled, viewfinder 1 is used to display the video stream captured by the rear camera and viewfinder 2 is used to display the video stream captured by the front camera.
In the foregoing example, the lens mode described is a mode to which the mobile phone is applied when in the portrait state. When the mobile phone is in a landscape state, the lens modes may further include a left-front-right-rear mode, a left-rear-right-front mode, a left-rear (near) right-rear (far) mode, a left-rear (far) right-rear (near) mode, and the like.
Here, the left-front-right-rear mode, the left-rear-right-front mode, the left-rear (near) right-rear (far) mode, and the left-rear (far) right-rear (near) mode are respectively similar to the top-front-bottom-rear mode, the top-rear-bottom-front mode, the top-rear (near) bottom-rear (far) mode, and the top-rear (far) bottom-rear (near) mode in the above examples, except that the two viewfinders, viewfinder 3 and viewfinder 4, are arranged left and right in the interface 313. Viewfinder 3 corresponds to viewfinder 1, and viewfinder 4 corresponds to viewfinder 2. For example, the left-front-right-rear mode is similar to the top-front-bottom-rear mode: viewfinder 3 is used to display the video stream of the front camera, and viewfinder 4 is used to display the video stream of the rear camera.
As an example, when the lens mode corresponding to the "hello summer" shooting template is the top-rear-bottom-front mode, as shown in (c) of fig. 3, after it is determined that the "hello summer" shooting template is the shooting template 1, the interface 313 displayed by the mobile phone includes the viewfinder 314 and the viewfinder 315. The viewfinder 314 is arranged above the viewfinder 315. In addition, the viewfinder 314 is used to display the video stream captured by the rear camera, and the viewfinder 315 is used to display the video stream captured by the front camera.
In some embodiments, interface 313 also includes a lens-switching control, such as control 316. The lens-switching control assists the user in selecting another lens mode to replace the default lens mode of shooting template 1.
In addition, while interface 313 is displayed, the mobile phone may also play the video music corresponding to shooting template 1. For example, in a scene where the video music corresponding to the "hello summer" shooting template is the song "hello summer", and the mobile phone determines that the "hello summer" shooting template is shooting template 1, the mobile phone can play the song "hello summer" while displaying interface 313. As shown in (c) of fig. 3, the duration of the song "hello summer" is 15s; after the song "hello summer" has played for 15s, the mobile phone can play it again in a loop.
In some embodiments, interface 313 also includes a song-switching control, such as control 317. The song-switching control assists the user in selecting alternative music to replace the video music corresponding to shooting template 1. After the mobile phone responds to the user's instruction and replaces the video music corresponding to shooting template 1, the mobile phone plays the replacement music. In some examples, the alternative music corresponding to different shooting templates may differ: the replacement music and the video music corresponding to a shooting template may be songs with similar musical styles, the same beat, or similar melodies. In other examples, the alternative music corresponding to different shooting templates may be the same; for example, it may be all the music stored on the mobile phone, or all the music the mobile phone can search for.
While interface 313 is displayed, the mobile phone has not yet actually started shooting video data 1. However, through interface 313, the user can preview the framing effect of the current shot along with the video music.
In some embodiments, interface 313 also includes a shooting control, also referred to as a first control, such as control 318. After the mobile phone receives an operation on control 318 by the user, such as a click operation, the mobile phone displays a recording interface, that is, interface 319 shown in (d) of fig. 3, and starts the formal shooting of video data 1. In addition, the duration of the captured video data 1 does not exceed the set duration of shooting template 1. For example, the set duration may be equal to the duration of the video music corresponding to shooting template 1, or may be slightly shorter than the duration of the video music.
Illustratively, the duration of "hello summer" is 15s, and the set duration of the "hello summer" shooting template is also 15s. When shooting video data 1, if the shooting duration reaches 15s, the mobile phone can automatically stop shooting to obtain video data 1, also called first video data.
Also illustratively, during recording, the mobile phone displays a recording interface, such as interface 319 shown in (d) of fig. 3. Interface 319 includes a control for instructing shooting to stop, such as control 320. If the duration of shooting video data 1 has not reached 15s, the mobile phone can receive an operation on control 320 by the user, such as a long-press operation. After receiving the operation on control 320, the mobile phone stops shooting, and video data 1 is obtained.
After video data 1 is obtained, the process may enter S102, so that the mobile phone may automatically perform secondary editing on video data 1 to obtain the finally output video work. For example, as shown in fig. 4, after the shooting duration of video data 1 reaches the set duration, the mobile phone may process video data 1 according to the shooting template to obtain a video work, that is, second video data, and display a preview interface for playing the video work, that is, a third interface, such as interface 401.
The following describes the case where video data 1 is a video that was not shot based on a shooting template. As an example, as shown in fig. 5A, S101 may further include:
S101-3, while the main interface of the mobile phone is displayed, display the application interface 1 provided by the gallery APP in response to a user operation on the icon of the gallery APP.
In some embodiments, the main interface may be the user interface displayed after the mobile phone is unlocked, and the main interface may provide entries for enabling the APPs installed on the mobile phone, such as APP icons. Illustratively, as shown in (a) of fig. 5B, the main interface of the mobile phone (i.e., interface 501) includes icons of a plurality of APPs. Each icon corresponds to an APP. While the main interface is displayed, the mobile phone can start the corresponding application program according to the icon clicked by the user.
For example, the mobile phone may receive a click operation on an icon 502 of the gallery APP, and in response to the click operation, display an application interface 1, also referred to as a fourth interface, provided by the gallery APP, such as an interface 503 shown in (B) in fig. 5B.
In some embodiments, application interface 1 displays thumbnails of various picture resources and video resources. These resources may be captured and stored by the mobile phone, for example video data shot and stored without a shooting template selected (e.g., a single-lens video or a multi-lens video), downloaded from the internet, or synchronized from the cloud. For example, interface 503 shown in (B) of fig. 5B is the application interface 1 provided by the gallery APP, and interface 503 includes a thumbnail of video 504 and thumbnails of multiple pictures.
It can be seen that, during the display of the application interface 1 by the mobile phone, the user can browse and query the stored video data, thereby facilitating the user to select the video as the video data 1.
S101-4, in response to a user operation in application interface 1, determine video data 1.
In some embodiments, the mobile phone may receive a user selection operation on a thumbnail of any video in the application interface 1, and in response to the user selection operation on the thumbnail of any video, the mobile phone may display a video editing interface, also referred to as a fifth interface. For example, in response to a click operation of the user on the video 504 in the interface 503, the mobile phone may display an interface 505 shown in (c) in fig. 5B, where the interface 505 is an editing interface of the video 504. In this scenario, the selected thumbnail may be referred to as a thumbnail of the third video data.
The video editing interface includes a one-touch blockbuster control 506. The one-touch blockbuster control 506 is used to trigger the mobile phone to process video 504, for example by adding transition special effects, configuring video music, and the like.
In some embodiments, when the mobile phone receives a user operation, such as a click operation, on the one-touch blockbuster control 506, the mobile phone may determine that video 504 is video data 1.
In addition, in response to the user's operation of the one-touch blockbuster control 506, the mobile phone may randomly determine a shooting template, which may also be referred to as shooting template 1, from the existing shooting templates and process video 504 with it. Alternatively, in response to the user's operation of the one-touch blockbuster control 506, the mobile phone recognizes the picture style of video 504 by using a preset artificial intelligence model and then acquires, from the existing shooting templates, a shooting template whose style closely matches the picture style of video 504. That is, shooting template 1 may correspond to a first style, and when video 504 belongs to the first style, shooting template 1 may be determined. Thus, the determined shooting template 1 also matches video 504 and can be used to process it. The processed video work, which may also be referred to as fourth video data, is similar to the second video data in the foregoing embodiments. In addition, after video 504 is processed, a sixth interface may be displayed, and a play window in the sixth interface is used to display the fourth video data.
S102, after video data 1 is obtained, the mobile phone divides video data 1 to obtain a plurality of video segments.
In some embodiments, the mobile phone may divide the video data 1 by using an artificial intelligence model to obtain a plurality of video segments. Wherein the content similarity between video frames of the same video segment is not lower than a set value (e.g., 80%).
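The artificial-intelligence segmentation above can be illustrated with a toy sketch in which video frames are represented as small feature vectors and "content similarity" is a simple normalized distance. The function names, the feature representation, and the similarity measure are all assumptions for illustration, not the patent's actual model:

```python
# Hypothetical sketch: split a frame sequence into segments whenever the
# similarity between consecutive frames drops below a threshold (e.g. 80%).
# Frames are modeled as plain feature lists; a real system would use an
# AI model operating on image content.

def similarity(a, b):
    """Toy similarity in [0, 1]: 1 minus a normalized L1 distance."""
    diff = sum(abs(x - y) for x, y in zip(a, b))
    total = sum(a) + sum(b)
    return 1.0 - diff / total if total else 1.0

def split_by_similarity(frames, threshold=0.8):
    """Group consecutive frames whose pairwise similarity stays >= threshold."""
    segments, current = [], [frames[0]]
    for prev, frame in zip(frames, frames[1:]):
        if similarity(prev, frame) >= threshold:
            current.append(frame)
        else:
            segments.append(current)   # similarity dropped: start a new segment
            current = [frame]
    segments.append(current)
    return segments
```

For example, a sequence of two near-identical frames followed by two very different frames splits into two segments.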
In other embodiments, the mobile phone may determine the segmentation point in the video data 1 according to the rhythm of the video music in the shooting template 1, and divide the video data 1 into a plurality of video segments based on the segmentation point.
The above-mentioned division point can be understood as a time point, and the time point belongs to the relative time axis corresponding to the video data 1. It will be appreciated that time 0 of this relative time axis corresponds to the acquisition time of the first frame of video data 1. The time interval between the video frame and the segmentation point mentioned in the following embodiments may refer to the time interval between the acquisition time of the video frame and the segmentation point.
In addition, after the mobile phone divides the video data 1 according to the segmentation point, the video frame with the acquisition time before the segmentation point and the video frame with the acquisition time after the segmentation point belong to different video clips respectively.
As an implementation, the mobile phone determining the cut points in video data 1 according to the rhythm of the video music may include:
(1) The mobile phone obtains a segmentation step matched with the rhythm of the video music. The segmentation step is used to determine a plurality of selectable segmentation positions (also referred to as pre-selected points) on video data 1, such that the time interval between two adjacent pre-selected points equals the segmentation step.
As an example, the segmentation steps corresponding to different shooting templates may differ. It can be understood that different shooting templates correspond to different video music, and the beats, melodies, and other characteristics of different video music differ. Meanwhile, the segmentation step corresponding to the same shooting template is fixed. Thus, the mobile phone can configure the segmentation step of the video music in advance.
That is, the shooting template may further include a parameter of the segmentation step length, so that the mobile phone may obtain the corresponding segmentation step length through the shooting template.
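As a sketch of how the pre-selected points follow from the segmentation step, they can be generated by stepping along video data 1's relative time axis. The function name and the use of seconds as the unit are assumptions:

```python
# Sketch: generate pre-selected points on video data 1's relative time axis.
# Adjacent pre-selected points are separated by exactly the segmentation step
# taken from the shooting template.

def preselected_points(video_duration_s, step_s):
    """Return candidate cut times (in seconds), spaced by the segmentation step."""
    points = []
    t = step_s
    while t < video_duration_s:   # points strictly inside the video
        points.append(round(t, 3))
        t += step_s
    return points
```

For the "hello summer" template (15s music, 1.5s step), this yields nine candidate points: 1.5s, 3.0s, …, 13.5s.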
As an example, the photographing template may be as shown in table 1 below:
TABLE 1
[Table 1 is reproduced as an image in the original publication; its contents are described in the text below.]
Examples as shown in table 1: the styles of the shooting templates include soothing, joyful, and the like. The shooting templates named "hello summer" and "xiaomei" belong to the soothing style, while "sunny" and "HAPPY" belong to the joyful style. The display sequence is used to indicate the arrangement order, in interface 306, of the template windows corresponding to the shooting templates. Of course, the shooting template whose display order is 1 may be the default template.
According to table 1, the lens mode corresponding to the "hello summer" shooting template is the up-rear down-front mode, the video music is "hello summer", the duration of the video music is 15s, and the segmentation step is 1.5s.
Similarly, it can be seen from table 1 that the lens mode corresponding to the "xiaomei" shooting template is the single-lens mode, the video music is "romantic", the duration of the video music is 20s, and the segmentation step is 1.2s. The related information of the other shooting templates can likewise be determined from table 1 and is not described again here. In addition, the time value (beat) column in table 1 records the beat of the video music. Each piece of video music corresponds to a beat; therefore, the shooting template to which the video music belongs also corresponds to a time value (beat).
(2) After the corresponding segmentation step is obtained, the mobile phone determines the pre-selected points according to the segmentation step. Each pre-selected point so determined is adapted to the tempo of the video music. In this way, beat-synced cutting of the video music can be achieved.
(3) The mobile phone selects the segmentation point from the plurality of pre-selection points.
It is understood that although each pre-selected point qualifies as a cut point, in order to avoid video data 1 being divided into too many pieces, the mobile phone may perform a filtering operation on the plurality of pre-selected points to select the cut points. This ensures that the mobile phone uses the selected cut points to divide video data 1 into a plurality of video segments that satisfy the segmentation duration limit.
Illustratively, the segmentation duration limit may be a slice-length constraint comprising a minimum slice length and a maximum slice length.
Take the example of the mobile phone dividing video data 1 into i video segments (i being a positive integer greater than 1) according to the selected cut points. The i video segments satisfy the segmentation duration limit as follows: the lengths of the first i-1 video segments are between the minimum and maximum slice lengths, while the length of the i-th video segment is not less than the minimum slice length. The i video segments are arranged in order of acquisition time.
In some examples, the handset may also pre-configure the split duration limit. That is, each shooting template may further include a division duration limit. As shown in table 1 above, the shooting template further includes a division duration limit. For example, the division time length corresponding to the shooting templates of "hello summer" and "xiaomei" is limited to 5s to 10s, and the division time length corresponding to the shooting templates of "sunny" and "HAPPY" is limited to 4s to 8s.
Taking the division duration limited to 5s to 10s as an example, the mobile phone divides the video data 1 into i video segments according to the selected division points. The length of the first i-1 video clips in the i video clips is not less than 5s and not more than 10s, and the length of the ith video clip is not less than 5s. In this case, the mobile phone may determine that all the divided video segments satisfy the segmentation duration limit.
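The filtering described above can be sketched as a greedy pass over the pre-selected points: a point becomes a cut point only if the segment it closes is within the slice-length bounds and the remaining tail can still reach the minimum. The greedy strategy, names, and signature are assumptions; the patent does not prescribe a particular selection procedure:

```python
# Sketch of the filtering step: pick actual cut points from the pre-selected
# points so that every resulting segment except the last is between the
# minimum and maximum slice length, and the last is at least the minimum.

def choose_cut_points(points, duration, min_len, max_len):
    cuts, last = [], 0.0
    for p in points:
        seg = p - last
        if seg < min_len:
            continue                  # segment still too short, move to next point
        if seg > max_len:
            break                     # no admissible point left in this window
        if duration - p >= min_len:   # remainder must stay a valid tail segment
            cuts.append(p)
            last = p
    return cuts
```

For the "hello summer" parameters (15s video, pre-selected points every 1.5s, 5s to 10s limit), this yields a single cut at 6.0s, i.e., two segments of 6s and 9s, both satisfying the limit.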
S103, the mobile phone acquires a transition special effect matched with shooting template 1, used for joining the video segments corresponding to video data 1.
In some embodiments, the mobile phone may determine a transition special effect matching shooting template 1 from a plurality of available transition special effects (which may also be referred to as preset transition special effects), used for joining the video segments corresponding to video data 1. The available transition special effects include: left transition, right transition, rotation transition, superposition transition, blur transition, melting transition, black transition, white transition, enlargement transition, reduction transition, up transition, down transition, and the like.
Illustratively, as shown in fig. 6, in the case where a left transition is added to video data 1, when the mobile phone plays to video frame 1, to which the left-move special effect is added, video frame 1 moves left by distance 1 in interface 401 at speed 1. During the movement of video frame 1, video frame 2 appears from the right side of interface 401, following video frame 1. Understandably, video frame 1 and video frame 2 are two adjacent video frames in video data 1. Then, video frame 1 continues moving left in interface 401 at speed 2 until it disappears from interface 401, where speed 2 is greater than speed 1. After video frame 1 disappears, video frame 2 is displayed in interface 401, and the other video frames after video frame 1 can then be played in sequence.
In addition, the right transition and the left transition are similar in implementation principle; the difference is that the moving directions are opposite, which is not described again here.
Further exemplarily, as shown in fig. 7, in the case where a rotation transition is added to video data 1, when the mobile phone plays to video frame 1, to which the rotation special effect is added, video frame 1 is controlled to rotate; after video frame 1 rotates to the set angle, video frame 1 disappears (e.g., is no longer displayed), and video frame 2 is displayed.
Further exemplarily, as shown in fig. 8, in the case where a superposition transition is added to video data 1, when the mobile phone plays video frame 1, to which the superposition transition is added, video frame 1 and video frame 2 are overlapped, with video frame 1 set on top. Then, the transparency of video frame 1 gradually changes from 0% to 100%, so that video frame 1 disappears from interface 401 and video frame 2 is displayed in interface 401.
Further exemplarily, as shown in fig. 9, in the case where a blur transition is added to video data 1, when the mobile phone plays video frame 1, to which the blur transition is added, the mobile phone performs Gaussian blurring on video frame 1 to obtain a blurred video frame 1, and then overlaps the blurred video frame 1 and video frame 2, with the blurred video frame 1 set on top. Then, the transparency of the blurred video frame 1 gradually changes from 0% to 100%, so that the blurred video frame 1 disappears from interface 401 and video frame 2 is displayed in interface 401.
In addition, the principle of melting transition is similar to that of blurring transition, and the difference between melting transition and blurring transition is as follows: the video frame 1 added with the melting transition is superimposed with a melting special effect, and the video frame 1 added with the fuzzy transition is superimposed with a Gaussian fuzzy special effect. Details of implementation of the melting transition are not described herein.
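The fade-style transitions above (superposition, blur, melting) all reduce to ramping the top frame's transparency from 0% to 100%, which is equivalent to per-pixel linear blending. A minimal sketch, with frames modeled as 2-D lists of gray values (an assumption for illustration; real frames would be image buffers):

```python
# Sketch of a cross-dissolve: frame 1 sits on top of frame 2 and its
# transparency ramps from 0% to 100% over the transition.

def blend(frame1, frame2, transparency):
    """transparency: 0.0 = frame 1 fully opaque, 1.0 = frame 1 fully gone."""
    return [
        [(1.0 - transparency) * p1 + transparency * p2
         for p1, p2 in zip(row1, row2)]
        for row1, row2 in zip(frame1, frame2)
    ]

def superposition_transition(frame1, frame2, steps):
    """Yield the intermediate frames of the dissolve, first to last."""
    for i in range(1, steps + 1):
        yield blend(frame1, frame2, i / steps)
```

The blur and melting variants differ only in that frame 1 is first passed through a Gaussian-blur or melt effect before the same transparency ramp is applied.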
As another example, in the case of adding a black transition to the video data 1, for example, the black transition is added between the video frame 1 and the video frame 2, after the mobile phone plays the video frame 1, the black image frame is superimposed on the video frame 2, and at the same time, the transparency of the black image frame is rapidly changed from 0% to 100% until the video frame 2 is clearly displayed in the interface 401.
As another example, in the case of adding a white transition to the video data 1, for example, the white transition is added between the video frame 1 and the video frame 2, after the mobile phone plays back the video frame 1, the white image frame is superimposed on the video frame 2, and at the same time, the transparency of the white image frame is rapidly changed from 0% to 100% until the video frame 2 is clearly displayed in the interface 401.
The transition special effects introduced in the above examples are suitable for video data 1 shot with the mobile phone in portrait orientation, also called portrait video data 1. Except for the left transition and the right transition, the other transitions are also suitable for video data 1 shot in landscape orientation, which may also be referred to as landscape video data 1. In addition, the up transition and the down transition are suitable only for landscape video data 1.
Further illustratively, as shown in fig. 10, in the case where an up transition is added to landscape video data 1, when the mobile phone plays to video frame 1, to which the up-move special effect is added, video frame 1 moves up by distance 2 in interface 401 at speed 1. During the movement of video frame 1, video frame 2 appears in interface 401, following video frame 1. Video frame 1 then continues to move up in interface 401 at speed 2 until it disappears. After video frame 1 disappears, video frame 2 is displayed in interface 401, and the other video frames after video frame 1 can then be played in sequence. The down transition works on the same principle and is not described again here.
Therefore, the transition special effect has the effect of connecting and transiting different fragments. In some embodiments, the cell phone may divide video data 1 into a plurality of video segments before adding the transition special effect. Then, any two adjacent video clips are connected by using the transition special effect. For example, video clip 1 is the adjacent previous video clip to video clip 2. Therefore, in a scene that the transition special effect is utilized to link the video clip 1 and the video clip 2, the mobile phone determines that the last frame of the video clip 1 is the video frame 1 and determines that the first frame of the video clip 2 is the video frame 2. Thus, the video data 1 is more enjoyable after the transition special effect is added.
It will be appreciated that different transition effects will have different degrees of adaptation to different video music. That is, the adaptation degrees between different transition special effects and the shooting template are also different. Generally, the transition special effect with higher adaptation degree is more suitable for video data obtained based on the shooting template relatively. The transition special effect with lower adaptation degree is relatively unsuitable for video data obtained based on the shooting template.
In some embodiments, the degree of adaptation of each transition special effect to the shooting template may be preconfigured. Of course, the matching weight between each transition special effect and the shooting template can also be configured in advance according to the adaptation degree. Understandably, the transition special effect with higher matching weight is relatively easier to be selected as the transition special effect matched with the shooting template. The transition effect with the lower matching weight is relatively more difficult to be selected as the transition effect matched with the shooting template. In other words, the matching weight may be a quantization ratio parameter of the adaptation degree.
As an example, on the basis of table 1, as shown in table 2 below, matching weights between different transition effects may also be included in the shooting template.
TABLE 2
[Table 2 is reproduced as an image in the original publication; its contents are described in the text below.]
The percentage value corresponding to each transition special effect in the table is the matching weight between that transition special effect and the shooting template.
Take the "hello summer" shooting template recorded in table 2 as an example. The matching weight between the shooting template and the superposition transition is 50%, that is, the superposition transition has a 50% probability of being selected as the matched transition special effect. The matching weights between the shooting template and the blur transition and the melting transition are both 0%, that is, neither can be selected as the matched transition special effect. The matching weight between the shooting template and the up transition is 50%, that is, in the scene where the mobile phone is to process landscape video data 1, the up transition has a 50% probability of being selected as the matched transition special effect; the same applies to the down transition, whose matching weight is also 50%. The matching weight between the shooting template and the left transition is 50%, that is, in the scene where the mobile phone is to process portrait video data 1, the left transition has a 50% probability of being selected as the matched transition special effect; the same applies to the right transition, whose matching weight is also 50%.
The matching weights between the shooting template and the black transition, the white transition, the enlargement transition, and the reduction transition are each 90%, that is, each of them has a 90% probability of being selected as the matched transition special effect. The matching weight between the shooting template and the rotation transition is 30%, that is, the rotation transition has a 30% probability of being selected as the matched transition special effect.
In some embodiments, the mobile phone may randomly obtain a transition special effect according to the matching weight corresponding to each type of transition special effect, and the transition special effect is used as the transition special effect matched with the shooting template 1. And processing each group of adjacent video clips in the video data 1 based on the matched transition special effect, namely, realizing the connection of the adjacent video clips in the video data 1.
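The weighted random pick can be sketched as follows. The weight table mirrors the "hello summer" values described above (as an illustrative assumption), and the orientation-based exclusion of left/right versus up/down follows the embodiments below; the function names and use of Python's `random.choices` are assumptions:

```python
import random

# Hypothetical "hello summer" matching weights, in percent (from Table 2's
# description in the text).
WEIGHTS = {
    "superposition": 50, "blur": 0, "melt": 0,
    "up": 50, "down": 50, "left": 50, "right": 50,
    "black": 90, "white": 90, "enlarge": 90, "reduce": 90, "rotate": 30,
}

def pick_transition(weights, portrait, rng=random):
    """Draw one transition according to its matching weight, excluding
    transitions that do not fit the video orientation."""
    excluded = {"up", "down"} if portrait else {"left", "right"}
    names = [n for n, w in weights.items() if w > 0 and n not in excluded]
    return rng.choices(names, [weights[n] for n in names], k=1)[0]
```

Zero-weight transitions (blur, melt here) can never be drawn, and higher-weight transitions are drawn proportionally more often, matching the behavior the text describes.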
In addition, it should be noted that the up transition and the down transition are only suitable for joining landscape video data 1, while the left transition and the right transition are only suitable for joining portrait video data 1.
In some embodiments, when a transition is randomly acquired for joining landscape video data 1, the left transition and the right transition are not within the random range. When a transition is randomly acquired for joining portrait video data 1, the up transition and the down transition are not within the random range.
In other embodiments, the type of transition special effect that must be used may also be marked in the capture template. There may also be a priority between effects when there are multiple transition effects that must be used.
For example, transition effects labeled [ No.1 ] and [ No.2 ] are included in table 2. The transition special effect marked as [ No.1 ] is called a first transition special effect, and is a special effect for connecting a first video segment and a second video segment. The transition effect labeled [ No.2 ], which may be referred to as a third transition effect, is a effect for joining the second video segment and the third video segment.
In other words, when only one transition splicing is required in the video data 1, that is, in a scene in which the video data 1 is divided into two video segments, the transition special effect labeled [ No.1 ] is preferentially used.
When two transition splicing operations are required in the video data 1, that is, in a scene where the video data 1 is divided into three video segments, a transition special effect labeled [ No.1 ] is used between the first video segment and the second video segment. Between the second video segment and the third video segment, the transition special effect labeled [ No.2 ] is used.
For example, shooting template 1 is "hello summer". As shown in fig. 11, the mobile phone may determine transition special effect 1 (i.e., the black transition) as a transition special effect matching shooting template 1, used to join the first video segment and the second video segment. Then, the mobile phone proceeds to determine transition special effect 2 (i.e., the enlargement transition) as a transition special effect matching shooting template 1, used to join the second video segment and the third video segment.
Additionally, in some embodiments, when the up transition is labeled [ No.1 ] or [ No.2 ], the left transition also carries the same label. When the down transition is labeled [ No.1 ] or [ No.2 ], the right transition also carries the same label. In this way, it is ensured that both landscape video data 1 and portrait video data 1 can correspond to a transition special effect that must be used.
When three transition splicing operations are required in the video data 1, that is, in a scene where the video data 1 is divided into four video clips, a transition special effect marked as [ No.1 ] is used between a first video clip and a second video clip. Between the second video segment and the third video segment, the transition special effect labeled [ No.2 ] is used. Between the third video clip and the fourth video clip, the mobile phone uses the transition special effect randomly determined based on the matching weight, namely, the second transition special effect.
For example, shooting template 1 is "hello summer". As shown in fig. 12, the mobile phone may determine transition special effect 1 (i.e., the black transition) as a transition special effect matching shooting template 1, used to join the first video segment and the second video segment. Then, the mobile phone continues to determine transition special effect 2 (i.e., the enlargement transition) as a transition special effect matching shooting template 1, used to join the second video segment and the third video segment. When video data 1 is a portrait video, the mobile phone further needs to randomly determine a transition special effect, that is, the second transition special effect, for example transition special effect 3, as a transition special effect matching shooting template 1, selected according to the corresponding matching weights from the left, right, rotation, superposition, blur, melting, black, white, enlargement, and reduction transitions. When video data 1 is a landscape video, the mobile phone instead selects, according to the corresponding matching weights, from the up, down, rotation, superposition, blur, melting, black, white, enlargement, and reduction transitions. It is understood that a transition special effect with a higher matching weight is more likely to be determined as the matching transition special effect; of course, a transition special effect with a lower matching weight also has a certain probability of being determined as the matched transition special effect. The mobile phone can then use transition special effect 3 to join the third video segment and the fourth video segment.
Of course, when the shooting template 1 does not mark a transition special effect that must be used, the mobile phone can randomly determine a transition special effect matching the shooting template 1 from the multiple types of transition special effects in combination with the corresponding matching weights, and use it to join the first video clip and the second video clip. A transition special effect matching the shooting template 1 is then determined again in the same random manner to join the second video clip and the third video clip.
In addition, when the video data 1 is divided into more video clips, for example when the video data 1 further includes a fifth video clip, a sixth video clip, and so on, the mobile phone may further randomly determine a fourth transition special effect in combination with the matching weights to join the fifth video clip and the sixth video clip, and so on.
Therefore, in some embodiments, the mobile phone can obtain transition special effects matching the shooting template 1 multiple times, so as to join multiple groups of adjacent video clips.
When more transition splicing is required in the video data 1, except for the transition special effects marked [ No.1 ] and [ No.2 ] that are selected first and second, the remaining transition special effects for splicing can be determined randomly. It can be understood that randomly selecting transition special effects increases the diversity of the processing of the video data 1 by the shooting template 1. In addition, because the mobile phone performs the random selection based on the corresponding matching weights, for example when selecting the second transition special effect and the fourth transition special effect, the degree to which the actually selected transition special effects match the style of the shooting template 1 is improved.
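The selection procedure described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the patented implementation: the candidate pools follow the lists given in this embodiment, but the concrete weight values, the fixed effects, and the function names are hypothetical.

```python
import random

# Candidate pools per screen orientation, as listed in this embodiment.
# The weight values are hypothetical matching weights quantifying how well
# each transition fits the template's music; higher means more likely.
PORTRAIT_POOL = {
    "left": 5, "right": 5, "rotation": 2, "superimposition": 4, "blur": 3,
    "melting": 3, "black": 6, "white": 6, "zoom-in": 4, "zoom-out": 4,
}
LANDSCAPE_POOL = {
    "up": 5, "down": 5, "rotation": 2, "superimposition": 4, "blur": 3,
    "melting": 3, "black": 6, "white": 6, "zoom-in": 4, "zoom-out": 4,
}

def pick_transitions(num_joints, fixed=("black", "zoom-in"), portrait=True, rng=random):
    """Return one transition per joint: the template's marked [No.1] and [No.2]
    effects come first; the rest are weighted-random draws, so transitions with
    higher matching weights are chosen more often but low-weight ones can still win."""
    pool = PORTRAIT_POOL if portrait else LANDSCAPE_POOL
    chosen = list(fixed[:num_joints])          # mandatory marked effects, in order
    while len(chosen) < num_joints:
        names, weights = zip(*pool.items())
        chosen.append(rng.choices(names, weights=weights, k=1)[0])
    return chosen

# Four video clips -> three joints: [No.1], [No.2], then one weighted-random pick.
print(pick_transitions(3, portrait=True))
```

`random.choices` implements exactly the weighted draw described here: every candidate retains a nonzero selection probability proportional to its weight.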
In some embodiments, the shooting template further includes a maximum number of categories, as shown in table 2. The maximum number of categories limits how many types of transition special effects may be used in the same video data 1. For example, if the maximum number of categories of the shooting template 1 is 3, no more than three types of transition special effects may actually be used when processing the video data 1.
For example, consider a scene where the maximum number of categories is 3. As shown in fig. 13, the video data 1 corresponds to five video clips, where the first video clip is joined to the second video clip using transition effect 1, the second video clip is joined to the third video clip using transition effect 2, and the third video clip is joined to the fourth video clip using transition effect 3. If transition effect 1, transition effect 2 and transition effect 3 are transition special effects of different categories, the number of categories used is 3, that is, the maximum number of categories has been reached. In this scenario, the mobile phone needs to randomly determine a transition special effect, for example transition effect 4, from only transition effect 1, transition effect 2 and transition effect 3 according to the corresponding matching weights, as the transition special effect matching the shooting template 1 and used to join the fourth video clip and the fifth video clip. If at least two of transition effect 1, transition effect 2 and transition effect 3 are of the same category, the maximum has not been reached, and the mobile phone continues to randomly determine a transition special effect from the left transition (or up transition), right transition (or down transition), rotation transition, superimposition transition, blur transition, melting transition, black transition, white transition, zoom-in transition and zoom-out transition according to the corresponding matching weights, to be used to join the fourth video clip and the fifth video clip.
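The category-cap rule above can be sketched as: once the number of distinct transition categories already in use reaches the maximum, further weighted-random draws are restricted to the categories already used; otherwise the full pool remains available. The pool contents and weight values below are hypothetical illustrations, not values from the patent.

```python
import random

# Hypothetical matching weights for a portrait candidate pool.
POOL = {
    "left": 5, "right": 5, "rotation": 2, "superimposition": 4, "blur": 3,
    "melting": 3, "black": 6, "white": 6, "zoom-in": 4, "zoom-out": 4,
}

def next_transition(used, max_categories, pool=POOL, rng=random):
    """Pick the next transition by matching weight, restricting the candidates
    to the categories already used once the category cap has been reached."""
    if len(set(used)) >= max_categories:
        # Cap reached: only already-used categories may be drawn again.
        candidates = {name: pool[name] for name in set(used)}
    else:
        # Cap not reached: draw from the full candidate pool.
        candidates = pool
    names, weights = zip(*candidates.items())
    return rng.choices(names, weights=weights, k=1)[0]

# Three distinct categories already used with a cap of 3:
# the transition for the next joint must reuse one of them.
print(next_transition(["black", "zoom-in", "rotation"], max_categories=3))
```

Restricting the draw (rather than failing) keeps the splicing procedure total: a transition is always found for every joint, and the cap only shrinks the candidate set.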
It can be understood that, in the video data 1, the time between the first video clip and the second video clip may also be referred to as a first time point; the time between the third video clip and the fourth video clip may be referred to as a second time point; the time between the second video clip and the third video clip may be referred to as a third time point; and the time between the fourth video clip and the fifth video clip may be referred to as a fourth time point.
In addition, the first time point, the third time point, the second time point and the fourth time point are adjacent in sequence. In the video data 1, each time point may correspond to at least one video frame. For example, the video frames corresponding to the first time point may include the last frame of the first video clip and the first frame of the second video clip. For another example, the video frames corresponding to the first time point may further include the last several frames of the first video clip and the first several frames of the second video clip. The same applies to the video frames corresponding to the other time points, and details are not described herein again.
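As a simple illustration of these division points, each time point is the cumulative boundary between consecutive clips. The clip durations below are hypothetical, used only to show the computation.

```python
def time_points(clip_durations):
    """Return the division time points of a video: the cumulative boundaries
    between consecutive clips, in seconds (one fewer point than clips)."""
    points, t = [], 0.0
    for duration in clip_durations[:-1]:
        t += duration
        points.append(t)
    return points

# Four clips with hypothetical durations produce three division points,
# i.e. the times at which transition special effects are superimposed.
print(time_points([3.0, 2.5, 4.0, 1.5]))  # → [3.0, 5.5, 9.5]
```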
An embodiment of the present application further provides an electronic device, which may include a memory and one or more processors. The memory is coupled to the processors and is used to store computer program code, where the computer program code includes computer instructions. When the processors execute the computer instructions, the electronic device can be caused to perform the steps performed by the mobile phone in the above embodiments. Of course, the electronic device includes, but is not limited to, the above memory and one or more processors. For example, for the structure of the electronic device, reference may be made to the structure of the mobile phone shown in fig. 1.
An embodiment of the present application further provides a chip system, which may be applied to the electronic device in the foregoing embodiments. As shown in fig. 14, the chip system includes at least one processor 2201 and at least one interface circuit 2202. The processor 2201 may be the processor in the electronic device described above. The processor 2201 and the interface circuit 2202 may be interconnected by wires. The processor 2201 may receive and execute, via the interface circuit 2202, computer instructions from the memory of the electronic device described above. When the computer instructions are executed by the processor 2201, the electronic device can be caused to perform the steps performed by the mobile phone in the above embodiments. Of course, the chip system may further include other discrete devices, which is not specifically limited in this embodiment of the present application.
It is clear to those skilled in the art from the foregoing description of the embodiments that, for convenience and brevity of description, the division into the above functional modules is merely used as an example. In practical applications, the above functions may be allocated to different functional modules as needed; that is, the internal structure of the device may be divided into different functional modules to perform all or part of the functions described above. For the specific working processes of the system, apparatus and units described above, reference may be made to the corresponding processes in the foregoing method embodiments, and details are not described herein again.
Each functional unit in the embodiments of the present application may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware, or may be implemented in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as a stand-alone product, it may be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of the embodiments of the present application essentially, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes media that can store program code, such as a flash memory, a removable hard disk, a read-only memory, a random access memory, a magnetic disk, or an optical disc.
The above description is merely specific implementations of the embodiments of the present application, but the protection scope of the embodiments of the present application is not limited thereto. Any change or substitution within the technical scope disclosed in the embodiments of the present application shall fall within the protection scope of the embodiments of the present application. Therefore, the protection scope of the embodiments of the present application shall be subject to the protection scope of the claims.

Claims (14)

1. A method for determining a transition special effect is applied to an electronic device, and comprises the following steps:
the electronic equipment displays a first interface, wherein the first interface comprises a first identifier indicating a first shooting template; the first shooting template comprises first music, and the first music corresponds to a first transition special effect;
the electronic equipment receives a selection operation of a user on the first identifier;
the electronic equipment responds to the selection operation and displays a second interface; the second interface is a recording preview interface; the second interface comprises a first control for indicating the start of shooting;
the electronic equipment receives a first operation of a user on the first control;
the electronic equipment responds to the first operation and starts to record first video data;
after the first video data is recorded, the electronic equipment displays a third interface; the third interface is used for displaying second video data; the second video data includes: a video frame of the first video data, the first music, the first transition special effect, and a second transition special effect; the first transition special effect is superimposed on a video frame corresponding to a first time point in the first video data; the second transition special effect is superimposed on a video frame corresponding to a second time point in the first video data, and the second time point is located after the first time point; the second transition special effect is one of a plurality of preset transition special effects.
2. The method of claim 1, wherein before the electronic device displays the third interface, the method further comprises:
the electronic equipment determines the second transition special effect from the plurality of preset transition special effects based on matching weights;
wherein each preset transition special effect corresponds to a matching weight, and the matching weight is a quantization ratio parameter of the degree of adaptation between the first music and the preset transition special effect; the plurality of preset transition special effects include the first transition special effect.
3. The method of claim 1, wherein the second video data further comprises a third transition special effect corresponding to the first music, wherein the third transition special effect is superimposed on a video frame corresponding to a third time point in the first video data, and wherein the third time point is located between the first time point and the second time point.
4. The method of claim 3, wherein the second video data further comprises a fourth transition special effect, wherein the fourth transition special effect is superimposed on a video frame corresponding to a fourth time point in the first video data, and wherein the fourth time point is located after the second time point; the fourth transition special effect is one of the multiple preset transition special effects, or one of the first transition special effect, the second transition special effect and the third transition special effect.
5. The method of claim 4, wherein the first time point, the second time point, the third time point, and the fourth time point are video frame division points in the first video data.
6. The method of claim 4, wherein the first music further corresponds to a maximum number of transition categories; and before the electronic device displays the third interface, the method further comprises:
the electronic equipment determines that the number of categories among the first transition special effect, the second transition special effect and the third transition special effect is equal to the maximum number of transition categories;
the electronic equipment determines the fourth transition special effect from the first transition special effect, the second transition special effect and the third transition special effect based on matching weights;
wherein each preset transition special effect corresponds to a matching weight, and the matching weight is a quantization ratio parameter of the degree of adaptation between the first music and the preset transition special effect; the plurality of preset transition special effects include the first transition special effect and the third transition special effect.
7. The method of claim 4, wherein the first music further corresponds to a maximum number of transition categories; and before the electronic device displays the third interface, the method further comprises:
the electronic equipment determines that the number of categories among the first transition special effect, the second transition special effect and the third transition special effect is less than the maximum number of transition categories;
the electronic equipment determines the fourth transition special effect from the plurality of preset transition special effects based on matching weights;
wherein each preset transition special effect corresponds to a matching weight, and the matching weight is a quantization ratio parameter of the degree of adaptation between the first music and the preset transition special effect; the plurality of preset transition special effects include the first transition special effect and the third transition special effect.
8. The method of claim 7, wherein at least two of the first transition special effect, the second transition special effect and the third transition special effect belong to the same category.
9. The method according to any one of claims 1 to 8, wherein when the first video data is a video shot in landscape orientation, the plurality of preset transition special effects comprise: a rotation transition, a superimposition transition, a blur transition, a melting transition, a black transition, a white transition, a zoom-in transition, a zoom-out transition, an up transition, and a down transition;
when the first video data is a video shot in portrait orientation, the plurality of preset transition special effects comprise: a left transition, a right transition, a rotation transition, a superimposition transition, a blur transition, a melting transition, a black transition, a white transition, a zoom-in transition, and a zoom-out transition.
10. A method for determining a transition special effect is applied to an electronic device, and comprises the following steps:
the electronic device displays a fourth interface, the fourth interface comprising a thumbnail of third video data;
the electronic equipment receives a selection operation by the user on the thumbnail of the third video data;
the electronic equipment responds to the selection operation and displays a fifth interface; wherein the fifth interface is a video editing interface of the third video data, and the fifth interface comprises a one-key blockbuster control;
the electronic equipment receives a second operation by the user on the one-key blockbuster control;
the electronic equipment responds to the second operation and determines a first shooting template matched with the third video data; the first shooting template comprises first music, and the first music corresponds to a first transition special effect;
after determining the first shooting template, the electronic equipment displays a sixth interface; the sixth interface is used for displaying fourth video data; the fourth video data includes: a video frame of the third video data, the first music, the first transition special effect, and a second transition special effect; the first transition special effect is superimposed on a video frame corresponding to a first time point in the third video data; the second transition special effect is superimposed on a video frame corresponding to a second time point in the third video data, and the second time point is located after the first time point; the second transition special effect is one of a plurality of preset transition special effects.
11. The method of claim 10, wherein before the electronic device displays the sixth interface, the method further comprises:
the electronic equipment determines the second transition special effect from the plurality of preset transition special effects based on matching weights;
wherein each preset transition special effect corresponds to a matching weight, and the matching weight is a quantization ratio parameter of the degree of adaptation between the first music and the preset transition special effect; the plurality of preset transition special effects include the first transition special effect.
12. The method of claim 10, wherein the first shooting template corresponds to a first style; and the determining, by the electronic device, of the first shooting template matched with the third video data comprises:
the electronic equipment determines, by using a preset artificial intelligence model, that the third video data matches the first style, and the electronic device determines that the first shooting template matches the third video data;
or, the electronic device randomly determines the first shooting template from a plurality of preset shooting templates.
13. An electronic device, characterized in that the electronic device comprises one or more processors and a memory; the memory is coupled to the one or more processors and is configured to store computer program code, the computer program code comprising computer instructions which, when executed by the one or more processors, cause the one or more processors to perform the method of any one of claims 1-12.
14. A computer storage medium comprising computer instructions that, when executed on an electronic device, cause the electronic device to perform the method of any of claims 1-12.
CN202210057018.XA 2021-06-16 2022-01-18 Transition special effect determination method and electronic equipment Pending CN115484425A (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN2021106767093 2021-06-16
CN202110676709 2021-06-16
CN2021114346109 2021-11-29
CN202111434610 2021-11-29

Publications (1)

Publication Number Publication Date
CN115484425A true CN115484425A (en) 2022-12-16

Family

ID=84420802

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210057018.XA Pending CN115484425A (en) 2021-06-16 2022-01-18 Transition special effect determination method and electronic equipment

Country Status (1)

Country Link
CN (1) CN115484425A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination