WO2022262537A1 - Transition processing method for video data and electronic device - Google Patents

Transition processing method for video data and electronic device

Info

Publication number
WO2022262537A1
Authority
WO
WIPO (PCT)
Prior art keywords
interface
electronic device
video
video data
frame
Prior art date
Application number
PCT/CN2022/094793
Other languages
English (en)
French (fr)
Inventor
牛思月
Original Assignee
Honor Device Co., Ltd. (荣耀终端有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Honor Device Co., Ltd. (荣耀终端有限公司)
Priority to US18/257,018 priority Critical patent/US20240106967A1/en
Priority to EP22824018.0A priority patent/EP4240011A4/en
Publication of WO2022262537A1 publication Critical patent/WO2022262537A1/zh

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/76Television signal recording
    • H04N5/91Television signal processing therefor
    • H04N5/915Television signal processing therefor for field- or frame-skip recording or reproducing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038Image mosaicing, e.g. composing plane images from plane sub-images
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11BINFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/02Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
    • G11B27/031Electronic editing of digitised analogue information signals, e.g. audio or video signals
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11BINFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/02Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
    • G11B27/031Electronic editing of digitised analogue information signals, e.g. audio or video signals
    • G11B27/036Insert-editing
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11BINFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/10Indexing; Addressing; Timing or synchronising; Measuring tape travel
    • G11B27/34Indicating arrangements 
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44016Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving splicing one content stream with another content stream, e.g. for substituting a video clip
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/845Structuring of content, e.g. decomposing content into time segments
    • H04N21/8456Structuring of content, e.g. decomposing content into time segments by decomposing the content in the time domain, e.g. in time segments
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/62Control of parameters via user interfaces
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/63Control of cameras or camera modules by using electronic viewfinders
    • H04N23/631Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/667Camera operation mode switching, e.g. between still and video, sport and normal or high- and low-resolution modes

Definitions

  • The present application relates to the field of terminal technologies, and in particular, to a video data transition processing method and an electronic device.
  • Electronic devices such as mobile phones and tablet computers are generally equipped with multiple cameras, such as front cameras, rear cameras, and wide-angle cameras. Multiple cameras make it convenient for users to shoot video works with such devices.
  • Embodiments of the present application provide a video data transition processing method and an electronic device, which are used to improve the efficiency of human-computer interaction in video editing.
  • An embodiment of the present application provides a video data transition processing method applied to an electronic device. The method includes: the electronic device displays a first interface, where the first interface includes a first thumbnail of first video data; the first video data includes a first transition special effect; the first transition special effect is superimposed on a plurality of consecutive first video frames in the first video data; the electronic device receives a user's first operation on the first thumbnail; the electronic device displays a second interface in response to the first operation; the second interface is a video editing interface for the first video data; the second interface includes a one-key blockbuster control; after receiving the user's second operation on the one-key blockbuster control, the electronic device displays a third interface; the third interface is used to display second video data; the second video data includes the video frames of the first video data, multi-frame replacement frames, first music, and a second transition effect corresponding to the first music; the second transition effect is superimposed on the multi-frame replacement frames; the replacement frame
  • the electronic device can automatically edit and process the first video data to obtain the second video data in response to the user's operation on the one-key blockbuster control.
  • the first music is configured in the second video data.
  • the electronic device can replace the first transition effect with a second transition effect that matches the first music.
  • The transition effects appearing in the second video data can be matched with the first music, which improves the degree of adaptation between music and content in the second video data, improves user satisfaction with the second video data, and reduces the possibility of rework.
  • the entire operation of triggering the creation of the second video data is simple, which effectively improves the efficiency of human-computer interaction for creating video data.
  • Before the electronic device displays the first interface, the method further includes: the electronic device displays a fourth interface; the fourth interface is a viewfinder preview interface provided by a camera application; the fourth interface includes a first control indicating to start shooting video; the electronic device receives a third operation of the first control by the user; the electronic device displays a fifth interface in response to the third operation, and starts recording the first video data; the fifth interface is a video recording interface in a first lens mode and includes a second control indicating to switch lens modes; when the first video data is recorded to a first time point, the electronic device displays a sixth interface in response to the user's fourth operation on the second control, and determines that the video frame corresponding to the first time point is the first video frame; the sixth interface is a video recording interface in a second lens mode and includes a third control indicating to stop shooting; the electronic device receives a fifth operation of the third control by the user; the electronic device displaying the first interface includes: the electronic device responding
  • the first video data may be a video captured by the electronic device in a normal mode.
  • When the electronic device receives an operation indicating to switch the lens mode at the first time point, it can not only switch the lens mode directly but also determine the video frame corresponding to the first time point in the first video data as the first video frame. In this way, when editing and processing the first video data, the video frames affected by the lens switching can be processed, which improves the viewability of the second video data and the human-computer interaction efficiency of editing video.
  • The method further includes: the electronic device superimposes the first transition special effect on the first video frame.
  • In this way, the video clips collected before and after the lens mode is switched are connected by the first transition special effect, which alleviates the incoherence in video content caused by switching the lens mode and improves the quality of the resulting video.
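As an illustration of how a transition special effect can connect the clip recorded before a lens-mode switch with the clip recorded after it, the following is a minimal sketch, not the patent's actual implementation: a dissolve-style crossfade that blends the tail frames of one clip into the head frames of the next. Representing frames as NumPy `uint8` arrays is an assumption made for illustration.

```python
import numpy as np

def crossfade(tail_frames, head_frames):
    """Blend the last frames of one clip into the first frames of the
    next clip, producing a simple dissolve-style transition span."""
    assert len(tail_frames) == len(head_frames)
    n = len(tail_frames)
    out = []
    for i, (a, b) in enumerate(zip(tail_frames, head_frames)):
        alpha = (i + 1) / (n + 1)  # ramps from ~0 to ~1 across the span
        mixed = (1 - alpha) * a.astype(np.float32) + alpha * b.astype(np.float32)
        out.append(mixed.astype(np.uint8))
    return out
```

A real transition effect would typically be rendered by the GPU and may be a wipe, blur, or zoom rather than a dissolve; the crossfade merely shows the frame-level blending idea.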
  • Before the electronic device displays the first interface, the method further includes: the electronic device displays a main interface; the main interface includes an icon of a gallery application; the electronic device receives a sixth operation of the user on the icon of the gallery application; the electronic device displaying the first interface includes: the electronic device responding to the sixth operation by displaying the first interface, where the first interface is an application interface provided by the gallery application.
  • the first video data may also be video data already stored in the gallery, that is, it may be a video shot by other devices, or a video that has been created once.
  • the user can process a variety of video materials through the electronic device, and the operations required for processing are simple, and the efficiency of human-computer interaction in video creation is improved.
  • Before the electronic device displays the third interface, the method further includes: in response to the second operation, the electronic device determines a first effect template from a plurality of preconfigured effect templates; the first effect template includes the first music; the electronic device deletes the first video frame in the first video data; the electronic device freezes a second video frame in the first video data to obtain the replacement frame used to replace the first video frame; the second video frame is the video frame immediately preceding the first video frame, or the video frame immediately following the first video frame; the electronic device superimposes the second transition effect on the replacement frame.
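The delete-and-freeze step described above can be sketched as follows. This is an illustrative outline only: frames are modeled as an ordinary Python list, and `start`/`end` are hypothetical indices marking the span originally covered by the first transition special effect.

```python
def replace_with_freeze(frames, start, end, use_previous=True):
    """Replace frames[start:end] (the span affected by the original
    transition) with copies of an adjacent frame, i.e. a freeze frame.

    use_previous selects the frame immediately preceding the span;
    otherwise the frame immediately following it is used.
    """
    anchor = frames[start - 1] if use_previous else frames[end]
    replacement = [anchor] * (end - start)
    return frames[:start] + replacement + frames[end:]
```

The total frame count is preserved, so the new transition effect can be superimposed on the replacement span without changing the video's timing.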
  • The first effect template corresponds to a first style. Determining the first effect template from the plurality of preconfigured effect templates includes: the electronic device uses a preset artificial intelligence model to determine that the first video data matches the first style, and determines the first effect template from the effect templates belonging to the first style; or, the electronic device randomly determines the first effect template from the plurality of preconfigured effect templates.
  • the method before the electronic device displays the third interface, the method further includes: determining, by the electronic device, the second transition special effect corresponding to the first music.
  • The electronic device determining the second transition effect corresponding to the first music includes: the electronic device determines, from a plurality of preset transition effects, the second transition effect that has an associated identifier with the first music.
  • The electronic device determining the second transition effect corresponding to the first music includes: the electronic device determines the second transition effect from the plurality of preset transition effects based on matching weights; each preset transition effect corresponds to a matching weight, and the matching weight is a quantized ratio parameter of the degree of adaptation between the first music and that preset transition effect.
  • In this way, the second transition effect is related to the first music while its type remains random, so that the transition effect is adapted to the first music while the diversity of transition effects is increased, improving the viewability of the second video data.
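The matching-weight selection can be sketched as a weighted random draw. The effect names and weight values below are hypothetical, and the patent does not specify the sampling procedure; `random.choices` stands in as one plausible realization that keeps the choice random while favoring effects better adapted to the music.

```python
import random

def pick_transition(weights, rng=random):
    """Randomly pick one preset transition effect; the matching weight
    quantifies how well each effect fits the configured music."""
    effects = list(weights)
    return rng.choices(effects, weights=[weights[e] for e in effects], k=1)[0]
```

With weights such as `{"dissolve": 0.9, "wipe": 0.1}`, the dissolve is chosen most of the time but the wipe still appears occasionally, which matches the stated goal of adapted-yet-diverse transitions.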
  • The second video data further includes a third transition effect; the third transition effect is added to the video frame corresponding to a second time point in the first video data;
  • the third transition effect is one of multiple preset transition effects; the multiple preset transition effects include the second transition effect.
  • An electronic device provided by an embodiment of the present application includes one or more processors and a memory; the memory is coupled to the processors and is used to store computer program code including computer instructions. When the one or more processors execute the computer instructions, the one or more processors are configured to: display a first interface, where the first interface includes a first thumbnail of the first video data; the first video data includes a first transition effect; the first transition effect is superimposed on a plurality of consecutive first video frames in the first video data; receive a user's first operation on the first thumbnail; in response to the first operation, display a second interface; the second interface is a video editing interface for the first video data and includes a one-key blockbuster control; after receiving the user's second operation on the one-key blockbuster control, display a third interface; the third interface is used to display the second video data; the second video data includes video frames of the first video data, multi-frame replacement frames, first music, and a second transition effect corresponding to the first music
  • The one or more processors are further configured to: display a fourth interface; the fourth interface is a viewfinder preview interface provided by the camera application; the fourth interface includes a first control indicating to start shooting video; receive a third operation of the first control by the user; in response to the third operation, display a fifth interface and start recording the first video data; the fifth interface is a video recording interface in the first lens mode and includes a second control indicating to switch the lens mode; when the first video data is recorded to the first time point, in response to the user's fourth operation on the second control, display a sixth interface and determine that the video frame corresponding to the first time point is the first video frame; the sixth interface is a video recording interface in the second lens mode and includes a third control indicating to stop shooting; receive a fifth user operation on the third control;
  • the one or more processors are further configured to: display the first interface in response to the fifth operation; the first interface is also a framing preview interface provided by the camera application.
  • The one or more processors are further configured to: superimpose the first transition effect on the first video frame.
  • The one or more processors are further configured to: display a main interface; the main interface includes an icon of a gallery application; receive a sixth operation of the user on the icon of the gallery application; displaying the first interface includes: displaying the first interface in response to the sixth operation, where the first interface is an application interface provided by the gallery application.
  • The one or more processors are further configured to: in response to the second operation, determine the first effect template from a plurality of preconfigured effect templates; the first effect template includes the first music; delete the first video frame in the first video data; freeze the second video frame in the first video data to obtain the replacement frame used to replace the first video frame; the second video frame is the video frame immediately preceding the first video frame, or the video frame immediately following the first video frame; superimpose the second transition effect on the replacement frame.
  • The one or more processors are further configured to: use a preset artificial intelligence model to determine that the first video data matches the first style, and determine the first effect template from the effect templates belonging to the first style; or, randomly determine the first effect template from a plurality of preset effect templates.
  • the one or more processors are further configured to: determine the second transition effect corresponding to the first music.
  • The one or more processors are further configured to: determine, from a plurality of preset transition effects, the second transition effect that has an associated identifier with the first music.
  • The one or more processors are further configured to: determine the second transition effect from the plurality of preset transition effects based on matching weights; each preset transition effect corresponds to a matching weight, and the matching weight is a quantized ratio parameter of the degree of adaptation between the first music and that preset transition effect.
  • The second video data further includes a third transition effect; the third transition effect is added to the video frame corresponding to the second time point in the first video data;
  • the third transition effect is one of multiple preset transition effects; the multiple preset transition effects include the second transition effect.
  • A computer storage medium provided by an embodiment of the present application includes computer instructions. When the computer instructions run on the electronic device, the electronic device is caused to execute the method described in the above first aspect and its possible embodiments.
  • the present application provides a computer program product, which, when the computer program product runs on the above-mentioned electronic device, causes the electronic device to execute the method described in the above-mentioned first aspect and its possible embodiments.
  • FIG. 1 is a schematic structural diagram of an electronic device provided in an embodiment of the present application.
  • FIG. 2 is the first example diagram of the display interface provided by the embodiment of the present application.
  • FIG. 3 is the second example diagram of the display interface provided by the embodiment of the present application.
  • FIG. 4 conceptually shows an example of the impact on video data 1 after the lens mode is switched from the front-to-back mode to the picture-in-picture mode.
  • FIG. 5 conceptually shows an example diagram of processing video data 1 after the lens mode is switched from the front-to-back mode to the picture-in-picture mode.
  • FIG. 6 is the third example diagram of the display interface provided by the embodiment of the present application.
  • FIG. 7A conceptually shows the first example diagram of processing video data 1 in the scene where the lens mode is switched from front-to-back mode to back-to-back mode.
  • FIG. 7B conceptually shows the second example diagram of processing video data 1 in the scene where the lens mode is switched from front-to-back mode to back-to-back mode.
  • FIG. 8 is the fourth example diagram of the display interface provided by the embodiment of the present application.
  • FIG. 9 is the fifth example diagram of the display interface provided by the embodiment of the present application.
  • FIG. 10 is the sixth example diagram of the display interface provided by the embodiment of the present application.
  • FIG. 11 is the seventh example diagram of the display interface provided by the embodiment of the present application.
  • FIG. 12 is a flow chart of the steps of the video data transition processing method provided by the embodiment of the present application.
  • FIG. 13 conceptually shows an example diagram of replacing the original transition effect in video data 1.
  • FIG. 14 is the eighth example diagram of the display interface provided by the embodiment of the present application.
  • FIG. 15 is a schematic diagram of a chip system provided by an embodiment of the present application.
  • The terms "first" and "second" are used for descriptive purposes only, and cannot be understood as indicating or implying relative importance or implicitly specifying the quantity of the indicated technical features. Thus, a feature defined with "first" or "second" may explicitly or implicitly include one or more of such features. In the description of the embodiments, unless otherwise specified, "plurality" means two or more.
  • An embodiment of the present application provides a video data transition processing method, which can be applied to an electronic device with multiple cameras.
  • the electronic device can automatically process video data in response to user operations, such as adding transition effects, configuring video music, and the like.
  • it reduces the operational complexity of editing video data and improves the efficiency of human-computer interaction in video creation.
  • The electronic device in the embodiments of the present application may be a mobile phone, a tablet computer, a smart watch, a desktop computer, a laptop computer, a handheld computer, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a cellular phone, a personal digital assistant (PDA), an augmented reality (AR)/virtual reality (VR) device, or the like; the embodiments of the present application do not particularly limit the specific form of the electronic device.
  • FIG. 1 is a schematic structural diagram of an electronic device 100 provided in an embodiment of the present application.
  • the electronic device 100 may include: a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (universal serial bus, USB) interface 130, a charging management module 140, a power management module 141, a battery 142, antenna 1, antenna 2, mobile communication module 150, wireless communication module 160, audio module 170, speaker 170A, receiver 170B, microphone 170C, earphone interface 170D, sensor module 180, button 190, motor 191, indicator 192, camera 193, a display screen 194, and a subscriber identification module (subscriber identification module, SIM) card interface 195, etc.
  • the above-mentioned sensor module 180 may include sensors such as pressure sensor, gyroscope sensor, air pressure sensor, magnetic sensor, acceleration sensor, distance sensor, proximity light sensor, fingerprint sensor, temperature sensor, touch sensor, ambient light sensor and bone conduction sensor.
  • the structure shown in this embodiment does not constitute a specific limitation on the electronic device 100 .
  • the electronic device 100 may include more or fewer components than shown, or combine certain components, or separate certain components, or arrange different components.
  • the illustrated components can be realized in hardware, software or a combination of software and hardware.
  • The processor 110 may include one or more processing units. For example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc. Different processing units may be independent devices, or may be integrated in one or more processors.
  • the controller may be the nerve center and command center of the electronic device 100 .
  • the controller can generate an operation control signal according to the instruction opcode and timing signal, and complete the control of fetching and executing the instruction.
  • a memory may also be provided in the processor 110 for storing instructions and data.
  • the memory in processor 110 is a cache memory.
  • The memory may hold instructions or data that the processor 110 has just used or used cyclically. If the processor 110 needs to use the instructions or data again, it can call them directly from the memory. This avoids repeated accesses and reduces the waiting time of the processor 110, thereby improving system efficiency.
  • processor 110 may include one or more interfaces.
  • The interface may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, etc.
  • the interface connection relationship between the modules shown in this embodiment is only for schematic illustration, and does not constitute a structural limitation of the electronic device 100 .
  • the electronic device 100 may also adopt different interface connection methods in the above embodiments, or a combination of multiple interface connection methods.
  • the electronic device 100 realizes the display function through the GPU, the display screen 194 , and the application processor.
  • the GPU is a microprocessor for image processing, and is connected to the display screen 194 and the application processor. GPUs are used to perform mathematical and geometric calculations for graphics rendering.
  • Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
  • the display screen 194 is used to display images, videos and the like.
  • the display screen 194 includes a display panel.
  • The display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a mini-LED, a micro-LED, a micro-OLED, quantum dot light-emitting diodes (QLED), or the like.
  • the electronic device 100 can realize the shooting function through the ISP, the camera 193 , the video codec, the GPU, the display screen 194 and the application processor.
  • the ISP is used for processing the data fed back by the camera 193 .
  • Light is transmitted through the lens to the photosensitive element of the camera, which converts the optical signal into an electrical signal and transmits it to the ISP for processing, where it is converted into an image visible to the naked eye.
  • the ISP can also perform algorithm optimization on image noise, brightness, and skin color.
  • the ISP can also optimize parameters of the shooting scene, such as exposure and color temperature.
  • the ISP may be located in the camera 193 .
  • Camera 193 is used to capture still images or video.
  • the object generates an optical image through the lens and projects it to the photosensitive element.
  • the photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
  • the photosensitive element converts the light signal into an electrical signal, and then transmits the electrical signal to the ISP to convert it into a digital image signal.
  • the ISP outputs the digital image signal to the DSP for processing.
  • the DSP converts digital image signals into image signals in standard formats such as RGB and YUV.
  • the electronic device 100 may include N cameras 193 , where N is a positive integer greater than 1.
  • the aforementioned N cameras 193 may include: one or more front cameras and one or more rear cameras.
  • the mobile phone includes at least one front camera.
  • the front camera is configured on the front side of the mobile phone, for example, the front camera 201 shown in (a) of FIG. 2 .
  • the phone includes at least one rear camera.
  • the rear camera is arranged on the back side of the mobile phone. This way, the front and rear cameras face different directions.
  • the electronic device may enable at least one of the N cameras 193 to take pictures and generate corresponding photos or videos.
  • a front camera of the electronic device 100 is used alone for shooting.
  • a single rear camera of the electronic device 100 is used for shooting.
  • two front-facing cameras are enabled for shooting at the same time.
  • a front-facing camera and a rear-facing camera are enabled for shooting at the same time.
  • enabling a single camera 193 for shooting may be referred to as enabling a single-camera mode, such as a selfie mode (also known as single-front mode) and a rear-camera mode (also known as single-rear mode).
  • enabling multiple cameras 193 to shoot at the same time can be collectively referred to as enabling a multi-camera mode, such as front-to-front mode, front-to-back mode, back-to-back mode, and picture-in-picture mode.
  • the electronic device may render and combine image frames collected by the front-facing camera and the rear-facing camera.
  • the above rendering merging may be splicing image frames collected by different cameras.
  • the image frames collected by different cameras can be spliced up and down.
  • the image frames collected by different cameras can be spliced left and right.
  • the image frames collected by one camera may be embedded in the image frames collected by another camera. The merged frame is then encoded to generate a photo.
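The splicing variants described above (up-down, left-right, and embedded picture-in-picture) can be sketched in a few lines of Python. This is an illustrative sketch, not the patent's implementation: frames are simplified to 2D lists of pixel values, and all function names are ours.

```python
# Illustrative sketch of the three rendering-merge layouts described above.
# Frames are modeled as 2D lists of pixel values.

def splice_vertical(top, bottom):
    """Splice two frames up and down: rows of `bottom` follow rows of `top`."""
    return top + bottom

def splice_horizontal(left, right):
    """Splice two frames left and right: rows are concatenated pairwise."""
    return [l + r for l, r in zip(left, right)]

def picture_in_picture(base, inset, row0, col0):
    """Embed `inset` into `base`, with its top-left corner at (row0, col0)."""
    out = [row[:] for row in base]  # copy so the base frame is untouched
    for i, row in enumerate(inset):
        for j, px in enumerate(row):
            out[row0 + i][col0 + j] = px
    return out

rear = [[0] * 4 for _ in range(4)]   # hypothetical 4x4 rear-camera frame
front = [[1] * 2 for _ in range(2)]  # hypothetical 2x2 front-camera frame

stacked = splice_vertical(rear, [[1] * 4 for _ in range(4)])
pip = picture_in_picture(rear, front, 1, 1)
```

A real implementation would of course operate on decoded image buffers on the GPU, but the layout logic is the same.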
  • the front camera collects a video stream and caches it.
  • the rear camera captures a video stream and caches it.
  • the electronic device 100 renders and merges the two buffered video streams frame by frame, that is, it renders and merges video frames from the two streams whose acquisition time points are the same or closely matched. The result is then encoded to generate a video file.
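The frame-by-frame merge by matching acquisition time points could be sketched as follows. This is a hypothetical illustration (the function name, timestamps, and nearest-match policy are ours): each front-camera frame is paired with the rear-camera frame whose capture timestamp is closest, and the merged pair is handed to the encoder.

```python
# Hypothetical sketch of merging two cached video streams frame by frame,
# matching frames by their acquisition timestamps.

def merge_streams(front_frames, rear_frames, merge):
    """front_frames / rear_frames: lists of (timestamp_ms, frame) tuples."""
    merged = []
    for ts, f_frame in front_frames:
        # pick the rear frame whose acquisition time is nearest
        _, r_frame = min(rear_frames, key=lambda r: abs(r[0] - ts))
        merged.append((ts, merge(f_frame, r_frame)))
    return merged

front = [(0, "F0"), (33, "F1"), (66, "F2")]   # ~30 fps front stream
rear = [(1, "R0"), (34, "R1"), (65, "R2")]    # rear stream, slightly offset
out = merge_streams(front, rear, lambda a, b: a + "+" + b)
```

In practice `merge` would be one of the splicing layouts (up-down, left-right, or picture-in-picture) rather than string concatenation.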
  • Digital signal processors are used to process digital signals. In addition to digital image signals, they can also process other digital signals. For example, when the electronic device 100 selects a frequency point, the digital signal processor is used to perform Fourier transform on the energy of the frequency point.
  • Video codecs are used to compress or decompress digital video.
  • the electronic device 100 may support one or more video codecs.
  • the electronic device 100 can play or record videos in various encoding formats, for example: moving picture experts group (moving picture experts group, MPEG) 1, MPEG2, MPEG3, MPEG4 and so on.
  • the NPU is a neural-network (NN) computing processor.
  • Applications such as intelligent cognition of the electronic device 100 can be realized through the NPU, such as image recognition, face recognition, speech recognition, text understanding, and the like.
  • the audio module 170 is used to convert digital audio information into analog audio signal output, and is also used to convert analog audio input into digital audio signal.
  • the audio module 170 may also be used to encode and decode audio signals.
  • the audio module 170 may be set in the processor 110 , or some functional modules of the audio module 170 may be set in the processor 110 .
  • Speaker 170A, also referred to as a "loudspeaker", is used to convert audio electrical signals into sound signals. In this way, the electronic device 100 can play audio data, such as video music and the like.
  • the pressure sensor is used to sense the pressure signal and convert the pressure signal into an electrical signal.
  • a pressure sensor may be located on the display screen 194 .
  • the gyro sensor can be used to determine the motion posture of the electronic device 100 . When the electronic device 100 is stationary, the magnitude and direction of gravity can be detected. It can also be used to recognize the posture of the electronic device 100, and be applied to applications such as horizontal and vertical screen switching.
  • Touch sensor, also known as a "touch panel".
  • the touch sensor can be arranged on the display screen 194, and the touch sensor 180K and the display screen 194 form a touchscreen, also called a "touch screen".
  • the touch sensor is used to detect a touch operation on or near it. The touch sensor can pass the detected touch operation to the application processor to determine the type of touch event.
  • the shot video can be edited by operating the electronic device, such as configuring video music, adding animation special effects, adding transition special effects, and the like.
  • the video after secondary creation will be more vivid and rich, and it also conforms to the user's creative intention.
  • adding transition effects can not only make the transition of video content more natural, but also enrich the content presented in the video.
  • an embodiment of the present application provides a method for processing transitions of video data.
  • the method in the embodiment of the present application will be described below by taking the above-mentioned electronic device 100 as a mobile phone as an example.
  • the mobile phone includes a main interface.
  • the above-mentioned main interface is also called desktop 202 .
  • the main interface may be a user interface displayed after the mobile phone is unlocked.
  • the aforementioned main interface may include icons of installed application programs (Application, APP), such as the icon 203 of the camera APP.
  • the mobile phone can receive the user's operation on the main interface, and start the APP indicated by the operation.
  • the mobile phone may receive a user's operation on the icon 203, such as a click operation, and start the camera APP in response to the operation.
  • an application interface provided by the camera APP may be displayed.
  • a viewfinder interface for performing a shooting function is displayed, that is, an interface 204 as shown in (b) of FIG. 2 .
  • the user can switch between different functional modes of the camera APP in the interface 204, such as a portrait functional mode, a photographing functional mode, a video recording functional mode, and a multi-lens video recording functional mode. That is, the mobile phone can receive the user's operation 1 in the interface 204, and the operation 1 is used to instruct the camera APP to switch between different function modes.
  • the interface 204 includes controls corresponding to multiple functional modes of the camera APP, such as a portrait control, a camera control, a video control, a multi-mirror video control, and the like.
  • the photographing control is in a selected state, which is used to prompt the user that the current viewfinder interface is used to execute the photographing function.
  • the mobile phone can receive the user's operation on any one of the portrait control, video control, and multi-mirror video control, determine the switched function mode based on the operation, and display the viewfinder interface before executing the function mode. For example, when the mobile phone receives a user's operation on the portrait control, such as a click operation, the viewfinder interface before performing the portrait shooting function may be displayed. At the same time, the portrait control is in the selected state. When the mobile phone receives the user's operation on the recording control, for example, a click operation, the viewfinder interface before performing the recording function can be displayed. At the same time, the recording control is in the selected state.
  • the viewfinder interface before performing the multi-mirror video function can be displayed, that is, the interface 205 shown in (c) of FIG. 2; at the same time, the multi-mirror video control is in the selected state.
  • the above-mentioned interface 205 is an example of the fourth interface, and in some other embodiments, the viewfinder interface before performing the conventional video recording function may also be called the fourth interface.
  • both the recording function and the multi-camera recording function can record video data, and the difference between them is that when the recording starts, the enabled lens modes are different.
  • under the video recording function, the mobile phone can respond to the user's operation and enable a single-camera mode, such as single-front mode or single-rear mode, to shoot video.
  • under the multi-camera video recording function, the mobile phone can respond to the user's operation and enable a multi-camera lens mode, such as front-to-back mode, back-to-back mode, or picture-in-picture mode, to shoot videos.
  • the method provided by the embodiment of the present application is not only applicable to the video data captured under the video recording function, but also applicable to the video data captured under the multi-mirror video recording function, and the realization principle is the same. In the subsequent embodiments, the method provided in the embodiments of the present application is mainly introduced by taking the multi-camera video recording function as an example.
  • the viewfinder interface (that is, interface 205 ) displayed by the mobile phone includes a plurality of viewfinder frames, such as viewfinder frame 206 and viewfinder frame 207 .
  • the arrangement position relationship of the viewfinder frame 206 and the viewfinder frame 207 is related to the posture of the mobile phone. For example, in a scene where the gyroscope sensor of the mobile phone recognizes that the mobile phone is in a vertical screen state, the viewfinder frame 206 and the viewfinder frame 207 are arranged up and down. In a scenario where the gyro sensor of the mobile phone recognizes that the mobile phone is in a landscape orientation, the viewfinder frame 206 and the viewfinder frame 207 are arranged left and right.
  • the viewfinder frame 206 and the viewfinder frame 207 respectively correspond to cameras.
  • the viewfinder frame 206 corresponds to the camera 1 (eg, the rear camera a), so the viewfinder frame 206 can be used to display the video stream uploaded by the camera 1 .
  • the viewfinder frame 207 corresponds to the camera 2 (for example, a front camera), so that the viewfinder frame 207 can be used to display the video stream uploaded by the camera 2 .
  • the camera corresponding to each viewfinder frame (e.g., viewfinder frame 206 and viewfinder frame 207) can be adjusted according to the user's operation. After the camera corresponding to a viewfinder frame changes, the lens mode used by the mobile phone also changes accordingly.
  • the mobile phone can receive the user's operation 2, and the operation 2 is used to trigger the mobile phone to directly start video shooting without selecting any special effects.
  • This operation 2 may also be referred to as the third operation.
  • the interface 205 includes a control 208 for instructing to start shooting, that is, a first control.
  • the mobile phone receives the user's third operation on the control 208.
  • a viewfinder interface that is recording a video can be displayed, such as the fifth interface, for example, the interface 209 shown in (d) in FIG. 2 .
  • the interface 209 is a recording and framing interface in the first lens mode, for example, it is a recording and framing interface corresponding to the front and rear modes.
  • Interface 209 also includes frame 206 and frame 207 .
  • the interface 209 of the mobile phone can display the video streams collected in real time by the front camera and the rear camera a.
  • the mobile phone can also render and merge the video streams captured by the front camera and the rear camera a, and then encode, generate and save video data.
  • the number of video frames of the video data will gradually increase.
  • the mobile phone may receive the user's operation 3 in the interface 209, and the above operation 3 may be an operation indicating to switch the camera mode.
  • the mobile phone can enable different cameras or combinations of different cameras to collect video streams, so that users can create videos with various scenes and rich content.
  • the interface 209 may include a control for instructing to switch the lens mode, which is also called a second control, such as the control 301 shown in (a) of FIG. 3 .
  • the icon of the control 301 is used to indicate the currently enabled lens mode.
  • the window 302 lists multiple optional lens modes, such as front-to-back mode, back-to-back mode, picture-in-picture mode, single-front mode, single-rear mode, and the like. In window 302, the front-to-back mode is selected.
  • the mobile phone may receive a user's selection operation of the back-to-back mode, the picture-in-picture mode, the single-front mode or the single-rear mode, and switch the used lens mode in response to the selection operation.
  • the above-mentioned operation for realizing switching the lens mode may be referred to as a fourth operation.
  • the mobile phone may switch the lens mode to the picture-in-picture mode. That is, as shown in (c) of FIG. 3, the mobile phone can switch to displaying interface 303.
  • the interface 303 is an example of the sixth interface. It can be understood that the sixth interface refers to the corresponding video recording interface after switching the camera mode (also called the second camera mode).
  • the interface 303 also includes a viewing frame 206 and a viewing frame 207 . Wherein, the viewfinder frame 206 will continue to be used to display the video stream uploaded by the rear camera a, and the viewfinder frame 207 will also continue to be used to display the video stream uploaded by the front camera.
  • the viewfinder frame 207 shrinks, while viewfinder frame 206 enlarges. In addition, the viewfinder frame 207 is superimposed on the viewfinder frame 206 .
  • the camera parameters of the front camera and rear camera a are also adjusted. After the camera parameters are adjusted, the video streams captured by the front camera and rear camera a may not be uploaded in time. This will cause a pause segment to appear in the captured video data 1. That is, as shown in (c) of FIG. 3, the viewfinder frame 206 and the viewfinder frame 207 of the interface 303 will briefly show a black screen.
  • FIG. 4 conceptually shows the impact on the video data 1 after the lens mode is switched (from the front-to-back mode to the picture-in-picture mode) during the video shooting process of the mobile phone.
  • on the relative time axis of video data 1, between time 00:00 and 00:05, both the front camera and rear camera a can upload video streams normally; at the same time, the mobile phone can also normally encode the video streams uploaded by the front camera and rear camera a to generate continuous multi-frame video frames, also called video segment 1.
  • the relative time axis is the time axis created based on the video data 1 .
  • the time 00:00 of the relative time axis corresponds to the acquisition time of the first video frame (also referred to as the first frame) of the video data 1 .
  • after time 00:05, the mobile phone detects an operation indicating to switch to the picture-in-picture mode, and this time 00:05 is an example of the first time point. This operation can temporarily affect the video stream return of the front camera and rear camera a, so the problem of missing video frames occurs between time 00:05 and time 00:06. After time 00:06, the front camera and the rear camera a return to normal, and continuous video frames can be collected and encoded, also called video segment 2.
  • the mobile phone may insert multiple substitute frames 1 between time 00:05 and time 00:06, so as to obtain coherent video data 1 .
  • the substitute frame 1 inserted after time 00:05 (that is, the first time point) may also be referred to as a video frame corresponding to the first time point.
  • the mobile phone may freeze the video frame 401 (the last frame in the video segment 1) to obtain the substitute frame 1 .
  • the screen content displayed in the substitute frame 1 is the same as that of the video frame 401 .
  • the mobile phone cancels the frame freeze of the video frame 401 .
  • the mobile phone may insert a pre-configured image frame after the video frame 401, for example, a black image frame or a white image frame, and stop inserting substitute frames 1 once the camera resumes returning the video stream. Understandably, the inserted image frames may also be collectively referred to as substitute frames 1 .
  • the mobile phone may mark the video frame 401 . In this way, after the shooting of the video data 1 is completed, the mobile phone automatically inserts multiple substitute frames 1 after the video frame 401 .
  • the substitute frame 1 may be the same image frame as the video frame 401, or may be a pre-configured white image frame or black image frame.
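The substitute-frame insertion described above (freezing the last frame of video segment 1, or inserting a pre-configured black or white frame, until the camera resumes) can be sketched as follows. This is a minimal illustration under our own naming, not the patent's code.

```python
# Minimal sketch: fill the gap left by a lens-mode switch with substitute
# frames, either by freezing the last captured frame or by using a
# pre-configured fill frame (e.g., an all-black frame).

def fill_gap(segment1, segment2, n_missing, fill_frame=None):
    """Return a coherent frame list: segment1 + substitutes + segment2."""
    if fill_frame is None:
        fill_frame = segment1[-1]           # freeze the last captured frame
    substitutes = [fill_frame] * n_missing  # the "substitute frames 1"
    return segment1 + substitutes + segment2

seg1 = ["v0", "v1", "v2"]   # frames captured up to the switch (time 00:05)
seg2 = ["v6", "v7"]         # frames after the cameras recover (time 00:06)
video = fill_gap(seg1, seg2, 3)
# video == ["v0", "v1", "v2", "v2", "v2", "v2", "v6", "v7"]
```

Passing `fill_frame="black"` instead would model the pre-configured black-frame variant.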
  • the mobile phone may receive the user's selection operation of the back-to-back mode. In this way, the mobile phone can switch the used lens mode to the back-to-back mode in response to the selection operation. That is, as shown in (b) of FIG. 6, the mobile phone can switch to displaying interface 601.
  • the interface 601 also includes a viewfinder frame 206 and a viewfinder frame 207 . Wherein, the viewfinder frame 206 is used to display the video stream uploaded by the rear camera a, and the viewfinder frame 207 is used to display the video stream uploaded by the rear camera b.
  • the lens mode can also be switched to back-to-back mode.
  • the mobile phone activates the rear camera b, and turns off the front camera, so that the rear camera b corresponds to the viewfinder frame 207 .
  • the viewfinder frame 206 of the mobile phone continues to display the video stream uploaded by the rear camera a, while the viewfinder frame 207 displays the video stream uploaded by the rear camera b.
  • the video stream returned by the rear camera a is not affected, as shown in (b) in FIG. 6 , the image display of the viewfinder frame 206 is not affected during the lens switching.
  • the picture display of the viewfinder frame 207 will be affected, as shown in (b) in FIG. 6 , a black screen appears briefly in the viewfinder frame 207 .
  • the interface 601 is also an example of the sixth interface.
  • FIG. 7A conceptually shows an example diagram of generating video data 1 in the case of lens mode switching (switching from front-to-back mode to back-to-back mode) during the process of shooting video by the mobile phone.
  • the mobile phone may mark the video frames with dots after instructing to activate the rear camera b, and stop marking after it is determined that the rear camera b has returned the video stream normally. In this way, as shown in (b) of FIG. 7A, the mobile phone can remove the video frames corresponding to the marked positions, so that video segment 1 and video segment 2 can be obtained. Then, in order to ensure the continuity of the video data, as shown in (c) of FIG. 7A, the mobile phone can also add substitute frames 1, that is, the video frames corresponding to the first time point, between video segment 1 and video segment 2; thus, video data 1 is obtained.
  • the last captured video frame can be frozen, for example, the video frame 701 in FIG. 7B is frozen to obtain the substitute frame 1, that is, the video frame corresponding to the first time point.
  • the frame freeze for the video frame 701 is canceled, and the video data 1 is generated by normal encoding according to the video streams uploaded by the rear camera a and the rear camera b.
  • the addition of the substitute frame 1 mentioned in the above embodiment may be after the video shooting is completed, or during the video shooting process.
  • switching the lens mode will not interrupt the normal shooting of the video.
  • the mobile phone can display the interface 601 and shoot a video. In this way, the number of video frames corresponding to the video data 1 will continue to increase.
  • the above examples describe the mobile phone switching the lens mode from front-to-back mode to picture-in-picture mode, and from front-to-back mode to back-to-back mode.
  • similar problems also exist in the switching between other lens modes, which can also be solved by inserting substitute frames as described in the previous example, and will not be repeated here.
  • the mobile phone may also receive an operation 4 in the interface 601 instructing the user to stop shooting. Then, the mobile phone may stop video shooting in response to the operation 4 .
  • the interface 601 further includes a control for instructing to pause shooting, which may also be called a third control, such as control 801 .
  • the mobile phone may receive a fifth operation of the user on the control 801, for example, a click operation. And in response to the click operation on the control 801, continue shooting is stopped, and the captured video data 1 is saved.
  • the interface 205 is displayed again. After the video data 1 (that is, the first video data) is captured and saved, the displayed interface 205 may also be referred to as the first interface. In this scenario, the first interface is actually a viewfinder preview interface provided by the camera application.
  • the mobile phone may exit the operation of the camera APP according to the user's instruction, such as an upward sliding operation on the interface 601, stop continuing to shoot, and save the captured video data 1 .
  • the mobile phone can also display the main interface again.
  • the mobile phone can display the captured video data 1 according to the user's operation 4 , which is convenient for the user to view or edit the video data 1 .
  • the interface 205 includes a thumbnail of the video data 1, such as an icon 802, which may also be called a first thumbnail.
  • the mobile phone may receive the user's operation on the icon 802, and in response to the operation, display a video editing interface, such as the interface 803 shown in (c) in FIG. 8 .
  • the interface 803 is used to display the video data 1 and may also be called the second interface.
  • the mobile phone can exit the operation of the camera APP according to the user's instruction, such as swipe up, and display the main interface of the mobile phone, that is, the desktop 202 again.
  • the desktop 202 also includes an icon of a gallery APP.
  • the mobile phone can receive the sixth operation for the icon 901 of the gallery APP, such as a click operation, and in response to this operation, display the image provided by the gallery APP.
  • the application interface is an interface 902 shown in (b) of FIG. 9 .
  • the mobile phone can directly receive the click operation on the icon 901 of the Gallery APP, and in response to this operation, display the application interface provided by the Gallery APP, as shown in (b) of Figure 9 ) interface 902 shown.
  • the interface 902 will display thumbnails of various picture resources and video resources. These picture or video resources may be captured and stored by the mobile phone, for example, the thumbnail 903 of video data 1 (also referred to as the first thumbnail); they may also be thumbnails of images, videos, etc. downloaded from the Internet, or thumbnails of images, videos, etc. synchronized from the cloud.
  • the interface 902 for displaying video resource thumbnails may also be referred to as a first interface. In this scenario, the first interface is actually an application interface provided by the gallery application.
  • the mobile phone may receive a user's selection operation on any video thumbnail in the interface 902, and in response to the user's selection operation on any video thumbnail, the mobile phone may display a corresponding video editing interface.
  • the mobile phone can display the interface 803 shown in (c) of FIG. 8, which may also be called the second interface.
  • the mobile phone can play the corresponding video according to the user's operation on the video editing interface.
  • the interface 803 includes controls for instructing to play video data 1 , such as control 1001 .
  • the mobile phone receives the user's click operation on the control 1001 , the mobile phone plays the video data 1 in the interface 803 .
  • since substitute frames 1 are added to the video data 1, the picture may appear still during the playback process.
  • when the mobile phone adds substitute frames to the video data 1, it can also superimpose a transition effect on the multiple substitute frames 1, which may be called transition effect 1 or the first transition effect.
  • the substitute frame 1 actually superimposed with the first transition effect may also be referred to as the first video frame.
  • transition effect 1 can be any type of transition effect pre-specified in the mobile phone, such as left-shift transition, right-shift transition, rotation transition, dissolve transition, blur transition, melting transition, black-field transition, white-field transition, zoom-in transition, zoom-out transition, up transition and down transition, etc.
  • it should be noted that the left-shift and right-shift transitions are only applicable to the scene of vertical-screen video shooting, while the up and down transitions are only applicable to the scene of horizontal-screen video shooting.
  • the above transition effect 1 may be a transition randomly determined from the above-mentioned multiple types of transition effects.
  • the transition effect 1 can better connect the video clips before and after the substitute frames 1, that is, it makes the transition between video segment 1 and video segment 2 smoother, which can improve the user's viewing experience and also increase the quality of the video shooting.
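As an illustration of one of the transition types listed above, a dissolve transition can be approximated by cross-fading pixel intensities from the frame before the transition to the frame after it. This sketch uses our own simplified frame representation (a flat list of intensities), not the patent's implementation.

```python
# Illustrative dissolve (cross-fade) transition: over the substitute frames,
# blend from the last frame of segment 1 to the first frame of segment 2.

def dissolve(frame_a, frame_b, n_steps):
    """Return n_steps blended frames; frames are lists of pixel intensities."""
    frames = []
    for k in range(1, n_steps + 1):
        alpha = k / (n_steps + 1)  # blend factor grows across the transition
        frames.append([round((1 - alpha) * a + alpha * b, 3)
                       for a, b in zip(frame_a, frame_b)])
    return frames

# cross-fade from an all-black frame to an all-bright frame over 3 frames
steps = dissolve([0, 0, 0], [100, 100, 100], 3)
# alpha takes values 0.25, 0.5, 0.75
```

Other listed transitions (left-shift, zoom-in, black-field, etc.) would similarly be per-frame geometric or intensity transforms parameterized by the step index.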
  • the mobile phone can also mark the substitute frame 1 on which the transition effect is actually superimposed, so that the mobile phone can identify the position where the transition is added.
  • the mobile phone can automatically perform secondary creation on the video data 1 according to the user's operations on the video editing interface, such as configuring video music, adding transition effects, and the like.
  • the interface 803 also includes a control for instructing to edit video data 1 , such as a one-click blockbuster control 1101 .
  • when the mobile phone receives the user's operation on the one-click blockbuster control 1101 , such as the second operation, the mobile phone can automatically edit the video data 1 .
  • the mobile phone can automatically edit the video data 1, which may include the following steps:
  • the mobile phone determines an effect template matching the video data 1.
  • effect templates of multiple styles can be pre-configured in the mobile phone, and each effect template corresponds to a piece of video music.
  • the effect template also corresponds to filters, special effects, transitions, and stickers.
  • the mobile phone can use an artificial intelligence model to analyze the picture content of the video data 1, and determine an effect template matching the video data 1, that is, the first effect template, also called the effect template 1.
  • the artificial intelligence model of the mobile phone searches for similar videos based on the picture content of the video data 1, and acquires the video music of the similar videos. In this way, the corresponding effect template 1 is determined according to the acquired video music.
  • the artificial intelligence model of the mobile phone searches for similar videos based on the screen content of the video data 1 . Multiple effect templates belonging to the same style are determined according to style names of similar videos. Then, an effect template 1 is randomly determined from the determined plurality of effect templates. In some other embodiments, the mobile phone may also randomly determine an effect template as the effect template 1 .
  • the mobile phone processes the video data 1 according to the effect template 1.
  • the mobile phone can adjust the volume of the original audio track of the video data 1 to zero, and then add the video music of the effect template 1 (that is, the first music) to the video data 1, so that the video music fits the video picture of the video data 1 .
  • the volume of the original audio track can also be adjusted to other decibel values according to the user's operation.
  • the mobile phone may add the filter, special effect, transition, and sticker corresponding to the effect template 1 to the video data 1 .
  • in the process of adding transition effects, the mobile phone can not only add new transition effects in video data 1, but also replace the original transition effects in video data 1 (for example, the transition effect 1 superimposed on the substitute frames 1).
  • transition effects with higher adaptability are more suitable for the style of the effect template and the corresponding video music.
  • a transition effect with a lower degree of adaptation is relatively less suitable for the style of the effect template and the corresponding video music.
  • the mobile phone may first identify whether there is a transition effect 1 in the video data 1 . For example, it can be detected by detecting whether there is a mark in the video data 1 . If the mark is identified, the marked video frame (that is, the substitute frame 1 superimposed with the transition effect 1 ) is deleted. In this way, the video data 1 is divided into a video segment 1 and a video segment 2 . Then, the mobile phone generates multiple substitute frames 2 and multiple substitute frames 3 . Wherein, the substitute frame 2 may be the same as the last frame of the video clip 1, and the substitute frame 3 may be the same as the first frame of the video clip 2. The last frame of video clip 1 and the first frame of video clip 2 may be collectively referred to as the second video frame.
  • the substitute frame 2 may be an image frame obtained after the mobile phone freezes the last frame of the video segment 1 .
  • the above-mentioned substitute frame 3 may be an image frame obtained after the mobile phone freezes the first frame of the video segment 2 .
  • The total number of substitute frames 2 and substitute frames 3 is the same as the number of deleted video frames, ensuring that the length of the final video data 1 is not affected. Then, the mobile phone determines transition effect 2 according to the degree of fit between effect template 1 and each type of transition effect, and superimposes transition effect 2 between substitute frames 2 and substitute frames 3, so as to connect video segment 1 and video segment 2.
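The delete-and-freeze procedure above can be sketched as follows. This is only an illustrative sketch, not the handset's actual implementation: frames are modeled as plain labels, and the function name and list representation are assumptions; a real pipeline would operate on decoded image buffers.

```python
# Sketch: remove the marked transition frames, then pad with freeze
# frames of the neighboring clip boundaries so the total frame count
# (and hence the video length) is unchanged.

def replace_transition_frames(frames, marked):
    """frames: list of frame labels; marked: set of indices carrying
    the old transition effect 1 (the substitute frames 1)."""
    kept = [i for i in range(len(frames)) if i not in marked]
    first_marked = min(marked)
    # clip 1 = frames before the marked run, clip 2 = frames after it
    clip1 = [frames[i] for i in kept if i < first_marked]
    clip2 = [frames[i] for i in kept if i > first_marked]
    n = len(marked)                    # deleted count = substitute count
    subs2 = [clip1[-1]] * (n // 2)     # freezes of clip 1's last frame
    subs3 = [clip2[0]] * (n - n // 2)  # freezes of clip 2's first frame
    return clip1 + subs2 + subs3 + clip2

frames = ["a0", "a1", "a2", "t0", "t1", "b0", "b1"]
out = replace_transition_frames(frames, {3, 4})
assert len(out) == len(frames)
assert out == ["a0", "a1", "a2", "a2", "b0", "b0", "b1"]
```

Transition effect 2 would then be superimposed over the `subs2`/`subs3` span, which stitches video segment 1 to video segment 2 without altering the overall duration.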
  • The degree of fit between an effect template and a transition effect can be quantified as a matching weight.
  • The mobile phone can combine the matching weights between effect template 1 and each transition effect and randomly select, from multiple types of transition effects, the transition effect 2 that matches effect template 1. Understandably, a transition effect with a higher matching weight is relatively more likely to be selected as the matching transition effect 2, while a transition effect with a lower matching weight is relatively less likely to be selected.
  • The matching weights of effect templates and transition effects can be preconfigured, exemplarily as shown in Table 1.
  • Table 1 exemplifies the correspondence between different effect templates and the video music, style, and matching weights of different transition effects.
  • The percentage value corresponding to each transition effect in the table is the matching weight between that transition effect and the effect template.
  • The style corresponding to effect template 1 is also called the first style.
  • Take the hello summer effect template recorded in Table 1 as an example. The matching weight between this effect template and the dissolve transition is 50%, that is, the dissolve transition has a 50% probability of being selected as the matching transition effect.
  • The matching weight between this effect template and the blur transition is 0%, that is, the blur transition will not be selected as the matching transition effect.
  • The matching weight between this effect template and the melt transition is 0%, that is, the melt transition will not be selected as the matching transition effect.
  • The matching weight between this effect template and the up-shift transition is 50%, that is, in the scenario where the mobile phone needs to process landscape video data 1, the up-shift transition has a 50% probability of being selected as the matching transition effect.
  • The matching weight between this effect template and the down-shift transition is 50%; likewise, in the landscape scenario, the down-shift transition has a 50% probability of being selected as the matching transition effect.
  • The matching weight between this effect template and the left-shift transition is 50%, that is, in the scenario where the mobile phone needs to process portrait video data 1, the left-shift transition has a 50% probability of being selected as the matching transition effect.
  • The matching weight between this effect template and the right-shift transition is 50%; likewise, in the portrait scenario, the right-shift transition has a 50% probability of being selected as the matching transition effect.
  • The matching weight between this effect template and the black-field transition is 90%, that is, the black-field transition has a 90% probability of being selected as the matching transition effect.
  • The matching weight between this effect template and the white-field transition is 90%, that is, the white-field transition has a 90% probability of being selected as the matching transition effect.
  • The matching weight between this effect template and the zoom-in transition is 90%, that is, the zoom-in transition has a 90% probability of being selected as the matching transition effect.
  • The matching weight between this effect template and the zoom-out transition is 90%, that is, the zoom-out transition has a 90% probability of being selected as the matching transition effect.
  • The matching weight between this effect template and the rotation transition is 30%, that is, the rotation transition has a 30% probability of being selected as the matching transition effect.
  • The mobile phone can use the matching weights corresponding to each transition effect to randomly select the transition effect 2 that replaces transition effect 1.
  • This selection method is not only highly flexible, but also ensures a high probability of strong relevance between the selected transition effect 2 and effect template 1.
  • In the scenario of adding a new transition effect to video data 1, the mobile phone can also use the matching weights corresponding to each transition effect to randomly determine a transition effect 3 (also called the third transition effect) that matches effect template 1, and add transition effect 3 to video data 1, for example, on the video frames corresponding to the second time point in video data 1.
  • The video frames corresponding to the second time point may be: the video frames located before and after the second time point in video data 1. In this way, the style presented by the processed video data 1 can be closer to the expected effect of the effect template.
  • The effect template 1 may also have association identifiers with one or more transition effects.
  • A transition effect with such an association identifier may be preferentially selected as transition effect 2.
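The selection logic described above could be sketched as below. The weight table is a small subset of the hello summer row of Table 1; treating the per-transition percentages as relative weights for a single draw, and the helper name `pick_transition`, are simplifying assumptions for illustration, not the embodiment's actual algorithm.

```python
import random

# Illustrative matching weights (subset of Table 1's hello summer row);
# each value is that transition's chance of being picked.
WEIGHTS = {
    "dissolve": 0.5, "blur": 0.0, "melt": 0.0,
    "black_field": 0.9, "white_field": 0.9,
    "zoom_in": 0.9, "zoom_out": 0.9, "rotate": 0.3,
}

def pick_transition(weights, associated=None, rng=random):
    """Pick one transition: prefer transitions that carry an association
    identifier with the template; otherwise draw in proportion to the
    matching weights."""
    if associated:
        return rng.choice(sorted(associated))
    names = [n for n, w in weights.items() if w > 0]
    return rng.choices(names, weights=[weights[n] for n in names], k=1)[0]

# Zero-weight transitions (blur, melt) can never be chosen.
picks = {pick_transition(WEIGHTS, rng=random.Random(i)) for i in range(200)}
assert "blur" not in picks and "melt" not in picks
# An association identifier takes priority over the weights.
assert pick_transition(WEIGHTS, associated={"black_field"}) == "black_field"
```

High-weight transitions (black field, white field, zoom in, zoom out) dominate the draw, matching the behavior the table describes, while the random draw keeps the result varied across repeated one-click operations.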
  • After processing video data 1 with the matching effect template 1, the mobile phone can also change the effect template used to process video data 1, or separately change the video music used, according to the user's operation.
  • The mobile phone may display a preview interface, such as the interface 1401, which is called the third interface.
  • The third interface is used to display video data 2, also called the second video data, where video data 2 is the video obtained after processing video data 1 with the effect template.
  • The second video data includes the video frames of video data 1, the substitute frames (e.g., substitute frames 2 and substitute frames 3), the video music corresponding to effect template 1 (e.g., referred to as the first music), and transition effect 2 (also called the second transition effect).
  • The mobile phone can receive the user's operation 6 on the interface 1401, for example, an operation of clicking the style control 1402 on the interface 1401.
  • In response to operation 6, the interface 1403 is displayed.
  • The interface 1403 is a guide interface that guides the user to select an effect template.
  • The interface 1403 includes multiple template windows indicating different effect templates, for example, window 1404, window 1405, window 1406, and window 1407.
  • The window 1404 is used to indicate the effect template named hello summer;
  • the window 1405 is used to indicate the effect template named sunny;
  • the window 1406 is used to indicate the effect template named HAPPY;
  • the window 1407 is used to indicate the effect template named xiaomeimei.
  • The template window of the hello summer effect template is in the selected state, indicating that effect template 1 is the hello summer effect template.
  • The mobile phone can determine the effect template 2 selected by the user according to the user's operations on the other template windows. For example, after the mobile phone receives the user's click operation on the window 1405, the preview window 1408 in the interface 1403 may display a sample of the sunny effect template. Then, if the mobile phone receives the user's operation on the control 1409 in the interface 1403, it can determine that the sunny effect template is the selected effect template 2. Afterwards, the mobile phone can use effect template 2 to process the original video data 1 to obtain video data 1 conforming to the style of effect template 2, and, as shown in (c) of FIG. 14, display the interface 1401 again, now showing video data 1 processed based on effect template 2.
  • If the mobile phone receives the user's operation on the control 1409 without having received a user operation indicating selection of another effect template, the mobile phone can determine that the user instructs it to reprocess the original video data 1 with effect template 1.
  • In this case, the matching weights between each transition effect and effect template 1 can still be used to randomly re-determine transition effect 2 and transition effect 3, which are then used to reprocess the original video data 1.
  • The re-randomized transition effects 2 and 3 may differ from the transition effects determined the first time effect template 1 was used. In this way, the visual effect of the reprocessed video data 1 will also differ, which improves the diversity of the one-click blockbuster feature.
  • The mobile phone may receive the user's operation 7 on the interface 1401, for example, an operation of clicking the music control 1410 on the interface 1401, and, in response to operation 7, switch to different video music.
  • The replacement video music can be music of the same style corresponding to effect template 2, or a random piece of music; this is not limited.
  • The interface 1401 also includes a control 1411 indicating confirmation.
  • After the mobile phone receives the user's operation on the control 1411, such as a click operation, it saves the processed video data 1; the video data 1 processed based on the effect template is also called video data 2.
  • The mobile phone can also display a video editing interface corresponding to video data 2, such as the interface 1412 shown in (d) of FIG. 14.
  • The interface 1412 can display video data 2.
  • The mobile phone can play video data 2 according to the user's operation on the interface 1412.
  • The interface 1401 may also include a control indicating to undo the effect template, for example, the control 1413 shown in (a) of FIG. 14.
  • The mobile phone may receive the user's click operation on the control 1413, delete the video data 1 processed based on the effect template, and display the interface 803 again.
  • The interface 803 still includes the one-click blockbuster control 1101. If the mobile phone receives the user's operation on the control 1101 again, it can determine a matching effect template again and use the newly determined effect template to process video data 1 again.
  • For the processing process, reference may be made to the foregoing embodiments, which will not be repeated here.
  • The mobile phone can make the effect templates determined by two adjacent one-click blockbuster operations different, increasing the diversity of the secondary creation of video data.
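The "two adjacent selections differ" behavior above can be sketched as rejecting the previously used template before drawing; the template names and the helper `pick_template` are hypothetical, and the real selection may additionally weight templates by style matching as described earlier.

```python
import random

def pick_template(templates, previous=None, rng=random):
    """Randomly pick an effect template, never repeating the template
    used by the immediately preceding one-click operation."""
    candidates = [t for t in templates if t != previous]
    return rng.choice(candidates)

# Hypothetical template names mirroring the windows in interface 1403.
templates = ["hello_summer", "sunny", "HAPPY", "xiaomeimei"]
last = pick_template(templates)
nxt = pick_template(templates, previous=last)
assert nxt != last and nxt in templates
```

Filtering out only the immediately previous template keeps the pool large while guaranteeing that two consecutive one-click results never look identical.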
  • An embodiment of the present application also provides an electronic device, which may include a memory and one or more processors.
  • The memory is coupled to the processors.
  • The memory is used to store computer program code, and the computer program code includes computer instructions.
  • When the processors execute the computer instructions, the electronic device can be made to perform the steps performed by the mobile phone in the foregoing embodiments.
  • The electronic device includes, but is not limited to, the aforementioned memory and one or more processors.
  • For the structure of the electronic device, reference may be made to the structure of the mobile phone shown in FIG. 1.
  • An embodiment of the present application also provides a chip system, which includes at least one processor 2201 and at least one interface circuit 2202.
  • The processor 2201 may be a processor in the aforementioned electronic device.
  • The processor 2201 and the interface circuit 2202 may be interconnected through wires.
  • The processor 2201 can receive and execute computer instructions from the memory of the above electronic device through the interface circuit 2202.
  • When the computer instructions are executed, the electronic device can be made to perform the steps performed by the mobile phone in the above embodiments.
  • Of course, the chip system may also include other discrete devices, which is not specifically limited in this embodiment of the present application.
  • Each functional unit in the embodiments of the present application may be integrated into one processing unit, or each unit may physically exist separately, or two or more units may be integrated into one unit.
  • The above integrated units can be implemented in the form of hardware or in the form of software functional units.
  • If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium.
  • Based on this understanding, the technical solution of the embodiments of the present application, in essence, or the part that contributes to the prior art, or all or part of the technical solution, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions to enable a computer device (which may be a personal computer, a server, a network device, etc.) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application.
  • The aforementioned storage medium includes: a flash memory, a removable hard disk, a read-only memory, a random access memory, a magnetic disk, an optical disk, or other media that can store program code.


Abstract

This application provides a transition processing method for video data and an electronic device, relating to the field of terminal technology, and solves the problem of low human-computer interaction efficiency in video editing. The specific solution is: displaying a first interface, where the first interface includes a first thumbnail of first video data, the first video data includes a first transition effect, and the first transition effect is superimposed on multiple consecutive first video frames in the first video data; receiving a user's first operation on the first thumbnail; in response to the first operation, displaying a second interface, where the second interface includes a one-click blockbuster control; after receiving the user's second operation on the one-click blockbuster control, displaying a third interface, where the third interface is used to display second video data; the second video data includes the video frames of the first video data, multiple substitute frames, first music, and a second transition effect corresponding to the first music; the second transition effect is superimposed on the multiple substitute frames, and the substitute frames are used to replace the first video frames in the first video data.

Description

A transition processing method for video data and an electronic device
This application claims priority to Chinese patent application No. 202110676709.3, entitled "A user video creation method based on storyline mode and electronic device", filed with the China National Intellectual Property Administration on June 16, 2021, the entire contents of which are incorporated herein by reference.
This application also claims priority to Chinese patent application No. 202111439351.9, entitled "A transition processing method for video data and electronic device", filed with the China National Intellectual Property Administration on November 29, 2021, the entire contents of which are incorporated herein by reference.
This application also claims priority to Chinese patent application No. 202210056943.0, entitled "A transition processing method for video data and electronic device", filed with the China National Intellectual Property Administration on January 18, 2022, the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the field of terminal technology, and in particular to a transition processing method for video data and an electronic device.
Background
With the development of electronic technology, electronic devices such as mobile phones and tablet computers are generally equipped with multiple cameras, such as a front camera, a rear camera, a wide-angle camera, and so on. Multiple cameras make it convenient for users to shoot video works with the electronic device.
After the user finishes shooting a video with the electronic device, the video can also be edited by adding special effects, configuring music, and the like, to obtain a more watchable video work. At present, when the user manually edits a video, the efficiency of human-computer interaction is still relatively low.
Summary
Embodiments of this application provide a transition processing method for video data and an electronic device, used to improve the human-computer interaction efficiency of video editing.
To achieve the above purpose, this application adopts the following technical solutions:
In a first aspect, an embodiment of this application provides a transition processing method for video data, applied to an electronic device. The method includes: the electronic device displays a first interface, where the first interface includes a first thumbnail of first video data, the first video data includes a first transition effect, and the first transition effect is superimposed on multiple consecutive first video frames in the first video data; the electronic device receives a user's first operation on the first thumbnail; in response to the first operation, the electronic device displays a second interface, where the second interface is a video editing interface of the first video data and includes a one-click blockbuster control; after receiving the user's second operation on the one-click blockbuster control, the electronic device displays a third interface, where the third interface is used to display second video data; the second video data includes the video frames of the first video data, multiple substitute frames, first music, and a second transition effect corresponding to the first music; the second transition effect is superimposed on the multiple substitute frames, and the substitute frames are used to replace the first video frames in the first video data.
In the above embodiment, for saved video data such as the first video data, the electronic device can, in response to the user's operation on the one-click blockbuster control, automatically edit and process the first video data to obtain the second video data. The second video data is configured with the first music. When a transition effect, such as the first transition effect, already exists in the first video data, the electronic device can replace the first transition effect with the second transition effect that matches the first music. In this way, the transition effects appearing in the second video data match the first music, which improves the fit between the music and the content of the second video data, improves user satisfaction with the second video data, and reduces the likelihood of rework. In addition, the whole operation of triggering the creation of the second video data is simple, effectively improving the human-computer interaction efficiency of video creation.
In some possible embodiments, before the electronic device displays the first interface, the method further includes: the electronic device displays a fourth interface, where the fourth interface is a viewfinder preview interface provided by a camera application and includes a first control for starting video shooting; the electronic device receives the user's third operation on the first control; in response to the third operation, the electronic device displays a fifth interface and starts recording the first video data, where the fifth interface is a video recording interface in a first lens mode and includes a second control for switching the lens mode; when recording reaches a first time point of the first video data, the electronic device, in response to the user's fourth operation on the second control, displays a sixth interface and determines that the video frames corresponding to the first time point are the first video frames, where the sixth interface is a video recording interface in a second lens mode and includes a third control for stopping shooting; the electronic device receives the user's fifth operation on the third control. That the electronic device displays the first interface includes: in response to the fifth operation, the electronic device displays the first interface, where the first interface is also the viewfinder preview interface provided by the camera application.
In the above embodiment, the first video data may be a video shot by the electronic device in a conventional mode. During shooting, when the electronic device receives an operation instructing to switch the lens mode at the first time point, it can not only switch the lens mode directly, but also determine the video frames corresponding to the first time point in the first video data as the first video frames. In this way, when the first video data is edited, the video frames affected by the lens switch can be processed, which improves the watchability of the second video data and the human-computer interaction efficiency of video editing.
In some possible embodiments, after the electronic device receives the user's fifth operation on the third control, the method further includes: the electronic device superimposes the first transition effect on the first video frames.
In the above embodiment, by adding the first transition effect at the first time point of the first video data, the first transition effect is used to connect the video segments captured before and after the lens mode switch, which alleviates the discontinuity in the video content caused by the switch and improves the quality of the finished video.
In some possible embodiments, before the electronic device displays the first interface, the method further includes: the electronic device displays a home screen, where the home screen includes an icon of a gallery application; the electronic device receives the user's sixth operation on the icon of the gallery application. That the electronic device displays the first interface includes: in response to the sixth operation, the electronic device displays the first interface, where the first interface is an application interface provided by the gallery application.
In the above embodiment, the first video data may also be video data already stored in the gallery, that is, a video shot by another device, or a video that has already been through one round of creation. In this way, the user can process diverse video material through the electronic device with simple operations, improving the human-computer interaction efficiency of video creation.
In some possible embodiments, before the electronic device displays the third interface, the method further includes: in response to the second operation, the electronic device determines a first effect template from multiple preconfigured effect templates, where the first effect template includes the first music; the electronic device deletes the first video frames from the first video data; the electronic device freezes a second video frame in the first video data to obtain the substitute frames used to replace the first video frames, where the second video frame is the video frame immediately before, or immediately after, the first video frames; the electronic device superimposes the second transition effect on the substitute frames.
In some possible embodiments, the first effect template corresponds to a first style. Determining the first effect template from the multiple preconfigured effect templates includes: the electronic device uses a preset artificial intelligence model to determine that the first video data matches the first style, and determines the first effect template from the effect templates belonging to the first style; or, the electronic device randomly determines the first effect template from the multiple preconfigured effect templates.
In some possible embodiments, before the electronic device displays the third interface, the method further includes: the electronic device determines the second transition effect corresponding to the first music.
In some possible embodiments, determining the second transition effect corresponding to the first music includes: the electronic device determines, from multiple preset transition effects, the second transition effect that has an association identifier with the first music.
In the above embodiment, there is an absolute association between the second transition effect and the first music, which ensures that the second transition effect in the second video data fits the first music and improves the watchability of the second video data.
In some possible embodiments, determining the second transition effect corresponding to the first music includes: the electronic device determines the second transition effect from multiple preset transition effects based on matching weights, where each preset transition effect corresponds to a matching weight, and the matching weight is a quantified ratio parameter of the degree of fit between the first music and the preset transition effect.
In the above embodiment, the second transition effect has a relative association with the first music, and its type is also random. This ensures that the transition effect fits the first music while increasing the diversity of transition effects, improving the watchability of the second video data.
In some possible embodiments, the second video data further includes a third transition effect, where the third transition effect is added to the video frames corresponding to a second time point in the first video data; the third transition effect is one of multiple preset transition effects, and the multiple preset transition effects include the second transition effect.
In a second aspect, an embodiment of this application provides an electronic device, including one or more processors and a memory. The memory is coupled to the processors and is used to store computer program code, where the computer program code includes computer instructions. When the one or more processors execute the computer instructions, the one or more processors are configured to: display a first interface, where the first interface includes a first thumbnail of first video data, the first video data includes a first transition effect, and the first transition effect is superimposed on multiple consecutive first video frames in the first video data; receive a user's first operation on the first thumbnail; in response to the first operation, display a second interface, where the second interface is a video editing interface of the first video data and includes a one-click blockbuster control; after receiving the user's second operation on the one-click blockbuster control, display a third interface, where the third interface is used to display second video data; the second video data includes the video frames of the first video data, multiple substitute frames, first music, and a second transition effect corresponding to the first music; the second transition effect is superimposed on the multiple substitute frames, and the substitute frames are used to replace the first video frames in the first video data.
In some possible embodiments, before displaying the first interface, the one or more processors are further configured to: display a fourth interface, where the fourth interface is a viewfinder preview interface provided by a camera application and includes a first control for starting video shooting; receive the user's third operation on the first control; in response to the third operation, display a fifth interface and start recording the first video data, where the fifth interface is a video recording interface in a first lens mode and includes a second control for switching the lens mode; when recording reaches a first time point of the first video data, in response to the user's fourth operation on the second control, display a sixth interface and determine that the video frames corresponding to the first time point are the first video frames, where the sixth interface is a video recording interface in a second lens mode and includes a third control for stopping shooting; and receive the user's fifth operation on the third control.
The one or more processors are further configured to: in response to the fifth operation, display the first interface, where the first interface is also the viewfinder preview interface provided by the camera application.
In some possible embodiments, after receiving the user's fifth operation on the third control, the one or more processors are further configured to superimpose the first transition effect on the first video frames.
In some possible embodiments, before displaying the first interface, the one or more processors are further configured to: display a home screen, where the home screen includes an icon of a gallery application; receive the user's sixth operation on the icon of the gallery application; and, in response to the sixth operation, display the first interface, where the first interface is an application interface provided by the gallery application.
In some possible embodiments, before displaying the third interface, the one or more processors are further configured to: in response to the second operation, determine a first effect template from multiple preconfigured effect templates, where the first effect template includes the first music; delete the first video frames from the first video data; freeze a second video frame in the first video data to obtain the substitute frames used to replace the first video frames, where the second video frame is the video frame immediately before, or immediately after, the first video frames; and superimpose the second transition effect on the substitute frames.
In some possible embodiments, the one or more processors are further configured to: use a preset artificial intelligence model to determine that the first video data matches the first style, and determine the first effect template from the effect templates belonging to the first style; or randomly determine the first effect template from the multiple preconfigured effect templates.
In some possible embodiments, before displaying the third interface, the one or more processors are further configured to determine the second transition effect corresponding to the first music.
In some possible embodiments, the one or more processors are further configured to determine, from multiple preset transition effects, the second transition effect that has an association identifier with the first music.
In some possible embodiments, the one or more processors are further configured to determine the second transition effect from multiple preset transition effects based on matching weights, where each preset transition effect corresponds to a matching weight, and the matching weight is a quantified ratio parameter of the degree of fit between the first music and the preset transition effect.
In some possible embodiments, the second video data further includes a third transition effect, where the third transition effect is added to the video frames corresponding to a second time point in the first video data; the third transition effect is one of multiple preset transition effects, and the multiple preset transition effects include the second transition effect.
In a third aspect, an embodiment of this application provides a computer storage medium including computer instructions. When the computer instructions run on an electronic device, the electronic device is caused to perform the method described in the first aspect and its possible embodiments.
In a fourth aspect, this application provides a computer program product. When the computer program product runs on the above electronic device, the electronic device is caused to perform the method described in the first aspect and its possible embodiments.
It can be understood that the electronic device, the computer storage medium, and the computer program product provided in the above aspects are all applied to the corresponding method provided above. Therefore, for the beneficial effects they can achieve, reference may be made to the beneficial effects of the corresponding method, which will not be repeated here.
Brief Description of the Drawings
FIG. 1 is a schematic structural diagram of an electronic device according to an embodiment of this application;
FIG. 2 is a first example diagram of display interfaces according to an embodiment of this application;
FIG. 3 is a second example diagram of display interfaces according to an embodiment of this application;
FIG. 4 conceptually shows an example of the effect on video data 1 after the lens mode is switched from front-rear mode to picture-in-picture mode;
FIG. 5 conceptually shows an example of processing video data 1 after the lens mode is switched from front-rear mode to picture-in-picture mode;
FIG. 6 is a third example diagram of display interfaces according to an embodiment of this application;
FIG. 7A conceptually shows a first example of processing video data 1 in a scenario where the lens mode is switched from front-rear mode to rear-rear mode;
FIG. 7B conceptually shows a second example of processing video data 1 in a scenario where the lens mode is switched from front-rear mode to rear-rear mode;
FIG. 8 is a fourth example diagram of display interfaces according to an embodiment of this application;
FIG. 9 is a fifth example diagram of display interfaces according to an embodiment of this application;
FIG. 10 is a sixth example diagram of display interfaces according to an embodiment of this application;
FIG. 11 is a seventh example diagram of display interfaces according to an embodiment of this application;
FIG. 12 is a flowchart of the steps of a transition processing method for video data according to an embodiment of this application;
FIG. 13 conceptually shows an example of replacing an original transition effect in video data 1;
FIG. 14 is an eighth example diagram of display interfaces according to an embodiment of this application;
FIG. 15 is a schematic composition diagram of a chip system according to an embodiment of this application.
Detailed Description
In the following, the terms "first" and "second" are used for descriptive purposes only and should not be understood as indicating or implying relative importance or implicitly indicating the number of the technical features referred to. Thus, a feature defined with "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of this embodiment, unless otherwise specified, "multiple" means two or more.
The implementation of this embodiment will be described in detail below with reference to the accompanying drawings.
An embodiment of this application provides a transition processing method for video data, which can be applied to an electronic device with multiple cameras. With the method provided by the embodiment of this application, the electronic device can respond to a user's operation and automatically process video data, for example, adding transition effects, configuring video music, and so on. While ensuring that the added transition effects match the video music, the method reduces the operational complexity of editing video data and improves the human-computer interaction efficiency of video creation.
Exemplarily, the electronic device in the embodiment of this application may be a mobile phone, a tablet computer, a smart watch, a desktop computer, a laptop, a handheld computer, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a cellular phone, a personal digital assistant (PDA), an augmented reality (AR) / virtual reality (VR) device, or another device including multiple cameras. The embodiment of this application does not specially limit the specific form of the electronic device.
The implementation of the embodiments of this application will be described in detail below with reference to the accompanying drawings. Referring to FIG. 1, it is a schematic structural diagram of an electronic device 100 according to an embodiment of this application. As shown in FIG. 1, the electronic device 100 may include: a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, a headset jack 170D, a sensor module 180, a button 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a subscriber identification module (SIM) card interface 195, and the like.
The sensor module 180 may include sensors such as a pressure sensor, a gyroscope sensor, a barometric pressure sensor, a magnetic sensor, an acceleration sensor, a distance sensor, a proximity light sensor, a fingerprint sensor, a temperature sensor, a touch sensor, an ambient light sensor, and a bone conduction sensor.
It can be understood that the structure illustrated in this embodiment does not constitute a specific limitation on the electronic device 100. In other embodiments, the electronic device 100 may include more or fewer components than shown, or combine some components, or split some components, or use a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units. For example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc. Different processing units may be independent devices or may be integrated into one or more processors.
The controller may be the nerve center and command center of the electronic device 100. The controller can generate operation control signals according to instruction opcodes and timing signals, and complete the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache. This memory can store instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs to use the instructions or data again, it can call them directly from this memory, which avoids repeated access, reduces the waiting time of the processor 110, and thus improves the efficiency of the system.
In some embodiments, the processor 110 may include one or more interfaces. The interfaces may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, etc.
It can be understood that the interface connection relationships between the modules illustrated in this embodiment are only schematic illustrations and do not constitute a structural limitation on the electronic device 100. In other embodiments, the electronic device 100 may also adopt interface connection manners different from those in the above embodiment, or a combination of multiple interface connection manners.
The electronic device 100 implements the display function through the GPU, the display screen 194, the application processor, and the like. The GPU is a microprocessor for image processing, connecting the display screen 194 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 194 is used to display images, videos, and the like. The display screen 194 includes a display panel. The display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Miniled, a MicroLed, a Micro-oLed, quantum dot light emitting diodes (QLED), or the like.
The electronic device 100 can implement the shooting function through the ISP, the camera 193, the video codec, the GPU, the display screen 194, the application processor, and the like.
The ISP is used to process the data fed back by the camera 193. For example, when taking a photo, the shutter is opened, light is transmitted through the lens to the camera photosensitive element, the light signal is converted into an electrical signal, and the camera photosensitive element transmits the electrical signal to the ISP for processing, converting it into an image visible to the naked eye. The ISP can also perform algorithm optimization on the noise, brightness, and skin tone of the image, and can also optimize parameters such as exposure and color temperature of the shooting scene. In some embodiments, the ISP may be provided in the camera 193.
The camera 193 is used to capture static images or videos. An object generates an optical image through the lens, which is projected onto the photosensitive element. The photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The photosensitive element converts the light signal into an electrical signal and then transmits the electrical signal to the ISP to be converted into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard format such as RGB or YUV. In some embodiments, the electronic device 100 may include N cameras 193, where N is a positive integer greater than 1.
Exemplarily, the above N cameras 193 may include one or more front cameras and one or more rear cameras. For example, take the above electronic device 100 being a mobile phone as an example. The mobile phone includes at least one front camera, arranged on the front side of the phone, such as the front camera 201 shown in (a) of FIG. 2. In addition, the mobile phone includes at least one rear camera, arranged on the back side of the phone. In this way, the front camera and the rear camera face different directions.
In some embodiments, the electronic device can enable at least one of the above N cameras 193 to shoot and generate corresponding photos or videos. For example, a single front camera of the electronic device 100 is used alone for shooting; or a single rear camera is used alone for shooting; or two front cameras are enabled at the same time for shooting; or two rear cameras are enabled at the same time for shooting; or one front camera and one rear camera are enabled at the same time for shooting, and so on.
It can be understood that enabling a single camera 193 for shooting may be called enabling a single-camera mode, such as the front mode (also called single-front mode) or the rear mode (also called single-rear mode). Enabling multiple cameras 193 at the same time for shooting may be collectively called enabling a multi-camera mode, such as front-front mode, front-rear mode, rear-rear mode, and picture-in-picture mode.
Take enabling one front camera and one rear camera at the same time as an example. After one front camera and one rear camera are enabled for photographing at the same time, the electronic device can render and merge the image frames collected by the front camera and the rear camera. The rendering and merging may be stitching the image frames collected by different cameras. For example, after taking a photo in portrait orientation in front-rear mode, the image frames collected by different cameras can be stitched top and bottom. For another example, after taking a photo in landscape orientation in rear-rear mode, the image frames collected by different cameras can be stitched left and right. For another example, after taking a photo in picture-in-picture mode, the image frame collected by one camera can be embedded in the image frame collected by the other camera. Then encoding is performed to generate the photo.
In addition, after one front camera and one rear camera are enabled at the same time for video shooting, the front camera collects one video stream and buffers it, and the rear camera collects one video stream and buffers it. Then, the electronic device 100 renders and merges the two buffered video streams frame by frame, that is, renders and merges the video frames of the two video streams whose collection time points are the same or match. After that, encoding is performed to generate a video file.
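The frame-by-frame merge of two buffered streams described above can be sketched as pairing frames whose capture timestamps match within a tolerance. This is an illustrative sketch only: the list-of-tuples stream model, the tolerance value, and the function name are assumptions; a real pipeline would composite the two images and then encode.

```python
# Sketch: pair frames from two buffered camera streams by capture time,
# then "merge" each matched pair (here just a tuple label; a real
# implementation would composite the two images before encoding).

def merge_streams(stream_a, stream_b, tol_ms=10):
    """stream_a/stream_b: lists of (timestamp_ms, frame) sorted by time.
    Returns merged frames for timestamps that match within tol_ms."""
    merged, j = [], 0
    for ts_a, frame_a in stream_a:
        while j < len(stream_b) and stream_b[j][0] < ts_a - tol_ms:
            j += 1  # skip frames in stream_b with no counterpart
        if j < len(stream_b) and abs(stream_b[j][0] - ts_a) <= tol_ms:
            merged.append((ts_a, (frame_a, stream_b[j][1])))
            j += 1
    return merged

front = [(0, "f0"), (33, "f1"), (66, "f2")]
rear = [(2, "r0"), (34, "r1"), (100, "r2")]
assert merge_streams(front, rear) == [(0, ("f0", "r0")), (33, ("f1", "r1"))]
```

Frames without a close-enough counterpart (such as `f2`/`r2` above, 34 ms apart) are dropped rather than merged, which is one simple way to keep the two views of the composited output in sync.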
The digital signal processor is used to process digital signals; in addition to digital image signals, it can also process other digital signals. For example, when the electronic device 100 selects a frequency point, the digital signal processor is used to perform Fourier transform and the like on the frequency point energy.
The video codec is used to compress or decompress digital video. The electronic device 100 may support one or more video codecs. In this way, the electronic device 100 can play or record videos in multiple encoding formats, such as moving picture experts group (MPEG) 1, MPEG2, MPEG3, and MPEG4.
The NPU is a neural-network (NN) computing processor. By drawing on the structure of biological neural networks, for example the transfer mode between human brain neurons, it quickly processes input information and can also continuously self-learn. Applications such as intelligent cognition of the electronic device 100 can be implemented through the NPU, for example: image recognition, face recognition, speech recognition, text understanding, and the like.
The audio module 170 is used to convert digital audio information into an analog audio signal for output, and also to convert an analog audio input into a digital audio signal. The audio module 170 can also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be provided in the processor 110, or some functional modules of the audio module 170 may be provided in the processor 110. The speaker 170A, also called a "horn", is used to convert an audio electrical signal into a sound signal. In this way, the electronic device 100 can play audio data, such as video music.
The pressure sensor is used to sense pressure signals and can convert pressure signals into electrical signals. In some embodiments, the pressure sensor may be provided on the display screen 194. The gyroscope sensor can be used to determine the motion posture of the electronic device 100; when the electronic device 100 is stationary, the magnitude and direction of gravity can be detected. It can also be used to recognize the posture of the electronic device 100, and is applied to applications such as landscape/portrait switching. The touch sensor, also called a "touch panel", may be provided on the display screen 194; the touch sensor and the display screen 194 form a touchscreen, also called a "touch screen". The touch sensor is used to detect touch operations acting on or near it, and can pass the detected touch operation to the application processor to determine the type of the touch event.
For clarity and brevity of the following embodiments, a brief introduction to related concepts or technologies is given first:
After the user shoots a video with the electronic device, the user can edit the shot video by operating the electronic device, for example, configuring video music, adding animation effects, adding transition effects, and so on. In this way, the re-created video is more vivid and rich, and also conforms to the user's creative intent. Among these, adding transition effects not only makes the video content transition more naturally, but also enriches the content presented by the video.
However, in the related art, when a transition effect already exists in the video, if the user manually changes the original transition effect, the operation process is very complicated; if the user does not change it, the original transition effect may not match the style of the video music selected by the user. Obviously, on the premise of ensuring the watchability of the video, the human-computer interaction efficiency of adding transition effects needs to be improved.
To address this problem, an embodiment of this application provides a transition processing method for video data. The method of the embodiment of this application is described below taking the above electronic device 100 being a mobile phone as an example.
As shown in (a) of FIG. 2, the mobile phone includes a home screen. The home screen, also called the desktop 202, may be the user interface displayed after the phone is unlocked. The home screen may include icons of installed applications (APPs), such as the icon 203 of the camera APP.
In some embodiments, the mobile phone can receive a user operation on the home screen and launch the APP indicated by the operation. Exemplarily, as shown in (a) of FIG. 2, the mobile phone can receive a user operation on the icon 203, such as a click operation, and in response, launch the camera APP. After the camera APP is launched, an application interface provided by the camera APP can be displayed, for example, the viewfinder interface for performing the photographing function, that is, the interface 204 shown in (b) of FIG. 2.
In some embodiments, the user can switch between different function modes of the camera APP in the interface 204, such as the portrait mode, photo mode, video mode, and multi-lens video mode. That is, the mobile phone can receive the user's operation 1 in the interface 204, where operation 1 is used to instruct the camera APP to switch between different function modes.
Exemplarily, the interface 204 includes controls corresponding to multiple function modes of the camera APP, such as a portrait control, a photo control, a video control, and a multi-lens video control. The photo control is in the selected state, prompting the user that the current viewfinder interface is used for the photographing function.
While displaying the interface 204, the mobile phone can receive the user's operation on any of the portrait control, video control, or multi-lens video control, determine the switched function mode based on the operation, and display the viewfinder interface before executing that function mode. For example, when the phone receives the user's operation on the portrait control, such as a click operation, it can display the viewfinder interface before executing the portrait shooting function; at the same time, the portrait control is in the selected state. When the phone receives the user's operation on the video control, such as a click operation, it can display the viewfinder interface before executing the video function; at the same time, the video control is in the selected state. In addition, as shown in (b) of FIG. 2, when the phone receives the user's operation on the multi-lens video control, such as a click operation, it can display the viewfinder interface before executing the multi-lens video function, that is, the interface 205 shown in (c) of FIG. 2; at the same time, the multi-lens video control is in the selected state. It can be understood that the interface 205 is one example of the fourth interface; in other embodiments, the viewfinder interface before executing the conventional video function may also be called the fourth interface.
Both the video function and the multi-lens video function can record video data; the difference between the two lies in the lens mode enabled when recording starts. Under the video function, the phone can, in response to a user operation, enable a single-camera lens mode such as single-front mode or single-rear mode for video shooting. Under the multi-lens video function, the phone can, in response to a user operation, enable a multi-camera lens mode such as front-rear mode, rear-rear mode, or picture-in-picture mode for video shooting. The method provided by the embodiment of this application is applicable not only to video data shot under the video function, but also to video data shot under the multi-lens video function, and the implementation principle is the same. In subsequent embodiments, the multi-lens video function is mainly used as an example to introduce the method provided by the embodiment of this application.
Before the multi-lens video function is executed, as shown in (c) of FIG. 2, the viewfinder interface displayed by the phone (that is, interface 205) includes multiple viewfinder frames, such as viewfinder frame 206 and viewfinder frame 207. The arrangement of viewfinder frame 206 and viewfinder frame 207 is related to the posture of the phone. For example, in a scenario where the gyroscope sensor of the phone recognizes that the phone is in portrait orientation, viewfinder frame 206 and viewfinder frame 207 are arranged top and bottom. In a scenario where the gyroscope sensor recognizes that the phone is in landscape orientation, viewfinder frame 206 and viewfinder frame 207 are arranged left and right.
In addition, viewfinder frame 206 and viewfinder frame 207 each correspond to a camera. For example, viewfinder frame 206 corresponds to camera 1 (e.g., rear camera a), so viewfinder frame 206 can be used to display the video stream uploaded by camera 1. Viewfinder frame 207 corresponds to camera 2 (e.g., the front camera), so viewfinder frame 207 can be used to display the video stream uploaded by camera 2. Understandably, the camera corresponding to each viewfinder frame (e.g., viewfinder frame 206 and viewfinder frame 207) can be adjusted according to user operations. After the camera corresponding to a viewfinder frame changes, the lens mode used by the phone also changes accordingly. Several lens-mode switching scenarios will be introduced in subsequent embodiments and are not described here.
In addition, in the interface 205, the phone can receive the user's operation 2, which is used to trigger the phone to start video shooting directly without selecting any special effect. This operation 2 may also be called the third operation. Exemplarily, the interface 205 includes a control 208 for starting shooting, that is, the first control. When the phone receives the user's third operation on the control 208, such as a click operation, it can display the viewfinder interface during video recording, which may be called the fifth interface, for example, the interface 209 shown in (d) of FIG. 2. The interface 209 is the recording viewfinder interface in the first lens mode, for example, the recording viewfinder interface corresponding to the front-rear mode.
The interface 209 also includes viewfinder frame 206 and viewfinder frame 207. In this way, the interface 209 of the phone can display the video streams collected in real time by the front camera and rear camera a. At the same time, the phone can also render and merge the video streams collected by the front camera and rear camera a, then encode them to generate video data, and save it. During shooting, the video frames of the video data gradually increase.
In addition, during video shooting, the phone can receive the user's operation 3 in the interface 209, where operation 3 may be an operation instructing a lens mode switch. In response to operation 3, the phone can enable a different camera or a different camera combination to collect video streams, so that the user can create videos with diverse scenes and rich content.
Exemplarily, the interface 209 may include a control for switching the lens mode, also called the second control, such as the control 301 shown in (a) of FIG. 3. The icon of the control 301 indicates the currently enabled lens mode. After the phone receives the user's operation on the control 301, as shown in (b) of FIG. 3, a lens mode selection window, such as window 302, can be displayed in the interface 209. The window 302 lists multiple selectable lens modes, such as front-rear mode, rear-rear mode, picture-in-picture mode, single-front mode, and single-rear mode. In the window 302, the front-rear mode is in the selected state. In this scenario, the phone can receive the user's selection operation on the rear-rear mode, picture-in-picture mode, single-front mode, or single-rear mode, and switch the lens mode in response to the selection operation. The operation that implements the lens mode switch may be called the fourth operation.
For example, when the phone receives the user's selection of the picture-in-picture mode, the phone can switch the lens mode to picture-in-picture mode. That is, as shown in (c) of FIG. 3, the phone can switch to displaying the interface 303. The interface 303 is one example of the sixth interface. It can be understood that the sixth interface refers to the corresponding video recording interface after the lens mode is switched (to what is also called the second lens mode). The interface 303 also includes viewfinder frame 206 and viewfinder frame 207, where viewfinder frame 206 continues to display the video stream uploaded by rear camera a, and viewfinder frame 207 continues to display the video stream uploaded by the front camera.
In addition, during the switch, viewfinder frame 207 shrinks while viewfinder frame 206 enlarges, and viewfinder frame 207 is superimposed on viewfinder frame 206. During this process, the camera parameters of the front camera and rear camera a are also adjusted. After the camera parameters are adjusted, the video streams collected by the front camera and rear camera a may not be uploaded in time, which will cause a stalled segment in the shot video data 1. That is, as shown in (c) of FIG. 3, viewfinder frame 206 and viewfinder frame 207 of the interface 303 briefly go black. Of course, after the camera parameter configuration is completed, as shown in (d) of FIG. 3, viewfinder frame 207 of the interface 303 displays in real time the video stream uploaded by the front camera, and viewfinder frame 206 displays in real time the video stream uploaded by rear camera a.
Referring to FIG. 4, FIG. 4 conceptually shows the effect on video data 1 after a lens mode switch (from front-rear mode to picture-in-picture mode) during video shooting. On the relative timeline of video data 1, between time 00:00 and time 00:05, both the front camera and rear camera a can upload video streams normally; at the same time, the phone can also encode normally based on the video streams uploaded by the front camera and rear camera a, generating multiple consecutive video frames, also called video segment 1. The relative timeline is a timeline created based on video data 1; time 00:00 on the relative timeline corresponds to the collection time of the first video frame (also called the first frame) of video data 1.
After time 00:05, the phone detects the operation instructing the switch to picture-in-picture mode; time 00:05 is one example of the first time point. This operation can briefly affect the video stream return of the front camera and rear camera a, so a problem of missing video frames occurs between time 00:05 and time 00:06. After time 00:06, the front camera and rear camera a return to normal, and consecutive video frames can again be collected and encoded, which may be called video segment 2.
In the embodiment of this application, as shown in FIG. 5, the phone can insert multiple substitute frames 1 between time 00:05 and time 00:06 so that video data 1 is coherent. The substitute frames 1 inserted after time 00:05 (that is, the first time point) may also be called the video frames corresponding to the first time point.
In some examples, after the phone determines that the user instructs the switch to picture-in-picture mode, the phone can freeze video frame 401 (the last frame of video segment 1) to obtain substitute frame 1. In this scenario, the picture content displayed by substitute frame 1 is the same as video frame 401. After receiving the video streams returned by the cameras again, the phone cancels the frame freeze on video frame 401.
In other examples, after determining that the user instructs the switch to picture-in-picture mode, the phone can insert preconfigured image frames after video frame 401, for example, black image frames or white image frames, and stop inserting substitute frames 1 after the video streams returned by the cameras are received again. Understandably, the inserted image frames may also be collectively called substitute frames 1.
In other examples, after determining that the user instructs the switch to picture-in-picture mode, the phone can mark video frame 401. In this way, after the shooting of video data 1 is completed, the phone automatically inserts multiple substitute frames 1 after video frame 401. The substitute frame 1 may be an image frame identical to video frame 401, or a preconfigured white or black image frame.
For another example, when the interface 209 includes the window 302, as shown in (a) of FIG. 6, the phone can receive the user's selection of the rear-rear mode. In response to the selection operation, the phone can switch the lens mode to rear-rear mode. That is, as shown in (b) of FIG. 6, the phone can switch to displaying the interface 601, which also includes viewfinder frame 206 and viewfinder frame 207. Viewfinder frame 206 is used to display the video stream uploaded by rear camera a, and viewfinder frame 207 is used to display the video stream uploaded by rear camera b.
In addition, besides selecting the rear-rear mode in the window 302, while the interface 209 is displayed, that is, while the phone is shooting video in front-rear mode, the phone can also switch the lens mode to rear-rear mode when it receives the user's operation on the control 602, such as a click operation.
In the process of switching to rear-rear mode, the relative position and size of viewfinder frame 206 and viewfinder frame 207 remain unchanged. The phone enables rear camera b, turns off the front camera, and associates rear camera b with viewfinder frame 207. In this way, viewfinder frame 206 of the phone continues to display the video stream uploaded by rear camera a, while viewfinder frame 207 displays the video stream uploaded by rear camera b.
During this process, the return of the video stream from rear camera a is not affected; as shown in (b) of FIG. 6, the picture displayed in viewfinder frame 206 is unaffected during the lens switch. However, due to hardware response delay, there is a time gap between the front camera turning off and rear camera b uploading its video stream normally. During this time gap, the picture display of viewfinder frame 207 is affected; as shown in (b) of FIG. 6, viewfinder frame 207 briefly goes black. Of course, after rear camera b can upload its video stream normally, as shown in (c) of FIG. 6, viewfinder frame 207 of the interface 601 can display in real time the video stream uploaded by rear camera b, and viewfinder frame 206 displays in real time the video stream uploaded by rear camera a. In addition, the interface 601 is also an example of the sixth interface.
Referring to FIG. 7A, FIG. 7A conceptually shows an example of generating video data 1 when a lens mode switch occurs (from front-rear mode to rear-rear mode) during video shooting.
As shown in (a) of FIG. 7A, on the relative timeline of video data 1, there are multiple video frames with abnormal pictures between time 00:05 (the first time point) and time 00:06. These abnormal video frames are caused by the lens mode switch.
In some embodiments, the phone can mark video frames after instructing rear camera b to start, and stop marking after determining that rear camera b is returning its video stream normally. In this way, as shown in (b) of FIG. 7A, the phone can remove the video frames corresponding to the marked positions, obtaining video segment 1 and video segment 2. Then, to ensure the coherence of the video data, as shown in (c) of FIG. 7A, the phone can also add substitute frames 1, that is, the video frames corresponding to the first time point, between video segment 1 and video segment 2, thereby obtaining video data 1.
In other embodiments, after the phone determines that a new camera (that is, rear camera b) needs to be enabled, it can freeze the last collected video frame; for example, freeze the video frame 701 in FIG. 7B to obtain substitute frame 1, that is, the video frames corresponding to the first time point. After determining that rear camera b is returning its video stream normally, the phone cancels the frame freeze on video frame 701 and encodes normally based on the video streams uploaded by rear camera a and rear camera b, generating video data 1.
It can be seen that the addition of substitute frames 1 mentioned in the above embodiments may occur after video shooting is completed, or during video shooting.
In addition, switching the lens mode does not interrupt normal video shooting. For example, after the phone switches to rear-rear mode, it can display the interface 601 and continue shooting. In this way, the number of video frames corresponding to video data 1 continues to increase.
The above examples enumerate switching the lens mode from front-rear mode to picture-in-picture mode, and from front-rear mode to rear-rear mode. In practice, similar problems exist when switching between other lens modes, and they can also be solved by inserting substitute frames as described in the foregoing examples, which will not be repeated here.
In addition, in some embodiments, the phone can also receive, in the interface 601, the user's operation 4 instructing to stop shooting. The phone can then stop video shooting in response to operation 4.
Exemplarily, as shown in (a) of FIG. 8, the interface 601 further includes a control for stopping shooting, also called the third control, such as control 801. The phone can receive the user's fifth operation on control 801, such as a click operation, and in response, stop shooting and save the shot video data 1. In addition, as shown in (b) of FIG. 8, the interface 205 is displayed again. After video data 1 (that is, the first video data) is shot and saved, the displayed interface 205 may also be called the first interface. In this scenario, the first interface is actually the viewfinder preview interface provided by the camera application.
Also exemplarily, the phone can stop shooting and save the shot video data 1 according to a user operation instructing to exit the camera APP, such as an upward swipe on the interface 601. In addition, the phone can display the home screen again.
After video data 1 is saved, the phone can display the shot video data 1 according to a user operation, making it convenient for the user to view or edit video data 1.
Exemplarily, in the scenario where shooting has stopped and the phone displays the interface 205, the interface 205 includes a thumbnail of video data 1, such as icon 802, which may also be called the first thumbnail. While displaying the interface 205, the phone can receive the user's operation on icon 802 and, in response, display a video editing interface, such as the interface 803 shown in (c) of FIG. 8. The interface 803 is used to display video data 1 and may also be called the second interface.
Also exemplarily, in the scenario where shooting has stopped and the phone displays the interface 205, the phone can display the home screen of the phone, that is, the desktop 202, again according to a user operation instructing to exit the camera APP, such as an upward swipe. The desktop 202 also includes the icon of the gallery APP. In this way, as shown in (a) of FIG. 9, while displaying the desktop 202, the phone can receive a sixth operation, such as a tap, on the icon 901 of the gallery APP, and in response, display the application interface provided by the gallery APP, such as the interface 902 shown in (b) of FIG. 9. Of course, in the scenario where shooting has stopped and the phone displays the home screen, the phone can directly receive a tap on the icon 901 of the gallery APP and, in response, display the application interface provided by the gallery APP, such as the interface 902 shown in (b) of FIG. 9.
In some embodiments, the interface 902 displays thumbnails of various picture resources and video resources. These picture or video resources may be shot and stored by the phone, for example, the thumbnail 903 of video data 1 (also called the first thumbnail), or may be thumbnails of images, videos, etc. downloaded from the Internet, or thumbnails of images, videos, etc. synchronized to the cloud. In some examples, the interface 902 used to display video resource thumbnails may also be called the first interface. In this scenario, the first interface is actually the application interface provided by the gallery application.
In some embodiments, the phone can receive the user's selection operation on the thumbnail of any video in the interface 902, and in response, display the corresponding video editing interface. For example, in response to the user's first operation on the thumbnail 903 in the interface 902, such as a click operation, the phone can display the interface 803 shown in (c) of FIG. 9; the interface 803 is the video editing interface corresponding to video data 1 and may also be called the second interface.
In some embodiments, the phone can play the corresponding video according to the user's operation on the above video editing interface. Exemplarily, as shown in FIG. 10, the interface 803 includes a control for playing video data 1, such as control 1001. When the phone receives the user's click operation on control 1001, the phone plays video data 1 in the interface 803.
It can be understood that, when substitute frames 1 have been added to video data 1, the picture will appear frozen during playback. In the embodiment of this application, while adding substitute frames to video data 1, the phone can also superimpose a transition effect on the multiple substitute frames 1, for example, called transition effect 1 or the first transition effect. The substitute frames 1 on which the first transition effect is actually superimposed may also be called the first video frames.
In some examples, transition effect 1 may be any type of transition effect pre-designated in the phone, such as one of the left-shift transition, right-shift transition, rotation transition, dissolve transition, blur transition, melt transition, black-field transition, white-field transition, zoom-in transition, zoom-out transition, up-shift transition, and down-shift transition. In addition, it should be noted that, among the above transition effects, the left-shift and right-shift transitions are only applicable to videos shot in portrait orientation, while the up-shift and down-shift transitions are only applicable to videos shot in landscape orientation. In other examples, transition effect 1 may be a transition randomly determined by the phone from the above multiple types of transition effects.
In this way, during playback of video data 1, transition effect 1 can better connect the video segments before and after substitute frames 1; that is, the transition between video segment 1 and video segment 2 is smoother, which improves the user's viewing experience and also improves the shooting quality of the video.
In addition, in the region where transition effect 1 is added, the phone can also mark the substitute frames 1 on which the transition effect is actually superimposed, so that the phone can identify the position where the transition is added.
In other embodiments, the phone can automatically perform secondary creation on video data 1 according to the user's operation on the above video editing interface, for example, configuring video music, adding transition effects, and so on. Exemplarily, as shown in FIG. 11, the interface 803 further includes a control for editing video data 1, such as the one-click blockbuster control 1101. When the phone receives the user's operation on the one-click blockbuster control 1101, for example, called the second operation, the phone can automatically edit video data 1.
在一些实施例中,如图12所示,手机可以自动编辑视频数据1,可以包括以下步骤:
S101,手机确定与视频数据1匹配的效果模板。
在一些实施例中,手机内预先可以配置多类风格的效果模板,每一个效果模板都对应有一首视频音乐。此外,效果模板还对应有滤镜、特效、转场、贴纸。
在一些实施例中,手机可以利用人工智能模型,分析视频数据1的画面内容,确定与视频数据1匹配的效果模板,也即,第一效果模板,又称为效果模板1。例如,手机的人工智能模型依据视频数据1的画面内容,查找相似视频。并获取相似视频所用的视频音乐。这样,再依据获取的视频音乐,确定出对应的效果模板1。再例如,手机的人工智能模型依据视频数据1的画面内容,查找相似视频。依据相似视频的风格名称,确定属于相同风格的多个效果模板。然后,从确定出的多个效果模板中,随机确定效果模板1。在另一些实施例中,手机也可以随机确定出一效果模板,作为效果模板1。
S102: The mobile phone processes video data 1 according to effect template 1.
For example, the mobile phone may adjust the volume of the original audio track of video data 1 to zero, and then add the video music of effect template 1 (that is, the first music) to video data 1, so that the video music fits the video pictures of video data 1. In other embodiments, the volume of the original audio track may also be adjusted to another decibel value according to an operation of the user.
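The audio step above (zeroing the original track, then laying the template's music alongside it) can be sketched as follows. This is a minimal illustration, not an API from this application: the helper name and the toy PCM sample values are assumptions.

```python
def set_track_volume(samples, gain):
    """Scale 16-bit PCM samples by `gain`; a gain of 0.0 mutes the track."""
    return [int(s * gain) for s in samples]

original_track = [1200, -3400, 560, -90]        # toy PCM samples of the original audio
muted = set_track_volume(original_track, 0.0)   # volume adjusted to zero
# muted == [0, 0, 0, 0]; the template's music track is then added alongside.
# Adjusting to another level instead of muting is just a different gain:
quieter = set_track_volume(original_track, 0.5)
```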
As another example, the mobile phone may add the filters, special effects, transitions, and stickers corresponding to effect template 1 to video data 1. In the process of adding transition effects, the mobile phone may not only add new transition effects to video data 1 but may also replace the transition effects originally present in video data 1 (for example, transition effect 1 superimposed on replacement frame 1).
It can be understood that different transition effects have different degrees of fit with an effect template. In general, a transition effect with a higher degree of fit is relatively more suitable for the style of that effect template and its corresponding video music, while a transition effect with a lower degree of fit is relatively less suitable. In this embodiment, replacing transition effect 1 alleviates the problem that, after the video data is processed with an effect template, the original transition effect 1 clashes with the style of the processed video.
In some embodiments, as shown in FIG. 13, the mobile phone may first identify whether transition effect 1 exists in video data 1, for example, by detecting whether a mark exists in video data 1. When the mark is identified, the marked video frames (that is, the replacement frames 1 on which transition effect 1 is superimposed) are deleted, so that video data 1 is divided into video clip 1 and video clip 2. The mobile phone then generates multiple replacement frames 2 and multiple replacement frames 3, where replacement frame 2 may be identical to the tail frame of video clip 1 and replacement frame 3 is identical to the head frame of video clip 2. The tail frame of video clip 1 and the head frame of video clip 2 may be collectively referred to as the second video frame.
In some examples, replacement frame 2 may be an image frame obtained after the mobile phone freezes the tail frame of video clip 1, and replacement frame 3 may be an image frame obtained after the mobile phone freezes the head frame of video clip 2. In addition, the total number of replacement frames 2 and replacement frames 3 is the same as the number of deleted video frames, ensuring that the final length of video data 1 is not affected. Then, based on the degrees of fit between effect template 1 and the various transition effects, the mobile phone determines transition effect 2 and superimposes transition effect 2 on replacement frames 2 and 3, thereby joining video clip 1 and video clip 2.
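The split-and-freeze bookkeeping described above (delete the marked transition frames, then pad both sides with frozen copies so the total length is preserved) can be sketched as follows. The function name and the roughly even split between the two freeze runs are illustrative assumptions; this application only requires that the total count of replacement frames equal the count of deleted frames.

```python
def rebuild_with_freeze_frames(frames, marked):
    """Remove the marked (transition) frames, then keep total length unchanged:
    about half of the gap repeats the tail frame of clip 1, the rest repeats
    the head frame of clip 2."""
    kept = [f for i, f in enumerate(frames) if i not in marked]
    removed = len(frames) - len(kept)
    split = min(marked)                 # index where the deleted run began
    seg1, seg2 = kept[:split], kept[split:]
    n1 = removed // 2
    n2 = removed - n1
    freeze1 = [seg1[-1]] * n1           # replacement frames 2: frozen tail of clip 1
    freeze2 = [seg2[0]] * n2            # replacement frames 3: frozen head of clip 2
    return seg1 + freeze1 + freeze2 + seg2

video = ["a1", "a2", "T", "T", "T", "T", "b1", "b2"]   # T = marked transition frames
out = rebuild_with_freeze_frames(video, {2, 3, 4, 5})
# out == ["a1", "a2", "a2", "a2", "b1", "b1", "b1", "b2"], same length as the input
```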
In some embodiments, the degree of fit between an effect template and a transition effect may be quantified as a match weight. In this way, when determining transition effect 2, the mobile phone may use the match weights between effect template 1 and the various transition effects to randomly select, from the multiple types of transition effects, a transition effect 2 matching effect template 1. Understandably, a transition effect with a higher match weight is relatively more likely to be selected as the transition effect 2 matching effect template 1, while a transition effect with a lower match weight is relatively less likely to be selected.
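The weighted random selection described here can be sketched with Python's `random.choices`. The transition names, weight values, and orientation rules below merely mirror the examples in this section (zero-weight transitions are never picked; slide-left/right apply only to portrait video, slide-up/down only to landscape) and are assumptions for illustration, not values from Table 1.

```python
import random

def pick_transition(weights, orientation="landscape", rng=random):
    """Randomly pick a transition; a higher match weight makes a transition
    more likely. Zero-weight and orientation-incompatible transitions are
    excluded from the pool entirely."""
    portrait_only = {"slide_left", "slide_right"}
    landscape_only = {"slide_up", "slide_down"}
    pool = {
        name: w for name, w in weights.items()
        if w > 0
        and not (orientation == "landscape" and name in portrait_only)
        and not (orientation == "portrait" and name in landscape_only)
    }
    names, ws = zip(*pool.items())
    return rng.choices(names, weights=ws, k=1)[0]

# Illustrative weights in the spirit of Table 1 (example values only)
weights = {"dissolve": 50, "blur": 0, "fade_black": 90,
           "slide_left": 50, "slide_up": 50}
chosen = pick_transition(weights, orientation="portrait")
assert chosen in {"dissolve", "fade_black", "slide_left"}
```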
In some examples, the match weights between effect templates and transition effects may be pre-configured, as shown in Table 1 below:

Table 1
(Table 1 is reproduced as images in the original publication; its contents are summarized in the following paragraphs.)
Table 1 above illustrates the correspondence between different effect templates and their video music, styles, and match weights with different transition effects. The percentage value given for each transition effect in the table is the match weight between that transition effect and the effect template. The style corresponding to effect template 1 is also referred to as the first style.
Take the "Hello Summer" effect template recorded in Table 1 as an example. The match weight between this effect template and the dissolve transition is 50%; that is, the dissolve transition has a 50% probability of being selected as the matching transition effect. The match weights with the blur transition and the melt transition are both 0%, so neither will ever be selected as the matching transition effect. The match weight with the slide-up transition is 50%; that is, in the scenario where the mobile phone processes landscape video data 1, the slide-up transition has a 50% probability of being selected, and the match weight with the slide-down transition is likewise 50% in that landscape scenario. The match weight with the slide-left transition is 50%; that is, in the scenario where the mobile phone processes portrait video data 1, the slide-left transition has a 50% probability of being selected, and the match weight with the slide-right transition is likewise 50% in that portrait scenario. The match weights with the fade-to-black, fade-to-white, zoom-in, and zoom-out transitions are each 90%; that is, each of these transitions has a 90% probability of being selected as the matching transition effect. The match weight with the rotate transition is 30%; that is, the rotate transition has a 30% probability of being selected as the matching transition effect.
That is, the mobile phone may use the match weight corresponding to each transition effect to randomly select a transition effect 2 to replace transition effect 1. This selection method is highly flexible while still ensuring that, with high probability, the selected transition effect 2 is strongly associated with effect template 1. In addition, in the scenario of adding a new transition effect to video data 1, the mobile phone may likewise use the match weights corresponding to the transition effects to randomly determine a transition effect 3 (also referred to as the third transition effect) matching effect template 1, and add transition effect 3 to video data 1, for example, to the video frames corresponding to a second time point in video data 1. The video frames corresponding to the second time point may be the video frames located before and after the second time point in video data 1. In this way, the style presented by the processed video data 1 can be closer to the expected effect of the effect template.
In other embodiments, effect template 1 may also have an association identifier with one or more transition effects. When processing video data 1 with effect template 1, a transition effect having such an association identifier may be preferentially selected as transition effect 2.
In other embodiments, after processing video data 1 with the matching effect template 1, the mobile phone may also, according to an operation by the user, change the effect template used to process video data 1, or separately change the video music used.
For example, as shown in (a) of FIG. 14, after the mobile phone has processed video data 1 with an effect template, the mobile phone may display a preview interface, such as interface 1401, referred to as the third interface. The third interface is used to display video data 2, also referred to as the second video data, where video data 2 is the video obtained after processing video data 1 with the effect template. The second video data includes the video frames of video data 1, the replacement frames (for example, replacement frames 2 and 3), the video music corresponding to effect template 1 (referred to as the first music), and transition effect 2 (also referred to as the second transition effect). The mobile phone may receive an operation 6 by the user on interface 1401, for example, a tap on style control 1402 of interface 1401, and in response to operation 6, display interface 1403 as shown in (b) of FIG. 14. Interface 1403 is a guide interface that guides the user in selecting an effect template. Interface 1403 includes multiple template windows indicating different effect templates, for example, window 1404, window 1405, window 1406, and window 1407, where window 1404 indicates the effect template named "Hello Summer", window 1405 indicates the effect template named "sunny", window 1406 indicates the effect template named "HAPPY", and window 1407 indicates the effect template named "Little Beauty". Among these effect templates, the template window of the "Hello Summer" effect template is in a selected state, indicating that effect template 1 is the "Hello Summer" effect template. In this way, the mobile phone may determine the effect template 2 selected by the user according to an operation by the user on another template window. For example, after the mobile phone receives a tap operation by the user on window 1405, preview window 1408 in interface 1403 may display a sample clip of the "sunny" effect template. Then, if the mobile phone receives an operation by the user on control 1409 in interface 1403, it may determine that the "sunny" effect template is the selected effect template 2. Afterwards, the mobile phone may process the original video data 1 with effect template 2 to obtain video data 1 conforming to the style of effect template 2 and, as shown in (c) of FIG. 14, display interface 1401 again, which now includes the video data 1 processed based on effect template 2.
In addition, if, while displaying interface 1403, the mobile phone receives an operation by the user on control 1409 without having received any operation by which the user selects another effect template, the mobile phone may determine that the user instructs to reprocess the original video data 1 with effect template 1. During reprocessing, the match weights between the transition effects and effect template 1 may still be used to randomly redetermine transition effect 2 and transition effect 3 for reprocessing the original video data 1. It can be understood that the newly randomized transition effects 2 and 3 may differ from the transition effects determined the first time effect template 1 was used, so the visual effect of the reprocessed video data 1 will also differ, increasing the diversity of one-tap movie creation.
As another example, the mobile phone may receive an operation 7 by the user on interface 1401, for example, a tap on music control 1410 of interface 1401, and in response to operation 7, switch to different video music. The replacement video music may be music of the same style corresponding to effect template 2, or a randomly selected piece of music; this is not limited here.
In addition, interface 1401 further includes a control 1411 indicating confirmation. As shown in (c) of FIG. 14, after the mobile phone receives an operation by the user on control 1411, such as a tap operation, it saves the processed video data 1; for example, the video data 1 processed based on the effect template is also referred to as video data 2. In addition, the mobile phone may also display the video editing interface corresponding to video data 2, such as interface 1412 shown in (d) of FIG. 14. Interface 1412 may display video data 2, and the mobile phone may play video data 2 according to an operation by the user in interface 1412.
In a possible embodiment, interface 1401 may further include a control indicating cancellation of the effect template, such as control 1413 shown in (a) of FIG. 14. The mobile phone may receive a tap operation by the user on control 1413, delete the video data 1 processed based on the effect template, and display interface 803 again. When the mobile phone displays interface 803 again, interface 803 still includes one-tap movie control 1101. If the mobile phone again receives an operation by the user on one-tap movie control 1101, it may again determine a matching effect template and process video data 1 again with the newly determined effect template; for the processing procedure, reference may be made to the foregoing embodiments, and details are not repeated here. In addition, for the same piece of video data 1, the mobile phone may ensure that the effect templates determined by two consecutive one-tap movie operations are different, increasing the diversity of secondary creation of the video data.
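The rule that two consecutive one-tap movie runs on the same video never determine the same effect template can be sketched as follows; the function and template names are illustrative assumptions, not identifiers from this application.

```python
import random

def next_template(templates, last=None, rng=random):
    """Pick an effect template for a one-tap movie run, never repeating the
    template determined by the immediately previous run on the same video."""
    candidates = [t for t in templates if t != last]
    return rng.choice(candidates)

templates = ["HelloSummer", "sunny", "HAPPY", "LittleBeauty"]
first = next_template(templates)                   # first one-tap movie run
second = next_template(templates, last=first)      # cancel, then run again
assert second != first
```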
An embodiment of this application further provides an electronic device. The electronic device may include a memory and one or more processors, where the memory is coupled to the processors. The memory is configured to store computer program code, and the computer program code includes computer instructions. When the processors execute the computer instructions, the electronic device can be caused to perform the steps performed by the mobile phone in the foregoing embodiments. Of course, the electronic device includes, but is not limited to, the above memory and one or more processors. For example, for the structure of the electronic device, reference may be made to the structure of the mobile phone shown in FIG. 1.
An embodiment of this application further provides a chip system, which may be applied to the electronic device in the foregoing embodiments. As shown in FIG. 15, the chip system includes at least one processor 2201 and at least one interface circuit 2202. Processor 2201 may be the processor in the above electronic device. Processor 2201 and interface circuit 2202 may be interconnected through lines. Processor 2201 may receive computer instructions from the memory of the above electronic device through interface circuit 2202 and execute them. When the computer instructions are executed by processor 2201, the electronic device can be caused to perform the steps performed by the mobile phone in the foregoing embodiments. Of course, the chip system may also include other discrete devices, which is not specifically limited in this embodiment of this application.
Through the description of the foregoing implementations, a person skilled in the art can clearly understand that, for convenience and brevity of description, only the division of the above functional modules is used as an example for illustration. In practical applications, the above functions may be allocated to different functional modules as needed; that is, the internal structure of the apparatus is divided into different functional modules to complete all or some of the functions described above. For the specific working processes of the system, apparatus, and units described above, reference may be made to the corresponding processes in the foregoing method embodiments, and details are not repeated here.
The functional units in the embodiments of this application may be integrated into one processing unit, or each unit may physically exist alone, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of the embodiments of this application essentially, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to perform all or some of the steps of the methods described in the embodiments of this application. The foregoing storage medium includes any medium capable of storing program code, such as a flash memory, a removable hard disk, a read-only memory, a random access memory, a magnetic disk, or an optical disc.
The foregoing descriptions are merely specific implementations of the embodiments of this application, but the protection scope of the embodiments of this application is not limited thereto. Any variation or replacement within the technical scope disclosed in the embodiments of this application shall fall within the protection scope of the embodiments of this application. Therefore, the protection scope of the embodiments of this application shall be subject to the protection scope of the claims.

Claims (12)

  1. A video data transition processing method, wherein the method is applied to an electronic device and the method comprises:
    the electronic device displaying a first interface, wherein the first interface comprises a first thumbnail of first video data; the first video data comprises a first transition effect; and the first transition effect is superimposed on multiple consecutive first video frames in the first video data;
    the electronic device receiving a first operation by a user on the first thumbnail;
    the electronic device displaying a second interface in response to the first operation, wherein the second interface is a video editing interface of the first video data and the second interface comprises a one-tap movie control; and
    after receiving a second operation by the user on the one-tap movie control, the electronic device displaying a third interface, wherein the third interface is used to display second video data; the second video data comprises video frames of the first video data, multiple replacement frames, first music, and a second transition effect corresponding to the first music; the second transition effect is superimposed on the multiple replacement frames; and the replacement frames are used to replace the first video frames in the first video data.
  2. The method according to claim 1, wherein,
    before the electronic device displays the first interface, the method further comprises:
    the electronic device displaying a fourth interface, wherein the fourth interface is a viewfinder preview interface provided by a camera application and the fourth interface comprises a first control for instructing to start shooting a video;
    the electronic device receiving a third operation by the user on the first control;
    the electronic device displaying a fifth interface in response to the third operation and starting to record the first video data, wherein the fifth interface is a video recording interface in a first lens mode and the fifth interface comprises a second control for instructing to switch lens modes;
    when a first time point of the first video data is reached during recording, the electronic device displaying a sixth interface in response to a fourth operation by the user on the second control and determining that the video frames corresponding to the first time point are the first video frames, wherein the sixth interface is a video recording interface in a second lens mode and the sixth interface comprises a third control for instructing to stop shooting; and
    the electronic device receiving a fifth operation by the user on the third control; and
    the electronic device displaying the first interface comprises: the electronic device displaying the first interface in response to the fifth operation, wherein the first interface is also a viewfinder preview interface provided by the camera application.
  3. The method according to claim 2, wherein, after the electronic device receives the fifth operation by the user on the third control, the method further comprises:
    the electronic device superimposing the first transition effect on the first video frames.
  4. The method according to claim 1, wherein,
    before the electronic device displays the first interface, the method further comprises: the electronic device displaying a home screen, wherein the home screen comprises an icon of a gallery application; and the electronic device receiving a sixth operation by the user on the icon of the gallery application; and
    the electronic device displaying the first interface comprises: the electronic device displaying the first interface in response to the sixth operation, wherein the first interface is an application interface provided by the gallery application.
  5. The method according to any one of claims 1 to 4, wherein, before the electronic device displays the third interface, the method further comprises:
    the electronic device determining, in response to the second operation, a first effect template from multiple pre-configured effect templates, wherein the first effect template comprises the first music;
    the electronic device deleting the first video frames from the first video data;
    the electronic device freezing a second video frame in the first video data to obtain the replacement frames used to replace the first video frames, wherein the second video frame is the video frame immediately preceding the first video frames or the video frame immediately following the first video frames; and
    the electronic device superimposing the second transition effect on the replacement frames.
  6. The method according to claim 5, wherein the first effect template corresponds to a first style, and determining the first effect template from the multiple pre-configured effect templates comprises:
    the electronic device determining, using a preset artificial intelligence model, that the first video data matches the first style, and the electronic device determining the first effect template from the effect templates belonging to the first style;
    or, the electronic device randomly determining the first effect template from the multiple preset effect templates.
  7. The method according to claim 1, wherein, before the electronic device displays the third interface, the method further comprises:
    the electronic device determining the second transition effect corresponding to the first music.
  8. The method according to claim 7, wherein the electronic device determining the second transition effect corresponding to the first music comprises:
    the electronic device determining, from multiple preset transition effects, the second transition effect that has an association identifier with the first music.
  9. The method according to claim 7, wherein the electronic device determining the second transition effect corresponding to the first music comprises:
    the electronic device determining the second transition effect from multiple preset transition effects based on match weights,
    wherein each of the preset transition effects corresponds to one match weight, and the match weight is a quantified ratio parameter of the degree of fit between the first music and the preset transition effect.
  10. The method according to claim 1, wherein the second video data further comprises a third transition effect; the third transition effect is added to the video frames corresponding to a second time point in the first video data; the third transition effect is one of multiple preset transition effects; and the multiple preset transition effects comprise the second transition effect.
  11. An electronic device, wherein the electronic device comprises one or more processors and a memory; the memory is coupled to the processors and is configured to store computer program code, the computer program code comprising computer instructions; and when the one or more processors execute the computer instructions, the one or more processors are configured to perform the method according to any one of claims 1 to 10.
  12. A computer storage medium, comprising computer instructions, wherein, when the computer instructions run on an electronic device, the electronic device is caused to perform the method according to any one of claims 1 to 10.
PCT/CN2022/094793 2021-06-16 2022-05-24 Video data transition processing method and electronic device WO2022262537A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US18/257,018 US20240106967A1 (en) 2021-06-16 2022-05-24 Video Data Transition Processing Method and Electronic Device
EP22824018.0A EP4240011A4 (en) 2021-06-16 2022-05-24 TRANSITION PROCESSING METHOD FOR VIDEO DATA AND ELECTRONIC DEVICE

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
CN202110676709.3 2021-06-16
CN202110676709 2021-06-16
CN202111439351.9 2021-11-29
CN202111439351 2021-11-29
CN202210056943.0 2022-01-18
CN202210056943.0A CN115484424A (zh) Video data transition processing method and electronic device

Publications (1)

Publication Number Publication Date
WO2022262537A1 true WO2022262537A1 (zh) 2022-12-22

Family

ID=84420486

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/094793 WO2022262537A1 (zh) 2021-06-16 2022-05-24 一种视频数据的转场处理方法及电子设备

Country Status (4)

Country Link
US (1) US20240106967A1 (zh)
EP (1) EP4240011A4 (zh)
CN (1) CN115484424A (zh)
WO (1) WO2022262537A1 (zh)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006279968A * 2006-04-05 2006-10-12 Hitachi Ltd Video access apparatus and recording medium storing a video access program
CN104184960A * 2014-08-19 2014-12-03 厦门美图之家科技有限公司 Method for applying special-effect processing to a video file
JP2015219817A * 2014-05-20 2015-12-07 オリンパス株式会社 Display apparatus, display method, and program
WO2016124095A1 * 2015-02-04 2016-08-11 腾讯科技(深圳)有限公司 Video generation method, apparatus, and terminal
CN107888988A * 2017-11-17 2018-04-06 广东小天才科技有限公司 Video clipping method and electronic device
US20180176481A1 * 2016-12-21 2018-06-21 Samsung Electronics Co., Ltd. Method for producing media file and electronic device thereof
CN111866404A * 2019-04-25 2020-10-30 华为技术有限公司 Video editing method and electronic device

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110072070B * 2019-03-18 2021-03-23 华为技术有限公司 Multi-channel video recording method, device, and medium
CN111835986B * 2020-07-09 2021-08-24 腾讯科技(深圳)有限公司 Video editing processing method and apparatus, and electronic device


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP4240011A4

Also Published As

Publication number Publication date
EP4240011A1 (en) 2023-09-06
EP4240011A4 (en) 2024-04-24
US20240106967A1 (en) 2024-03-28
CN115484424A (zh) 2022-12-16

Similar Documents

Publication Publication Date Title
CN113747085B (zh) 拍摄视频的方法和装置
WO2021223500A1 (zh) 一种拍摄方法及设备
CN111654629A (zh) 摄像头切换方法、装置、电子设备及可读存储介质
CN113099146B (zh) 一种视频生成方法、装置及相关设备
WO2022252660A1 (zh) 一种视频拍摄方法及电子设备
CN115002340A (zh) 一种视频处理方法和电子设备
US20230043815A1 (en) Image Processing Method and Electronic Device
CN108513069B (zh) 图像处理方法、装置、存储介质及电子设备
CN115689963B (zh) 一种图像处理方法及电子设备
EP4258632A1 (en) Video processing method and related device
CN114520886A (zh) 一种慢动作录像方法及设备
WO2022083325A1 (zh) 拍照预览方法、电子设备以及存储介质
CN108259767B (zh) 图像处理方法、装置、存储介质及电子设备
EP4273684A1 (en) Photographing method and electronic device
CN115484423A (zh) 一种转场特效添加方法及电子设备
WO2022262537A1 (zh) 一种视频数据的转场处理方法及电子设备
WO2023036007A1 (zh) 一种获取图像的方法及电子设备
CN116723383B (zh) 一种拍摄方法及相关设备
CN115484400B (zh) 一种视频数据处理方法及电子设备
EP4290874A1 (en) Video processing method and electronic device
CN114285963B (zh) 多镜头视频录制方法及相关设备
CN115623319B (zh) 一种拍摄方法及电子设备
CN116055863B (zh) 一种相机的光学图像稳定装置的控制方法及电子设备
CN115484425A (zh) 一种转场特效的确定方法及电子设备
WO2024082863A1 (zh) 图像处理方法及电子设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22824018

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2022824018

Country of ref document: EP

Effective date: 20230530

WWE Wipo information: entry into national phase

Ref document number: 18257018

Country of ref document: US

NENP Non-entry into the national phase

Ref country code: DE