WO2022068511A1 - Video generation method and electronic device - Google Patents

Video generation method and electronic device

Info

Publication number
WO2022068511A1
Authority
WO
WIPO (PCT)
Prior art keywords
video
electronic device
scene type
segment
scene
Prior art date
Application number
PCT/CN2021/116047
Other languages
French (fr)
Chinese (zh)
Inventor
张韵叠
苏达
陈绍君
胡靓
徐迎庆
徐千尧
郭子淳
高家思
周雪怡
Original Assignee
华为技术有限公司
清华大学
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司 and 清华大学
Publication of WO2022068511A1


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/75Clustering; Classification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/04817Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance using icons
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/451Execution arrangements for user interfaces
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M1/00Substation equipment, e.g. for use by subscribers
    • H04M1/72Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72469User interfaces specially adapted for cordless or mobile telephones for operating the device by selecting functions from two or more displayed items, e.g. menus or icons
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects

Definitions

  • the embodiments of the present application relate to the field of electronic technologies, and in particular, to a video generation method and an electronic device.
  • the present application provides a video generation method and an electronic device, so as to conveniently and quickly generate a video with a coherent line of sight and high quality, which enhances the camera-movement and cinematic feel of the video and improves the user's experience.
  • the present application provides a video generation method, including: an electronic device displays a first interface of a first application, the first interface including a first control and a second control; after receiving a first operation acting on the first control, the electronic device determines that the arrangement order of a first material, a second material and a third material is a first order, and generates a first video from the first material, the second material and the third material according to the first order; after receiving a second operation acting on the second control, the electronic device determines that the arrangement order of the first material, the second material and the third material is a second order, and generates a second video from the first material, the second material and the third material according to the second order; wherein the first order, the second order and a third order are different from one another.
  • the first material, the second material and the third material are different image materials stored in the electronic device, and the third order is the time sequence in which the first material, the second material and the third material are stored in the electronic device.
  • In the method provided in the first aspect, by identifying the scene type of each material, matching an appropriate video template, adjusting the arrangement order of the materials based on the scene type set for each segment in the video template, and combining the camera movement, speed and transitions set for each segment in the video template, a video with a coherent line of sight and high quality can be generated automatically, without relying on the user's manual editing, which enhances the camera-movement and cinematic feel of the video and improves the user experience.
  • the first video is divided into a plurality of segments with the beat points of the music as dividing lines; the first material, the second material and the third material each appear at least once in the first video, and the materials appearing in any two adjacent segments of the first video are different; likewise, the first material, the second material and the third material each appear at least once in the second video, and the materials appearing in any two adjacent segments of the second video are different. This ensures that the generated videos have a professional camera-movement and cinematic feel.
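The beat-based segmentation and the no-adjacent-repeat constraint described above can be sketched as follows. This is a minimal illustration, not the patent's actual algorithm; the beat times, material names and round-robin assignment strategy are assumptions.

```python
def split_by_beats(beat_times, total_duration):
    """Divide a video timeline into segments, using music beat points as dividing lines."""
    boundaries = [0.0] + sorted(beat_times) + [total_duration]
    return [(s, e) for s, e in zip(boundaries, boundaries[1:]) if e > s]

def assign_materials(segments, materials):
    """Assign one material per segment, round-robin, so that any two adjacent
    segments show different materials; with two or more materials and at least
    as many segments as materials, every material appears at least once."""
    return [materials[i % len(materials)] for i in range(len(segments))]

# Three beat points inside a 4-second timeline give four segments:
segments = split_by_beats([1.0, 2.2, 3.1], total_duration=4.0)
plan = assign_materials(segments, ["material_1", "material_2", "material_3"])
```

With four segments and three materials the round-robin plan repeats one material, but never in two adjacent segments, which is exactly the constraint the passage describes.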
  • the method further includes: the electronic device displays a second interface of the first application; after the electronic device receives a third operation acting on the second interface, it generates the first video from the first material, the second material and the third material. Therefore, the electronic device can generate a video with a coherent line of sight and high quality based on the materials selected by the user.
  • the method further includes: the electronic device determines, from among the first material, the second material, the third material and a fourth material, to generate the first video from the first material, the second material and the third material;
  • the fourth material is an image material stored in the electronic device that is different from the first material, the second material and the third material.
  • the first interface further includes a third control; the method further includes: after the electronic device receives a fourth operation acting on the third control, it displays a third interface, where the third interface includes options of configuration information, and the configuration information includes at least one parameter among duration, filter, frame, material or title; after the electronic device receives a fifth operation acting on the options of the configuration information, it generates, in the first order and based on the configuration information, a third video from the first material, the second material and the third material.
  • the types of videos are enriched, and the user's requirements for adjusting various parameters of the videos are satisfied.
  • the first interface further includes a fourth control; the method further includes: after the electronic device generates the first video, in response to a fourth operation acting on the fourth control, the electronic device saves the first video. Therefore, it is convenient for the user to watch and edit the generated video subsequently.
  • the method specifically includes: the electronic device determines the scene type corresponding to the first material, the scene type corresponding to the second material, and the scene type corresponding to the third material; based on these scene types and the scene type set for each clip in a first video template, the electronic device determines the material matching the scene type corresponding to a first clip, the first clip being any clip in the first video template, and the arrangement order of the materials corresponding to all clips in the first video template is the first order; based on the same scene types and the scene type set for each clip in a second video template, the electronic device determines the material matching the scene type corresponding to a second clip, the second clip being any clip in the second video template, and the arrangement order of the materials corresponding to all clips in the second video template is the second order; wherein the first video template is different from the second video template, each segment in the first video corresponds to a segment in the first video template, and each segment in the second video corresponds to a segment in the second video template.
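As a rough sketch of the template-matching step above: each clip in a template carries a preset scene type, and a material is matched to a clip when its scene type is the same or, failing that, adjacent under a preset rule. The scene names, the cyclic order and the greedy first-match selection here are illustrative assumptions, not the patent's specified algorithm.

```python
SCENE_ORDER = ["close", "medium", "distant"]  # assumed preset cyclic order

def adjacent(scene):
    """Scene type that follows `scene` in the assumed preset cyclic order."""
    return SCENE_ORDER[(SCENE_ORDER.index(scene) + 1) % len(SCENE_ORDER)]

def match_template(template_scenes, materials):
    """For each clip's preset scene type, pick a material with the same scene
    type, or one whose scene type is adjacent to it per the preset rule."""
    order = []
    for clip_scene in template_scenes:
        exact = [m for m, s in materials.items() if s == clip_scene]
        near = [m for m, s in materials.items() if adjacent(s) == clip_scene]
        order.append((exact or near or [None])[0])
    return order

materials = {"m1": "close", "m2": "medium", "m3": "distant"}
# Two different templates yield two different arrangement orders of the same materials:
order_a = match_template(["close", "distant", "medium"], materials)
order_b = match_template(["medium", "close", "distant"], materials)
```

This mirrors how the first and second video templates produce the first and second orders from the same three materials.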
  • the method further includes: the electronic device generates the first video from the first material, the second material and the third material; and the electronic device generates the second video from the first material, the second material and the third material.
  • the method specifically includes: when the scene type corresponding to the first material is the same as the scene type corresponding to the first clip, or the scene type corresponding to the first material is adjacent, in the sequence defined by a preset rule, to the scene type corresponding to the first clip, the electronic device determines the first material as the material matching the scene type corresponding to the first clip; when the scene type corresponding to the first material is the same as the scene type corresponding to the second clip, or the scene type corresponding to the first material is adjacent, according to the preset rule, to the scene type corresponding to the second clip, the electronic device determines the first material as the material matching the scene type corresponding to the second clip.
  • the method specifically includes: when the scene type corresponding to the fourth material is the same as the scene type corresponding to the first clip, or the scene type corresponding to the fourth material is adjacent, according to the preset rule, to the scene type corresponding to the first clip, and the duration of the fourth material is equal to the duration of the first clip, the electronic device intercepts the fourth material from the first material and determines the fourth material as the material matching the scene type corresponding to the first clip; when the scene type corresponding to the fourth material is the same as the scene type corresponding to the second clip, or the scene type corresponding to the fourth material is adjacent, according to the preset rule, to the scene type corresponding to the second clip, and the duration of the fourth material is equal to the duration of the second clip, the electronic device intercepts the fourth material from the first material and determines the fourth material as the material matching the scene type corresponding to the second clip; wherein the fourth material is part or all of the first material.
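The "intercepting" of a fourth material from a longer first material, so that its duration equals the target clip's duration, can be sketched as a simple cut operation. The function name and start-offset parameter are assumptions for illustration.

```python
def intercept(material_duration, clip_duration, start=0.0):
    """Cut a sub-segment (the "fourth material") out of a longer material so
    that its duration exactly equals the target clip's duration.
    Returns the (start, end) time span of the cut, in seconds."""
    if start + clip_duration > material_duration:
        raise ValueError("material too short to fill this clip from the given start")
    return (start, start + clip_duration)

# A 10-second first material trimmed to fill a 3.5-second clip:
span = intercept(material_duration=10.0, clip_duration=3.5)
```

The same call with a nonzero `start` would take the cut from the middle of the material instead of its beginning.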
  • the scene types include: close view, medium view and distant view; according to the preset rule, the scene type adjacent to the close view is the medium view, the scene type adjacent to the medium view is the distant view, and the scene type adjacent to the distant view is the close view.
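The cyclic adjacency rule among scene types can be written down directly. The passage above states only part of the cycle explicitly, so the full close → medium → distant → close mapping below is an assumption:

```python
# Assumed preset cyclic adjacency among scene types: close → medium → distant → close.
NEXT_SCENE = {"close": "medium", "medium": "distant", "distant": "close"}

def is_adjacent(a, b):
    """True if scene type b is adjacent to scene type a under the preset rule."""
    return NEXT_SCENE[a] == b
```

Under this rule, for example, a distant-view material is adjacent to a close-view clip and may therefore fill it, while a close-view material is not adjacent to a distant-view clip.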
  • the first application is a gallery application of the electronic device.
  • the present application provides an electronic device, including a memory and a processor; the memory is used to store program instructions; the processor is used to call the program instructions in the memory to enable the electronic device to execute the video generation method in the first aspect and any possible design of the first aspect.
  • the present application provides a chip system, which is applied to an electronic device including a memory, a display screen and a sensor; the chip system includes a processor; when the processor executes computer instructions stored in the memory, the electronic device executes the video generation method in the first aspect and any possible design of the first aspect.
  • the present application provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by the processor, the electronic device implements the video generation method in the first aspect and any possible design of the first aspect.
  • the present application provides a computer program product, including execution instructions stored in a readable storage medium; at least one processor of an electronic device can read the execution instructions from the readable storage medium, and executing the execution instructions enables the electronic device to implement the video generation method in the first aspect and any possible design of the first aspect.
  • FIG. 1 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
  • FIG. 2 is a block diagram of a software structure of an electronic device provided by an embodiment of the present application.
  • FIGS. 3A-3T are schematic diagrams of human-computer interaction interfaces provided by an embodiment of the present application.
  • FIGS. 4A-4J are schematic diagrams of the effect of applying camera movements to a picture material provided by an embodiment of the present application.
  • FIG. 5 is a schematic diagram of the effect of using different speeds for a picture material provided by an embodiment of the present application.
  • FIG. 6 is a schematic diagram of the effect of using a transition for a picture material provided by an embodiment of the present application.
  • FIG. 7 is a schematic diagram of a scene type of a character type material provided by an embodiment of the present application.
  • FIGS. 8A-8E are schematic diagrams of playback of a video generated based on materials provided by an embodiment of the application.
  • FIG. 9 is a schematic diagram of a video generation method provided by an embodiment of the present application.
  • “at least one” refers to one or more, and “multiple” refers to two or more.
  • “And/or”, which describes the association relationship of the associated objects, indicates that there can be three kinds of relationships, for example, A and/or B, which can indicate: the existence of A alone, the existence of A and B at the same time, and the existence of B alone, where A, B can be singular or plural.
  • the character “/” generally indicates that the associated objects are an “or” relationship.
  • “At least one item(s) below” or similar expressions thereof refer to any combination of these items, including any combination of single item(s) or plural items(s).
  • For example, "at least one of a, b or c" can mean: a alone, b alone, c alone, a combination of a and b, a combination of a and c, a combination of b and c, or a combination of a, b and c, where a, b and c can be singular or plural.
  • first and second are used for descriptive purposes only and should not be construed to indicate or imply relative importance.
  • the electronic device 100 may include a processor 110 , an external memory interface 120 , an internal memory 121 , a universal serial bus (USB) interface 130 , a charging management module 140 , a power management module 141 , and a battery 142 , Antenna 1, Antenna 2, Mobile Communication Module 150, Wireless Communication Module 160, Audio Module 170, Speaker 170A, Receiver 170B, Microphone 170C, Headphone Interface 170D, Sensor Module 180, Key 190, Motor 191, Indicator 192, Camera 193 , a display screen 194, and a subscriber identification module (subscriber identification module, SIM) card interface 195 and the like.
  • the sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, etc.
  • the structures illustrated in this application do not constitute a specific limitation on the electronic device 100 .
  • the electronic device 100 may include more or fewer components than shown, or some components may be combined, or some components may be split, or a different arrangement of components.
  • the illustrated components may be implemented in hardware, software, or a combination of software and hardware.
  • the processor 110 may include one or more processing units; for example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc. Different processing units may be independent devices, or may be integrated in one or more processors.
  • the controller may be the nerve center and command center of the electronic device 100 .
  • the controller can generate an operation control signal according to the instruction operation code and timing signal, and complete the control of fetching and executing instructions.
  • a memory may also be provided in the processor 110 for storing instructions and data.
  • the memory in processor 110 is cache memory. This memory may hold instructions or data that have just been used or recycled by the processor 110 . If the processor 110 needs to use the instruction or data again, it can be called directly from the memory. Repeated accesses are avoided and the latency of the processor 110 is reduced, thereby increasing the efficiency of the system.
  • the processor 110 may include one or more interfaces.
  • the interface may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, etc.
  • the I2C interface is a bidirectional synchronous serial bus that includes a serial data line (SDA) and a serial clock line (SCL).
  • the processor 110 may contain multiple sets of I2C buses.
  • the processor 110 can be respectively coupled to the touch sensor 180K, the charger, the flash, the camera 193, etc. through different I2C bus interfaces.
  • the processor 110 may couple the touch sensor 180K through the I2C interface, so that the processor 110 and the touch sensor 180K communicate with each other through the I2C bus interface, so as to realize the touch function of the electronic device 100 .
  • the I2S interface can be used for audio communication.
  • the processor 110 may contain multiple sets of I2S buses.
  • the processor 110 may be coupled with the audio module 170 through an I2S bus to implement communication between the processor 110 and the audio module 170 .
  • the audio module 170 can transmit audio signals to the wireless communication module 160 through the I2S interface, so as to realize the function of answering calls through a Bluetooth headset.
  • the PCM interface can also be used for audio communications, sampling, quantizing and encoding analog signals.
  • the audio module 170 and the wireless communication module 160 may be coupled through a PCM bus interface.
  • the audio module 170 can also transmit audio signals to the wireless communication module 160 through the PCM interface, so as to realize the function of answering calls through the Bluetooth headset. Both the I2S interface and the PCM interface can be used for audio communication.
  • the UART interface is a universal serial data bus used for asynchronous communication.
  • the bus may be a bidirectional communication bus. It converts the data to be transmitted between serial communication and parallel communication.
  • a UART interface is typically used to connect the processor 110 with the wireless communication module 160 .
  • the processor 110 communicates with the Bluetooth module in the wireless communication module 160 through the UART interface to implement the Bluetooth function.
  • the audio module 170 can transmit audio signals to the wireless communication module 160 through the UART interface, so as to realize the function of playing music through the Bluetooth headset.
  • the MIPI interface can be used to connect the processor 110 with peripheral devices such as the display screen 194 and the camera 193 .
  • MIPI interfaces include camera serial interface (CSI), display serial interface (DSI), etc.
  • the processor 110 communicates with the camera 193 through a CSI interface, so as to realize the photographing function of the electronic device 100 .
  • the processor 110 communicates with the display screen 194 through the DSI interface to implement the display function of the electronic device 100 .
  • the GPIO interface can be configured by software.
  • the GPIO interface can be configured as a control signal or as a data signal.
  • the GPIO interface may be used to connect the processor 110 with the camera 193, the display screen 194, the wireless communication module 160, the audio module 170, the sensor module 180, and the like.
  • the GPIO interface can also be configured as I2C interface, I2S interface, UART interface, MIPI interface, etc.
  • the USB interface 130 is an interface that conforms to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type C interface, and the like.
  • the USB interface 130 can be used to connect a charger to charge the electronic device 100, and can also be used to transmit data between the electronic device 100 and peripheral devices. It can also be used to connect headphones to play audio through the headphones.
  • the interface can also be used to connect other electronic devices, such as AR devices.
  • the interface connection relationship between the modules illustrated in this application is only a schematic illustration, and does not constitute a structural limitation of the electronic device 100 .
  • the electronic device 100 may also adopt different interface connection manners in the foregoing embodiments, or a combination of multiple interface connection manners.
  • the charging management module 140 is used to receive charging input from the charger.
  • the charger may be a wireless charger or a wired charger.
  • the charging management module 140 may receive charging input from the wired charger through the USB interface 130 .
  • the charging management module 140 may receive wireless charging input through a wireless charging coil of the electronic device 100 . While the charging management module 140 charges the battery 142 , it can also supply power to the electronic device through the power management module 141 .
  • the power management module 141 is used for connecting the battery 142 , the charging management module 140 and the processor 110 .
  • the power management module 141 receives input from the battery 142 and/or the charging management module 140 and supplies power to the processor 110 , the internal memory 121 , the external memory, the display screen 194 , the camera 193 , and the wireless communication module 160 .
  • the power management module 141 can also be used to monitor parameters such as battery capacity, battery cycle times, battery health status (leakage, impedance).
  • the power management module 141 may also be provided in the processor 110 .
  • the power management module 141 and the charging management module 140 may also be provided in the same device.
  • the wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modulation and demodulation processor, the baseband processor, and the like.
  • Antenna 1 and Antenna 2 are used to transmit and receive electromagnetic wave signals.
  • Each antenna in electronic device 100 may be used to cover a single or multiple communication frequency bands. Different antennas can also be reused to improve antenna utilization.
  • the antenna 1 can be multiplexed as a diversity antenna of the wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
  • the mobile communication module 150 may provide wireless communication solutions including 2G/3G/4G/5G etc. applied on the electronic device 100 .
  • the mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (LNA) and the like.
  • the mobile communication module 150 can receive electromagnetic waves from the antenna 1, filter and amplify the received electromagnetic waves, and transmit them to the modulation and demodulation processor for demodulation.
  • the mobile communication module 150 can also amplify the signal modulated by the modulation and demodulation processor, and then turn it into an electromagnetic wave for radiation through the antenna 1 .
  • at least part of the functional modules of the mobile communication module 150 may be provided in the processor 110 .
  • at least part of the functional modules of the mobile communication module 150 may be provided in the same device as at least part of the modules of the processor 110 .
  • the modem processor may include a modulator and a demodulator.
  • the modulator is used to modulate the low frequency baseband signal to be sent into a medium and high frequency signal.
  • the demodulator is used to demodulate the received electromagnetic wave signal into a low frequency baseband signal. Then the demodulator transmits the demodulated low-frequency baseband signal to the baseband processor for processing.
  • the low frequency baseband signal is processed by the baseband processor and passed to the application processor.
  • the application processor outputs sound signals through audio devices (not limited to the speaker 170A, the receiver 170B, etc.), or displays images or videos through the display screen 194 .
  • the modem processor may be a stand-alone device.
  • the modem processor may be independent of the processor 110, and may be provided in the same device as the mobile communication module 150 or other functional modules.
  • the wireless communication module 160 can provide wireless communication solutions applied on the electronic device 100, including wireless local area network (WLAN) (such as a wireless fidelity (Wi-Fi) network), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR) technology, and the like.
  • the wireless communication module 160 may be one or more devices integrating at least one communication processing module.
  • the wireless communication module 160 receives electromagnetic waves via the antenna 2 , frequency modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110 .
  • the wireless communication module 160 can also receive the signal to be sent from the processor 110 , perform frequency modulation on it, amplify it, and convert it into electromagnetic waves for radiation through the antenna 2 .
  • the antenna 1 of the electronic device 100 is coupled with the mobile communication module 150, and the antenna 2 is coupled with the wireless communication module 160, so that the electronic device 100 can communicate with the network and other devices through wireless communication technology.
  • the wireless communication technology may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), broadband Code Division Multiple Access (WCDMA), Time Division Code Division Multiple Access (TD-SCDMA), Long Term Evolution (LTE), BT, GNSS, WLAN, NFC , FM, and/or IR technology, etc.
  • the GNSS may include a global positioning system (global positioning system, GPS), a global navigation satellite system (GLONASS), a Beidou navigation satellite system (BDS), a quasi-zenith satellite system (quasi -zenith satellite system, QZSS) and/or satellite based augmentation systems (SBAS).
  • the electronic device 100 implements a display function through a GPU, a display screen 194, an application processor, and the like.
  • the GPU is a microprocessor for image processing, and is connected to the display screen 194 and the application processor.
  • the GPU is used to perform mathematical and geometric calculations for graphics rendering.
  • Processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
  • Display screen 194 is used to display images, videos, and the like.
  • Display screen 194 includes a display panel.
  • the display panel can be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), and so on.
  • the electronic device 100 may include one or N display screens 194 , where N is a positive integer greater than one.
  • the electronic device 100 may implement a shooting function through an ISP, a camera 193, a video codec, a GPU, a display screen 194, an application processor, and the like.
  • the ISP is used to process the data fed back by the camera 193 .
  • When the shutter is opened, light is transmitted through the lens to the camera photosensitive element, which converts the optical signal into an electrical signal and transmits the electrical signal to the ISP for processing, where it is converted into an image visible to the naked eye.
  • ISP can also perform algorithm optimization on image noise, brightness, and skin tone.
  • ISP can also optimize the exposure, color temperature and other parameters of the shooting scene.
  • the ISP may be provided in the camera 193 .
  • Camera 193 is used to capture still images or video.
  • When shooting, an optical image of the object is generated through the lens and projected onto the photosensitive element.
  • the photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
  • the photosensitive element converts the optical signal into an electrical signal, and then transmits the electrical signal to the ISP to convert it into a digital image signal.
  • the ISP outputs the digital image signal to the DSP for processing.
  • DSP converts digital image signals into standard RGB, YUV and other formats of image signals.
  • the electronic device 100 may include 1 or N cameras 193 , where N is a positive integer greater than 1.
  • a digital signal processor is used to process digital signals, in addition to processing digital image signals, it can also process other digital signals. For example, when the electronic device 100 selects a frequency point, the digital signal processor is used to perform Fourier transform on the frequency point energy and so on.
  • Video codecs are used to compress or decompress digital video.
  • the electronic device 100 may support one or more video codecs.
  • the electronic device 100 can play or record videos in various encoding formats, such as moving picture experts group (MPEG)-1, MPEG-2, MPEG-3, MPEG-4, and so on.
  • the NPU is a neural-network (NN) computing processor.
  • Applications such as intelligent cognition of the electronic device 100 can be implemented through the NPU, such as image recognition, face recognition, speech recognition, text understanding, and the like.
  • the external memory interface 120 can be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the electronic device 100 .
  • the external memory card communicates with the processor 110 through the external memory interface 120 to realize the data storage function. For example, files such as music and videos are saved in the external memory card.
  • Internal memory 121 may be used to store computer executable program code, which includes instructions.
  • the processor 110 executes various functional applications and data processing of the electronic device 100 by executing the instructions stored in the internal memory 121 .
  • the internal memory 121 may include a storage program area and a storage data area.
  • the storage program area can store an operating system, an application program required for at least one function (such as a sound playback function, an image playback function, etc.), and the like.
  • the storage data area may store data (such as audio data, phone book, etc.) created during the use of the electronic device 100 and the like.
  • the internal memory 121 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, universal flash storage (UFS), and the like.
  • the electronic device 100 may implement audio functions through an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, an application processor, and the like. Such as music playback, recording, etc.
  • the audio module 170 is used for converting digital audio information into analog audio signal output, and also for converting analog audio input into digital audio signal. Audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be provided in the processor 110 , or some functional modules of the audio module 170 may be provided in the processor 110 .
  • Speaker 170A, also referred to as a "loudspeaker", is used to convert audio electrical signals into sound signals.
  • the electronic device 100 can listen to music through the speaker 170A, or listen to a hands-free call.
  • the receiver 170B, also referred to as an "earpiece", is used to convert audio electrical signals into sound signals.
  • the voice can be answered by placing the receiver 170B close to the human ear.
  • the microphone 170C, also called a "mic" or "mouthpiece", is used to convert sound signals into electrical signals.
  • When making a call or sending a voice message, the user can input a sound signal into the microphone 170C by speaking close to it.
  • the electronic device 100 may be provided with at least one microphone 170C. In other embodiments, the electronic device 100 may be provided with two microphones 170C, which can implement a noise reduction function in addition to collecting sound signals. In other embodiments, the electronic device 100 may further be provided with three, four or more microphones 170C to collect sound signals, reduce noise, identify sound sources, and implement directional recording functions.
  • the earphone jack 170D is used to connect wired earphones.
  • the earphone interface 170D can be the USB interface 130, or can be a 3.5mm open mobile terminal platform (OMTP) standard interface, or a Cellular Telecommunications Industry Association of the USA (CTIA) standard interface.
  • the pressure sensor 180A is used to sense pressure signals, and can convert the pressure signals into electrical signals.
  • the pressure sensor 180A may be provided on the display screen 194 .
  • the capacitive pressure sensor may be comprised of at least two parallel plates of conductive material. When a force is applied to the pressure sensor 180A, the capacitance between the electrodes changes.
  • the electronic device 100 determines the intensity of the pressure according to the change in capacitance. When a touch operation acts on the display screen 194, the electronic device 100 detects the intensity of the touch operation according to the pressure sensor 180A.
  • the electronic device 100 may also calculate the touched position according to the detection signal of the pressure sensor 180A.
  • touch operations acting on the same touch position but with different touch operation intensities may correspond to different operation instructions. For example, when a touch operation whose intensity is less than the first pressure threshold acts on the short message application icon, the instruction for viewing the short message is executed. When a touch operation with a touch operation intensity greater than or equal to the first pressure threshold acts on the short message application icon, the instruction to create a new short message is executed.
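The intensity-dependent dispatch described above can be sketched as follows. This is a minimal illustration, not the embodiment's implementation; the function name and threshold value are hypothetical, since the patent does not specify the first pressure threshold.

```python
FIRST_PRESSURE_THRESHOLD = 0.6  # normalized pressure; illustrative value only

def handle_message_icon_touch(pressure: float) -> str:
    """Map the touch intensity on the short-message icon to an instruction."""
    if pressure < FIRST_PRESSURE_THRESHOLD:
        return "view_short_message"          # light touch: view the message
    return "create_new_short_message"        # firm touch: compose a new one
```

The same position thus yields different instructions purely as a function of reported pressure.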
  • the gyro sensor 180B may be used to determine the motion attitude of the electronic device 100 .
  • In some embodiments, the angular velocity of the electronic device 100 about three axes (i.e., the x, y, and z axes) may be determined by the gyro sensor 180B.
  • the gyro sensor 180B can be used for image stabilization.
  • the gyro sensor 180B detects the shaking angle of the electronic device 100, calculates the distance that the lens module needs to compensate according to the angle, and allows the lens to offset the shaking of the electronic device 100 through reverse motion to achieve anti-shake.
  • the gyro sensor 180B can also be used for navigation and somatosensory game scenarios.
  • the air pressure sensor 180C is used to measure air pressure.
  • the electronic device 100 calculates the altitude through the air pressure value measured by the air pressure sensor 180C to assist in positioning and navigation.
  • the magnetic sensor 180D includes a Hall sensor.
  • the electronic device 100 can detect the opening and closing of the flip holster using the magnetic sensor 180D.
  • the electronic device 100 can detect the opening and closing of the flip cover according to the magnetic sensor 180D. Further, according to the detected opening and closing state of the leather case or the flip cover, characteristics such as automatic unlocking upon flipping open are set.
  • the acceleration sensor 180E can detect the magnitude of the acceleration of the electronic device 100 in various directions (generally three axes).
  • When the electronic device 100 is stationary, the magnitude and direction of gravity can be detected. The acceleration sensor can also be used to identify the posture of the electronic device, and can be applied to landscape/portrait switching, pedometers, and the like.
  • The distance sensor 180F is used to measure distance. The electronic device 100 can measure distance through infrared or laser. In some embodiments, when shooting a scene, the electronic device 100 can use the distance sensor 180F to measure distance to achieve fast focusing.
  • Proximity light sensor 180G may include, for example, light emitting diodes (LEDs) and light detectors, such as photodiodes.
  • the light emitting diodes may be infrared light emitting diodes.
  • the electronic device 100 emits infrared light to the outside through the light emitting diode.
  • Electronic device 100 uses photodiodes to detect infrared reflected light from nearby objects. When sufficient reflected light is detected, it can be determined that there is an object near the electronic device 100 . When insufficient reflected light is detected, the electronic device 100 may determine that there is no object near the electronic device 100 .
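The reflected-light decision above reduces to a simple threshold test. The sketch below is illustrative only; the threshold value and function names are hypothetical, as the patent leaves the "sufficient reflected light" criterion unspecified.

```python
REFLECTION_THRESHOLD = 0.5  # hypothetical normalized reflected-light intensity

def object_nearby(reflected_light: float) -> bool:
    """True when enough infrared reflection indicates an object near the device."""
    return reflected_light >= REFLECTION_THRESHOLD

def should_turn_off_screen(in_call: bool, reflected_light: float) -> bool:
    """Screen-off power saving: device held to the ear during a call."""
    return in_call and object_nearby(reflected_light)
```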
  • the electronic device 100 can use the proximity light sensor 180G to detect that the user holds the electronic device 100 close to the ear to talk, so as to automatically turn off the screen to save power.
  • The proximity light sensor 180G can also be used in holster mode and pocket mode to automatically unlock and lock the screen.
  • the ambient light sensor 180L is used to sense ambient light brightness.
  • the electronic device 100 can adaptively adjust the brightness of the display screen 194 according to the perceived ambient light brightness.
  • the ambient light sensor 180L can also be used to automatically adjust the white balance when taking pictures.
  • the ambient light sensor 180L can also cooperate with the proximity light sensor 180G to detect whether the electronic device 100 is in a pocket, so as to prevent accidental touch.
  • the fingerprint sensor 180H is used to collect fingerprints.
  • the electronic device 100 can use the collected fingerprint characteristics to realize fingerprint unlocking, accessing application locks, taking pictures with fingerprints, answering incoming calls with fingerprints, and the like.
  • the temperature sensor 180J is used to detect the temperature.
  • the electronic device 100 uses the temperature detected by the temperature sensor 180J to execute a temperature processing strategy. For example, when the temperature reported by the temperature sensor 180J exceeds a threshold value, the electronic device 100 reduces the performance of the processor located near the temperature sensor 180J in order to reduce power consumption and implement thermal protection.
  • the electronic device 100 when the temperature is lower than another threshold, the electronic device 100 heats the battery 142 to avoid abnormal shutdown of the electronic device 100 caused by the low temperature.
  • In some other embodiments, when the temperature is lower than yet another threshold, the electronic device 100 boosts the output voltage of the battery 142 to avoid abnormal shutdown caused by low temperature.
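The temperature processing strategy above is a set of threshold-triggered actions. The sketch below illustrates that structure only; the three threshold values are hypothetical, since the patent does not disclose them.

```python
HIGH_TEMP_C = 45.0        # hypothetical: throttle the nearby processor above this
LOW_TEMP_C = 0.0          # hypothetical: heat the battery below this
CRITICAL_LOW_TEMP_C = -10.0  # hypothetical: also boost battery output voltage

def thermal_policy(temp_c: float) -> list:
    """Collect the protective actions triggered at a reported temperature."""
    actions = []
    if temp_c > HIGH_TEMP_C:
        actions.append("throttle_nearby_processor")   # reduce power, thermal protection
    if temp_c < LOW_TEMP_C:
        actions.append("heat_battery")                # avoid low-temperature shutdown
    if temp_c < CRITICAL_LOW_TEMP_C:
        actions.append("boost_battery_output_voltage")
    return actions
```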
  • Touch sensor 180K is also called a "touch panel".
  • the touch sensor 180K may be disposed on the display screen 194 , and the touch sensor 180K and the display screen 194 form a touch screen, also called a “touch screen”.
  • the touch sensor 180K is used to detect a touch operation on or near it.
  • the touch sensor can pass the detected touch operation to the application processor to determine the type of touch event.
  • Visual output related to touch operations may be provided through display screen 194 .
  • the touch sensor 180K may also be disposed on the surface of the electronic device 100 at a different location than the display screen 194.
  • the bone conduction sensor 180M can acquire vibration signals.
  • the bone conduction sensor 180M can acquire the vibration signal of the vibrating bone of the human vocal part.
  • the bone conduction sensor 180M can also be in contact with the human pulse and receive the blood pressure pulse signal.
  • the bone conduction sensor 180M can also be disposed in an earphone to form a bone conduction earphone.
  • the audio module 170 can analyze the voice signal based on the vibration signal of the vocal vibration bone block obtained by the bone conduction sensor 180M, so as to realize the voice function.
  • the application processor can analyze the heart rate information based on the blood pressure beat signal obtained by the bone conduction sensor 180M, and realize the function of heart rate detection.
  • the keys 190 include a power key, a volume key, and the like. The keys 190 may be mechanical keys or touch keys.
  • the electronic device 100 may receive key inputs and generate key signal inputs related to user settings and function control of the electronic device 100 .
  • Motor 191 can generate vibrating cues.
  • the motor 191 can be used for vibrating alerts for incoming calls, and can also be used for touch vibration feedback.
  • touch operations acting on different applications can correspond to different vibration feedback effects.
  • the motor 191 can also correspond to different vibration feedback effects for touch operations on different areas of the display screen 194 .
  • Touch operations in different application scenarios (for example, time reminders, receiving messages, alarm clocks, games, etc.) may also correspond to different vibration feedback effects.
  • the touch vibration feedback effect can also support customization.
  • the indicator 192 can be an indicator light, which can be used to indicate the charging state and changes in battery level, and can also be used to indicate a message, a missed call, a notification, and the like.
  • the SIM card interface 195 is used to connect a SIM card.
  • the SIM card can be contacted and separated from the electronic device 100 by inserting into the SIM card interface 195 or pulling out from the SIM card interface 195 .
  • the electronic device 100 may support 1 or N SIM card interfaces, where N is a positive integer greater than 1.
  • the SIM card interface 195 can support Nano SIM card, Micro SIM card, SIM card and so on. Multiple cards can be inserted into the same SIM card interface 195 at the same time. The types of the plurality of cards may be the same or different.
  • the SIM card interface 195 can also be compatible with different types of SIM cards.
  • the SIM card interface 195 is also compatible with external memory cards.
  • the electronic device 100 interacts with the network through the SIM card to realize functions such as call and data communication.
  • the electronic device 100 employs an eSIM, i.e., an embedded SIM card.
  • the eSIM card can be embedded in the electronic device 100 and cannot be separated from the electronic device 100 .
  • the software system of the electronic device 100 may adopt a layered architecture, an event-driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture.
  • the embodiments of the present application take an Android system with a layered architecture as an example to exemplarily describe the software structure of the electronic device 100 .
  • the embodiment of the present application does not limit the type of the operating system of the electronic device. For example, Android system, Linux system, Windows system, iOS system, Hongmeng operating system (harmony operating system, Hongmeng OS), etc.
  • FIG. 2 is a block diagram of a software structure of an electronic device according to an embodiment of the present application.
  • the layered architecture divides the software into several layers, and each layer has a clear role and division of labor. Layers communicate with each other through software interfaces.
  • the Android system is divided into four layers; from top to bottom, they are the application layer (APP), the application framework layer (APP framework), the Android runtime together with the system libraries, and the kernel layer (kernel).
  • the application layer can include a series of application packages.
  • the application packages can include camera, gallery, calendar, call, map, navigation, WLAN, Bluetooth, music, video, game, chat, shopping, travel, instant messaging (such as SMS), smart home, device control, and other applications (APPs).
  • household equipment may include lights, televisions, and air conditioners.
  • household equipment may also include anti-theft door locks, speakers, sweeping robots, sockets, body fat scales, desk lamps, air purifiers, refrigerators, washing machines, water heaters, microwave ovens, rice cookers, curtains, fans, TVs, set-top boxes, windows, etc.
  • the application package may also include: home screen (ie desktop), negative screen, control center, notification center and other applications.
  • the negative one screen, also known as the "-1 screen", refers to the user interface (UI) reached by sliding rightward on the home screen of the electronic device until the leftmost screen is displayed.
  • the negative one screen can be used to place quick service functions and notification messages, such as global search, quick entries to specific pages of applications (payment code, WeChat, etc.), instant information and reminders (courier information, expense information, commuting road conditions, taxi travel information, schedule information, etc.), and followed updates (football match information, basketball match information, stock information, etc.).
  • the control center is the slide-up message notification bar of the electronic device, that is, the user interface displayed by the electronic device when the user starts to perform the slide-up operation at the bottom of the electronic device.
  • the notification center is a drop-down message notification bar of the electronic device, that is, a user interface displayed by the electronic device when the user starts to perform downward operations on the top of the electronic device.
  • the application framework layer provides an application programming interface (application programming interface, API) and a programming framework for applications in the application layer.
  • the application framework layer includes some predefined functions.
  • the application framework layer may include window managers, content providers, view systems, telephony managers, resource managers, notification managers, and the like.
  • Window manager is used to manage window programs, such as managing window state, properties, view addition, deletion, update, window order, message collection and processing, etc.
  • the window manager can obtain the size of the display screen, determine whether there is a status bar, lock the screen, take screenshots, and so on. In addition, the window manager is the entry through which the outside world accesses windows.
  • Content providers are used to store and retrieve data and make this data accessible to applications.
  • the data may include video, images, audio, calls made and received, browsing history and bookmarks, phone book, etc.
  • the view system includes visual controls, such as controls for displaying text, controls for displaying pictures, and so on. View systems can be used to build applications.
  • a display interface can consist of one or more views.
  • the display interface including the short message notification icon may include a view for displaying text and a view for displaying pictures.
  • the phone manager is used to provide the communication function of the electronic device 100 .
  • For example, the telephony manager manages call status (including connecting, hanging up, etc.).
  • the resource manager provides various resources for the application, such as localization strings, icons, pictures, layout files, video files and so on.
  • the notification manager enables applications to display notification information in the status bar, which can be used to convey notification-type messages, and can disappear automatically after a brief pause without user interaction. For example, the notification manager is used to notify download completion, message reminders, etc.
  • the notification manager can also display notifications in the status bar at the top of the system in the form of graphs or scroll bar text, such as notifications of applications running in the background, and notifications on the screen in the form of dialog windows. For example, text information is prompted in the status bar, a prompt sound is issued, the electronic device vibrates, and the indicator light flashes.
  • the Android runtime includes core libraries and a virtual machine.
  • the Android runtime is responsible for the scheduling and management of the Android system.
  • the core library consists of two parts: one part is the functions that the Java language needs to call, and the other is the core library of the Android system.
  • the application layer and the application framework layer run in virtual machines.
  • the virtual machine executes the Java files of the application layer and the application framework layer as binary files.
  • the virtual machine is used to perform functions such as object lifecycle management, stack management, thread management, safety and exception management, and garbage collection.
  • a system library can include multiple functional modules, for example: a surface manager, a media library, a 3D graphics processing library (e.g., OpenGL ES), and a 2D graphics engine (e.g., SGL).
  • the Surface Manager is used to manage the display subsystem and provides a fusion of 2D and 3D layers for multiple applications.
  • the media library supports playback and recording of a variety of commonly used audio and video formats, as well as still image files.
  • the media library can support a variety of audio and video encoding formats, such as: MPEG4, H.264, MP3, AAC, AMR, JPG, PNG, etc.
  • the 3D graphics processing library is used to implement 3D graphics drawing, image rendering, compositing, and layer processing.
  • 2D graphics engine is a drawing engine for 2D drawing.
  • the kernel layer is the layer between hardware and software.
  • the kernel layer contains at least display drivers, camera drivers, audio drivers, and sensor drivers.
  • the software and hardware workflows of the electronic device 100 are exemplarily described below in conjunction with a scenario in which a smart speaker is used to play sound.
  • When the touch sensor 180K receives a touch operation, a corresponding hardware interrupt is sent to the kernel layer.
  • the kernel layer processes touch operations into raw input events (including touch coordinates, timestamps of touch operations, etc.). Raw input events are stored at the kernel layer.
  • the application framework layer obtains the original input event from the kernel layer and identifies the control corresponding to the input event. Taking a touch click operation whose corresponding control is the smart speaker icon as an example: the smart speaker application calls the interface of the application framework layer to start the application, then calls the kernel layer to start the audio driver, and the audio electrical signal is converted into a sound signal through the speaker 170A.
  • the structures illustrated in this application do not constitute a specific limitation on the electronic device 100 .
  • the electronic device 100 may include more or fewer components than shown, or some components may be combined, or some components may be split, or a different arrangement of components.
  • the illustrated components may be implemented in hardware, software, or a combination of software and hardware.
  • Embodiments of the present application provide a method and electronic device for generating a video.
  • The electronic device identifies the scene type of the material, matches a suitable video template, adjusts the arrangement order of the material based on the scene types set in the video template, and automatically generates a video in combination with the camera movement, speed, and transitions set in the video template. The generated video thus has a coherent line of sight and high quality, which enhances the sense of cinematography of the video and improves the user experience. The user can also adjust parameters such as video duration, filters, and frames to meet actual needs and enrich the types of videos.
  • the electronic device may be a mobile phone, a tablet computer, a wearable device, a vehicle-mounted device, an augmented reality (AR)/virtual reality (VR) device, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a personal digital assistant (PDA), a smart TV, a smart screen, a high-definition TV, a 4K TV, a smart speaker, a smart projector, etc.
  • the embodiments of this application do not impose any restrictions on the specific type of the electronic device.
  • the material can be understood as the picture material or video material stored in the electronic device. It should be noted that the picture material mentioned in the embodiments of the present application has the same meaning as the photo material.
  • the picture material may be photographed by an electronic device, downloaded by the electronic device from a server, or received by the electronic device from other electronic devices, which is not limited in this embodiment of the present application.
  • The scene type (shot scale) can be understood as the difference in the size of the range that the subject presents in the picture, caused by the difference in distance between the photographing device and the subject.
  • the photographing device may be the electronic device itself, or may be a device communicatively connected to the electronic device, which is not limited in this embodiment of the present application.
  • the division of scene types may be implemented in various manners. It should be noted that the scenes mentioned in the embodiments of the present application refer to scene types (shot scales).
  • the scene types can be divided into three categories; from near to far, they are close shot, medium shot, and long shot.
  • a close shot shows the human body above the chest
  • a medium shot shows the human body above the thigh
  • a long shot covers situations other than close and medium shots.
  • alternatively, the scene types may be divided into five categories; from near to far, they are close-up, close shot, medium shot, panorama, and long shot.
  • a close-up shows the human body above the shoulders
  • a close shot shows the human body above the chest
  • a medium shot shows the human body above the knees
  • a panorama shows the entire human body and part of the surrounding environment
  • a long shot mainly shows the environment where the subject is located.
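The five-way taxonomy above can be captured as a small enumeration. This is an editorial sketch of the classification rule, not the embodiment's code; the mapping from the lowest visible body part to a scene type is a hypothetical simplification.

```python
from enum import Enum

class SceneType(Enum):
    CLOSE_UP = "close-up"        # body above the shoulders
    CLOSE_SHOT = "close shot"    # body above the chest
    MEDIUM_SHOT = "medium shot"  # body above the knees
    PANORAMA = "panorama"        # entire body plus some surroundings
    LONG_SHOT = "long shot"      # the environment dominates the frame

# Hypothetical mapping from the lowest visible body part to a scene type.
_LOWEST_PART_TO_SCENE = {
    "shoulders": SceneType.CLOSE_UP,
    "chest": SceneType.CLOSE_SHOT,
    "knees": SceneType.MEDIUM_SHOT,
    "feet": SceneType.PANORAMA,
}

def classify_by_lowest_visible_part(part: str) -> SceneType:
    """Anything beyond a full-body view counts as a long shot."""
    return _LOWEST_PART_TO_SCENE.get(part, SceneType.LONG_SHOT)
```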
  • the scene type corresponding to the video material may be regarded as a collection of the respective scene types of a plurality of picture materials.
  • the electronic device can record the start time and duration, or the start time and the end time, or the start time, the duration and the end time of each scene type.
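A video material's scene types can thus be recorded as a sequence of timed segments. The record shape below is a hypothetical sketch of the start time/duration/end time bookkeeping just described; field names are illustrative.

```python
from dataclasses import dataclass

@dataclass
class SceneSegment:
    """One recorded run of a scene type inside a video material."""
    scene_type: str
    start: float     # seconds from the beginning of the video
    duration: float  # seconds

    @property
    def end(self) -> float:
        # end time is derivable, so storing any two of the three values suffices
        return self.start + self.duration

# A video material's scene types as an ordered list of segments:
segments = [
    SceneSegment("close shot", 0.0, 2.5),
    SceneSegment("medium shot", 2.5, 4.0),
    SceneSegment("long shot", 6.5, 3.0),
]
```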
  • The electronic device can classify the scene type of the material (that is, determine the scene type of the picture material) by adopting technologies such as face recognition, semantic recognition, salient feature recognition, and semantic segmentation.
  • the electronic device determines the face recognition frame of any material based on the face recognition technology.
  • the electronic device determines that the scene type of the material is a face close-up.
  • the electronic device determines that the scene type of the material is a face close shot.
  • the specific values of the threshold value A1 and the threshold value A2 may be set according to factors such as empirical values and face recognition technical means.
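The face-frame-ratio decision can be sketched as below. The specific values of A1 and A2 are not disclosed in the text, so the numbers here are purely hypothetical placeholders; only the two-threshold structure (close-up above A1, close shot between A2 and A1) reflects the description.

```python
A1 = 0.25  # hypothetical: ratio at or above which the frame counts as a face close-up
A2 = 0.10  # hypothetical: lower bound of the face close shot band

def classify_face_scene(face_frame_area: float, picture_area: float):
    """Classify by the face recognition frame's share of the picture area."""
    ratio = face_frame_area / picture_area
    if ratio >= A1:
        return "face close-up"
    if ratio >= A2:
        return "face close shot"
    return None  # fall through to the other scene-type checks
```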
  • Electronic devices can perform face recognition on any material based on face recognition technology.
  • the electronic device can multiply the area of the head by a threshold to obtain the person recognition frame.
  • the electronic device determines that the scene type of the material is a person close-up.
  • the electronic device determines that the scene type of the material is a person close shot.
  • the specific values of the threshold value B1 and the threshold value B2 may be set according to factors such as empirical values.
  • the electronic device determines the semantic recognition result and the salient feature recognition result of any material based on the semantic segmentation technology and the salient feature recognition technology.
  • the electronic device determines that the scene type of the material is a food close-up.
  • the electronic device determines that the scene type of the material is a food close shot.
  • the specific values of the threshold C1 and the threshold C2 can be set according to factors such as empirical values.
  • the electronic device determines that the scene of the material is a large-aperture close-up of a non-person subject.
  • the electronic device determines the semantic recognition result and the salient feature recognition result of any material based on semantic segmentation technology and salient feature recognition technology.
  • the electronic device determines that the scene of the material is a close shot of a salient flower.
  • the specific values of the threshold D1 and the threshold D2 may be set according to factors such as empirical values.
  • the electronic device determines that the scene of the material is a close shot of a salient pet.
  • the specific values of the threshold E1 and the threshold E2 may be set according to factors such as empirical values.
  • Electronic devices can perform face recognition on any material based on face recognition technology.
  • the electronic device determines that the scene of the material is not a medium shot of a person.
  • the electronic device determines the salient feature recognition result of any material based on the salient feature recognition technology.
  • the electronic device determines that the scene of the material is a salient long shot.
  • the electronic device determines the screen segmentation result of any material based on the semantic segmentation technology.
  • the electronic device determines that the scene of the material is a landscape long shot.
  • the threshold value G may be set to be greater than or equal to 90%, and the specific value of the threshold value G is not limited in this embodiment of the present application.
  • the preset targets can be landscape features, such as sea, sky, mountain peaks, etc.
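The long-shot test above can be sketched as a proportion check over a semantic segmentation result. The landscape labels follow the preset targets named in the embodiment (sea, sky, mountain peaks); representing the segmentation as one label per pixel is an assumption for illustration.

```python
# Illustrative sketch: semantic segmentation labels the picture, and if
# the preset landscape targets (sea, sky, mountain peaks, ...) cover at
# least threshold G of the picture, the scene is a landscape long shot.
G = 0.9  # the embodiment suggests G may be set to 90% or more

LANDSCAPE_LABELS = {"sea", "sky", "mountain"}

def is_landscape_long_shot(pixel_labels):
    # pixel_labels: one semantic label per pixel (assumed representation)
    hits = sum(1 for lbl in pixel_labels if lbl in LANDSCAPE_LABELS)
    return hits / len(pixel_labels) >= G

assert is_landscape_long_shot(["sky"] * 95 + ["person"] * 5)
assert not is_landscape_long_shot(["sky"] * 50 + ["road"] * 50)
```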
  • the electronic device determines that the scene of the material is a medium shot.
  • categories A to E are in the close-up range.
  • categories G and H are long shots, and the determination order is H before G (H > G).
  • categories F and I are medium shots.
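Running the scene determinations in a fixed order can be sketched as follows. Only "H before G" is stated above; the full ordering used here and the fallback to category I are assumptions, and the predicates are placeholders standing in for the recognition steps described earlier.

```python
# Illustrative sketch: the close-up categories A-E are tried first, then
# the long shots with H determined before G, then the medium shots. The
# ordering beyond "H before G" is an assumption.
def classify(material, tests_in_order):
    # tests_in_order: list of (category, predicate); the first hit wins
    for category, test in tests_in_order:
        if test(material):
            return category
    return "I"  # assumed fallback category: a medium shot

# Placeholder predicates: "the material exhibits category c"
order = [(c, lambda m, c=c: c in m) for c in "ABCDEFHG"]  # H before G

assert classify({"H", "G"}, order) == "H"  # H is determined before G
assert classify({"G"}, order) == "G"
assert classify(set(), order) == "I"
```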
  • Camera movement, also called a moving shot, mainly refers to the movement of the lens itself.
  • camera movement is related to the type of the material; that is, the camera movement corresponding to a picture material and the camera movement corresponding to a video material may be the same or different.
  • a transition can be understood as the switch between paragraphs or between scenes.
  • each paragraph (the smallest unit that constitutes a video is a shot; shots connected together form a sequence of shots) has a single, relatively complete meaning, such as expressing an action process, a correlation, or an idea. A paragraph is a complete narrative level in a video, just like an act in a drama or a chapter in a novel.
  • the paragraphs are connected together to form a complete video. Therefore, the paragraph is the most basic structural form of a video, and the structural level of video content is expressed through paragraphs.
  • a video template can be understood as the theme or style of the video.
  • the types of video templates may include but are not limited to: travel, parent-child, party, sports, food, scenery, retro, city, night, humanities, etc.
  • the parameters in any video template may include, but are not limited to: scene type, camera movement, speed, transition, and the like.
  • different video templates differ in at least one of the following parameters: scene type, camera movement, speed, and transition.
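A video template can be sketched as a bundle of the parameters listed above. The concrete parameter values below are illustrative assumptions; only the parameter names and the rule that any two templates differ in at least one of them come from the description.

```python
# Illustrative sketch of video templates as parameter bundles; any two
# templates differ in at least one of scene type, camera movement,
# speed, and transition. The concrete values here are assumptions.
TEMPLATES = {
    "travel": {"scene_types": ["long_shot", "medium_shot"],
               "camera_move": "pan", "speed": 1.0, "transition": "fade"},
    "parent_child": {"scene_types": ["close_up", "medium_shot"],
                     "camera_move": "push", "speed": 0.8,
                     "transition": "dissolve"},
}

def differ_in_at_least_one_parameter(a, b):
    return any(a[k] != b[k] for k in a)

assert differ_in_at_least_one_parameter(TEMPLATES["travel"],
                                        TEMPLATES["parent_child"])
```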
  • the electronic device has a function of generating a video from stored materials, so that a video can be generated from one or more picture materials and/or video materials in the electronic device.
  • the electronic device provides the user with a variety of entry ways for generating a video, which enables the user to generate the video in a timely and rapid manner and improves convenience for the user.
  • the entry methods for generating videos in the embodiments of the present application include, but are not limited to, the gallery application, and include, but are not limited to, the three methods described above.
  • FIGS. 3A-3F are schematic diagrams of human-computer interaction interfaces provided by an embodiment of the present application.
  • a mobile phone is taken as the electronic device for exemplary illustration.
  • the handset may display a user interface 11 as exemplarily shown in Figure 3A.
  • the user interface 11 may be the home screen of the desktop, and the user interface 11 may include but not limited to: a status bar, a navigation bar, a calendar indicator, a weather indicator, and multiple application icons.
  • the application icons may include: the icon 301 of the gallery application, and may also include: the icon of the Huawei Video application, the icon of a music application, the icon of the Phone Manager application, the icon of the Settings application, the icon of the Huawei Mall application, the icon of the Smart Life application, the icon of the Sports Health application, the icon of a calling application, the icon of an instant messaging application, the icon of a browser application, the icon of a camera application, etc.
  • after detecting that the user performs an operation of opening the gallery application in the user interface 11 shown in FIG. 3A (for example, clicking the icon 301 of the gallery application), the mobile phone can display the user interface 12 exemplarily shown in FIG. 3B.
  • the user interface 12 is used to display the page corresponding to the album category in the gallery application.
  • the user interface 12 may include: a control 3021, which is used to enter a display interface containing all picture materials and/or video materials in the mobile phone, and a control 3023, which is used to enter the display interface corresponding to the album category in the gallery application.
  • the specific implementation manner of the user interface 12 may include various manners.
  • the user interface 12 is divided into two groups.
  • the first group consists of two parts.
  • the title of the first group is illustrated by taking the text "album” as an example in FIG. 3B .
  • a search box is used to provide the user with a way to search for picture material and/or video material according to keywords such as photos, people, places, etc.
  • the second part includes controls 3021, and controls for entering a display interface containing only video material.
  • the second group displays pictures obtained by means of screenshots or an application.
  • the title of the third part is illustrated in FIG. 3B by using the text “other albums (3)” and a rounded rectangular frame as an example.
  • the user interface 12 further includes: a control 3022 , a control 3024 and a control 3025 .
  • the control 3022 is used to enter the display interface corresponding to the photo category in the gallery application.
  • the control 3024 is used to enter the display interface corresponding to the moment category in the gallery application.
  • the control 3025 is used to enter the display interface corresponding to the discovery category in the gallery application.
  • the user interface 12 may further include: controls for implementing functions such as deleting an existing group in the user interface 12, changing the name of an existing group, and controls for adding a new group in the user interface 12 .
  • after detecting that the user performs an operation such as clicking the control 3021 in the user interface 12 shown in FIG. 3B, the mobile phone can display the user interface 13 exemplarily shown in FIG. 3C; the user interface 13 is the display interface for all the picture materials and/or video materials in the mobile phone.
  • this embodiment of the present application does not limit parameters such as the number of picture materials displayed in the user interface 13, the display area of the picture materials, the display position of the picture materials, the display content of the video materials, the number of video materials displayed, the display area of the video materials, the display position of the video materials, and the order of each type of material.
  • the user interface 13 displays: video material 3031 , picture material 3032 , picture material 3033 , video material 3034 , picture material 3035 , picture material 3036 , picture material 3037 and video material 3038.
  • the electronic device may select the image of any frame in the video material as the picture displayed to the user. Therefore, in FIG. 3C, the images displayed for the video material 3031, the video material 3034 and the video material 3038 are the images of arbitrary frames in the respective video materials.
  • the mobile phone may display the user interface 14 exemplarily shown in FIG. 3D.
  • the user interface 14 is the display interface where the user selects the picture material and/or video material used to generate the video.
  • the specific implementation manner of the user interface 14 may include various.
  • the user interface 14 includes the user interface 13 and an editing interface overlaid on the user interface 13 .
  • a control for magnifying and displaying the picture material/video material can be displayed in the upper left corner of each picture material/video material (FIG. 3D uses two diagonal, oppositely pointing arrows as an example for illustration), and a control for selecting the picture material/video material is displayed in the lower right corner of each picture material/video material (FIG. 3D uses a rounded rectangular box as an example for illustration).
  • the editing interface may include: a control 304, where the control 304 is used to start creating with the picture material and/or video material selected by the user.
  • the editing interface may further include: controls for performing operations such as sharing, selecting all, deleting, and more on the picture material and/or video material selected by the user, which is not limited in this embodiment of the present application.
  • after detecting that the user performs an operation such as clicking the control 304 in the user interface 14 shown in FIG. 3D, the mobile phone can display the window 305 exemplarily shown in FIG. 3E on the user interface 14 (FIG. 3E uses the text "Video", the text "Puzzle" and rounded rectangular boxes as an example for illustration).
  • when the user has selected picture material, or video material, or both picture material and video material, after the user performs an operation such as clicking on the text "Video" in the window 305, the mobile phone can display a user interface for editing the new video.
  • when the user has selected only picture material, after the user performs an operation such as clicking on the text "Puzzle" in the window 305, the mobile phone can display a user interface for editing a new picture.
  • when the user has selected video material, or both picture material and video material, after the user performs an operation such as clicking on the text "Puzzle" in the window 305, the mobile phone does not display a user interface for editing a new picture; instead, the text "Puzzle does not support video" can be displayed to prompt the user to deselect the video material.
  • after the mobile phone detects that the user performs an operation such as clicking the text "Video" in the window 305 shown in FIG. 3E, based on the video material 3031, picture material 3032, picture material 3033, video material 3034, picture material 3035, picture material 3036, picture material 3037 and video material 3038 selected by the user, the mobile phone can determine the type of the video template to be the parent-child type; then, based on the parent-child video template, the mobile phone generates a video from the materials selected by the user and can display the user interface 15 exemplarily shown in FIG. 3F for displaying the generated video.
  • the segments in the generated video correspond to segments in the parent-child video template.
  • the video material 3031, picture material 3032, picture material 3033, video material 3034, picture material 3035, picture material 3036, picture material 3037 and video material 3038 each appear at least once in the generated video, and any two adjacent clips in the generated video do not use the same material.
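The two placement rules above (every selected material appears at least once, and adjacent clips never carry the same material) can be sketched as a simple check; the material names here are illustrative.

```python
# Illustrative sketch of the two placement constraints: every selected
# material appears at least once in the generated video, and no two
# adjacent clips use the same material.
def valid_arrangement(clips, materials):
    # clips: ordered list of material ids, one per clip
    every_used = materials <= set(clips)
    no_adjacent_repeat = all(a != b for a, b in zip(clips, clips[1:]))
    return every_used and no_adjacent_repeat

assert valid_arrangement(["v1", "p1", "v1", "p2"], {"v1", "p1", "p2"})
assert not valid_arrangement(["v1", "v1", "p1"], {"v1", "p1"})   # adjacent repeat
assert not valid_arrangement(["v1", "p1"], {"v1", "p1", "p2"})   # p2 never used
```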
  • the electronic device can automatically generate a video based on the picture materials and/or video materials selected by the user in the gallery application.
  • the user interface 15 is used to display controls for editing the generated video.
  • the user interface 15 may include: a preview area 306 , a progress bar 307 , a control 3081 , a control 3082 , a control 3083 , a control 3084 , a control 3085 , a control 30811 , a control 30812 , a control 30813 , a control 30814 , and a control 309 .
  • the preview area 306 is used to display the generated video, which is convenient for the user to watch and adjust the video.
  • the progress bar 307 is used to represent the duration of the video under any video template (FIG. 3F uses “00:00” to exemplarily represent the start time of the video, “00:32” to exemplarily represent the end time of the video, and a sliding bar exemplarily represents the progress of the video).
  • the control 3081 is used to provide different types of video templates.
  • the control 30811 is used to represent the parent-child type of video template (FIG. 3F uses the text "parent-child" and a bold rounded rectangular box to exemplarily represent it), the control 30812 is used to represent the travel type of video template (FIG. 3F uses the text "travel" and a normally displayed rounded rectangular box), the control 30813 is used to represent the food type of video template (FIG. 3F uses the text "food" and a normally displayed rounded rectangular box), and the control 30814 is used to represent the sports type of video template (FIG. 3F uses the text "sports" and a normally displayed rounded rectangular box). Therefore, when it is recognized that the material matches a certain type of video template, the electronic device can still provide the user with video templates of types other than this type, which helps to meet various needs of the user.
  • the control 3082 is used for functions such as editing the frame of the video, changing the duration of the video, adding new pictures and/or videos in the video, and deleting pictures and/or videos in the video. Therefore, a video of a corresponding length and/or a corresponding material is generated based on user requirements, taking into account the flexibility of video generation.
  • the control 3083 is used to change the music matched with the video template.
  • the control 3084 is used to change the filter of the video.
  • the control 3085 is used to add text in the video, such as adding text at the beginning and end, etc.
  • the control 309 is used to save the generated video for convenient use or viewing of the saved video.
  • the electronic device may display the generated video to the user through the preview area 306 .
  • since the type of the video template determined by the electronic device is the parent-child type, the electronic device displays the rounded rectangular box in the control 3081 in bold, which informs the user conveniently and quickly.
  • the user can perform operations such as selecting the type of video template, adjusting the frame of the video, adjusting the duration of the video, adding new picture material and/or video material to the video, selecting music that matches the video , select the filter of the video, add text in the video, etc., so that the electronic device can determine the video template that meets the user's wishes and generate the corresponding video.
  • the mobile phone can display the user interface 15 exemplarily shown in FIG. 3F, so that the user can choose a video template by clicking the control 30811, the control 30812 or the control 30813.
  • the mobile phone can display the user interface 21 exemplarily shown in FIG. 3G; the user interface 21 is used to edit factors such as the frame, duration, and materials included in the video to be generated.
  • the user interface 21 may include: a video playback area 3171 , a control 3172 , a control 3173 , a control 3174 , a control 3175 , a material playback area 3176 , and a control 3177 .
  • the video playing area 3171 is used to display the effect played by the video to be generated.
  • the control 3172 is used to enter the user interface for changing the frame of the video, where the frame of the video can be 16:9, 1:1, or 9:16, etc.
  • Control 3173 is used to enter the user interface for changing the duration of the video.
  • Control 3174 is used to enter the user interface for adding new material to the video.
  • Control 3175 is used to access material that already exists in the video.
  • the material playback area 3176 is used to display the playback effect of each material in the video.
  • Control 3177 is used to exit user interface 21 .
  • the mobile phone can display the user interface 22 exemplarily shown in FIG. 3H, and the user interface 22 is used to display the music corresponding to the video being edited.
  • the user interface 22 may include: a video playing area 3181 , a progress bar 3182 , a control 3183 , a control 3184 , and a control 3185 .
  • the video play area 3181 is used to display the effect played by the video to be generated.
  • the progress bar 3182 is used to display or change the playback progress of the video to be generated.
  • the control 3183 is used to display various types of video templates, such as the text "parent-child", “travel”, “food”, “sports” and other types.
  • the control 3184 is used to display the corresponding music under a certain type of video template, for example, "Song 1", “Song 2", and "Song 3" are displayed.
  • Control 3185 is used to exit user interface 22 .
  • the mobile phone can display the user interface 23 exemplarily shown in FIG. 3I, and the user interface 23 is used to display the filters of the video being edited.
  • the text "Parent-Child” is displayed in bold and there is a check mark in the display column corresponding to the text "Song 1" to indicate that the mobile phone currently selects a parent-child video template, and the music corresponding to the video template is Song 1 .
  • the corresponding rounded rectangular boxes respectively located before the words "song 1", “song 2", and “song 3" are used to display the images of the corresponding songs.
  • the specific display content of the image is not limited in this embodiment of the present application. For convenience of description, the embodiments of the present application take filling white as an example for illustration.
  • the user interface 23 may include: a video playing area 3191 , a progress bar 3192 , a control 3183 , a control 3184 , and a control 3185 .
  • the video play area 3191 is used to display the effect played by the video to be generated.
  • the progress bar 3192 is used to display or change the playback progress of the video to be generated.
  • the control 3183 is used to display each filter, such as the text "Filter 1", “Filter 2", “Filter 3", “Filter 4", “Filter 5" and other types.
  • with different filters, the video has different display effects, such as softening, black and white, and color deepening.
  • Control 3194 is used to exit user interface 23 .
  • the text “Filter 1” displayed in bold may indicate that the filter of the video currently selected by the mobile phone is Filter 1 .
  • the mobile phone can display the user interface 24 exemplarily shown in FIG. 3J, and the user interface 24 is used to display the titles of the video being edited.
  • the user interface 24 may include: a video playing area 3201 , a control 3202 , a control 3203 , a control 3204 , and a control 3185 .
  • the video play area 3201 is used to display the effect played by the video to be generated.
  • the control 3202 is used to select whether to add a title to the opening or ending credits.
  • the control 3193 is used to display each title, such as the text "Title 1", "Title 2", "Title 3", "Title 4", "Title 5" and so on. For any two different titles (e.g., Title 1 and Title 2), if the contents of Title 1 and Title 2 are the same, then Title 1 and Title 2 can be displayed on any screen of the video with different playback effects.
  • the playback effect can be understood as the effect formed by changing parameters such as font, thickness, and color of the text in the title.
  • Title 1 may be the text "Hours of the Weekend", and Title 1 is in italics.
  • Title 2 is the text "Weekend Hours", and Title 2 is in the Song typeface. If the contents of Title 1 and Title 2 are different, Title 1 and Title 2 may be displayed on any screen of the video with the same or different playback effects.
  • Title 1 may be the text "Hours of the Weekend.”
  • Title 2 is the text "Good day”.
  • Control 3194 is used to exit user interface 24 .
  • the bold display of the words "Title 1" may indicate that the mobile phone currently chooses to add Title 1 to the opening credits of the video.
  • the electronic device can provide the user with the function of manually editing the generated video, which is convenient for the user to configure parameters such as the duration, frame, video template, included materials and filters of the video based on their own wishes, thus enriching the style of the video.
  • the mobile phone can save the video after detecting that the user performs an operation such as clicking the control 309 in the user interface 15 shown in FIG. 3F .
  • FIGS. 3K-FIG. 3N are schematic diagrams of human-computer interaction interfaces provided by an embodiment of the present application.
  • the mobile phone may display the user interface 16 exemplarily shown in FIG. 3L, and the user interface 16 is used to display the page corresponding to the discovery category in the gallery application.
  • the control 3023 is changed from the bold display to the normal display
  • the control 3025 is changed from the normal display to the bold display.
  • control 3025 is shown in bold.
  • the user interface 16 may include: a control 312, where the control 312 is used to enter the display page of the picture material and/or video material stored in the mobile phone.
  • the specific implementation manner of the user interface 16 may include various.
  • the user interface 16 is divided into five parts.
  • the first part includes a search box for providing the user with a way to search for picture material and/or video material according to keywords such as photos, people, places, etc.
  • the second part includes a control for entering the creation of a new video by using a template method (FIG. 3L uses the text "template creation" and an icon as an example for illustration), a control 312, and a control for entering the creation of a new video by using a puzzle method (FIG. 3L likewise illustrates it with text and an icon).
  • the third section shows pictures divided by portraits.
  • the title of the third part is illustrated in FIG. 3L by taking the text “portrait” and the text “more” as examples.
  • the fourth part includes pictures and/or videos divided by location, as shown in Figure 3L, pictures and/or videos with the location of "Shenzhen", pictures and/or videos with the location of "Guilin City”, and pictures and/or videos with the location of Pictures and/or videos of "Chengdu City”.
  • the title of the fourth part is illustrated in FIG. 3L by taking the words "location” and the words "more” as examples.
  • Control 3022, Control 3023, Control 3024, and Control 3025 are displayed in the fifth section.
  • the user interface 16 may further include: controls for implementing editing of the user interface 16 such as adding a new group or deleting an existing group in the user interface 16 ( FIG. 3L uses three black dots as an example for illustration).
  • after detecting that the user performs an operation such as clicking the control 312 in the user interface 16 shown in FIG. 3L, the mobile phone can display the user interface 17 exemplarily shown in FIG. 3M; the user interface 17 is used to display the picture materials and/or video materials for generating the video.
  • the specific implementation manner of the user interface 17 may include various manners.
  • the user interface 17 includes a display area 313 and a window 314 overlying the display area 313 .
  • the display area 313 includes picture material and/or video material; a control for magnifying and displaying the picture material/video material is displayed in the upper left corner of each picture material/video material (FIG. 3M uses two diagonal, oppositely pointing arrows as an example for illustration), and a control for selecting the picture material/video material is displayed in the lower right corner of each picture material/video material (FIG. 3M uses a rounded rectangular box as an example for illustration).
  • parameters such as the number of picture materials displayed in the display area 313, the display area of the picture materials, the display position of the picture materials, the display content of the video materials, the number of video materials displayed, the display area of the video materials, the display position of the video materials, and the order of each type of material are not limited in this embodiment of the present application.
  • the display area 313 displays: video material 3031, picture material 3032, picture material 3033, video material 3034, picture material 3035, picture material 3036, picture material 3037 and video material 3038.
  • the description in Mode 1 will not be repeated here.
  • the window 314 may include: a control 3141 (FIG. 3M uses the icon "0/50" as an example for illustration, where "0" indicates that no picture material/video material is selected, and "50" indicates that there are 50 picture materials/video materials in the mobile phone), where the control 3141 is used to indicate the total number of picture materials/video materials stored in the mobile phone and the number of picture materials/video materials currently selected by the user; a control 3142, which is used to enter the display interface for starting to make a new video; and a preview area 3143, which is used to display the picture material and/or video material selected by the user.
  • after detecting that the user performs an operation of selecting picture material/video material in the display area 313 shown in FIG. 3M, the mobile phone can display the changes based on the user operation in the user interface 17 exemplarily shown in FIG. 3N.
  • the video material 3031, picture material 3032, picture material 3033, video material 3034, picture material 3035, picture material 3036, picture material 3037 and video material 3038 are used as examples for illustration.
  • the controls for selecting the picture material/video material, located in the lower right corner of the video material 3031, picture material 3032, picture material 3033, video material 3034, picture material 3035, picture material 3036, picture material 3037 and video material 3038 in the display area 313 of the user interface 17, change in display (FIG. 3N uses a check mark added to a rounded rectangular box as an example for illustration).
  • the control 3141 in the user interface 17 displays the changed number of picture materials/video materials selected by the user (FIG. 3N uses the icon "8/50" as an example for illustration, where "8" indicates that the user has selected eight picture materials/video materials, and "50" indicates that there are 50 picture materials/video materials in the mobile phone that can be used to freely create a new video).
  • the preview area 3143 in the user interface 17 displays the change in the selected picture materials/video materials (FIG. 3N shows the video material 3031, picture material 3032, picture material 3033 and video material 3034, with the picture material 3035, picture material 3036, picture material 3037 and video material 3038 displayable by dragging the slider bar, as an example for illustration).
  • after the mobile phone detects that the user performs an operation of generating a new video in the user interface 17 shown in FIG. 3N (for example, clicking the control 3142 in the user interface 17), based on the video material 3031, picture material 3032, picture material 3033, video material 3034, picture material 3035, picture material 3036, picture material 3037 and video material 3038 selected by the user, the mobile phone can determine the type of the video template to be the parent-child type; then, based on the parent-child video template, the mobile phone generates a video from the materials selected by the user and may display the user interface 15 exemplarily shown in FIG. 3F.
  • for a specific implementation of the generated video mentioned here, reference may be made to the description of the generated video in Mode 1.
  • the electronic device can automatically generate a video based on the picture materials and/or video materials selected by the user in the gallery application.
  • the electronic device can display the generated video to the user through the preview area 306 .
  • the user interface 15 is used to display controls for editing the generated video. Therefore, the electronic device can provide the user with the function of manually editing the generated video, which is convenient for the user to configure parameters such as the duration, frame, video template, included materials and filters of the video based on their own wishes, thus enriching the style of the video.
  • the mobile phone can save the video after detecting that the user performs an operation such as clicking the control 309 in the user interface 15 shown in FIG. 3F .
  • FIGS. 3O-FIG. 3Q and FIG. 3T are schematic diagrams of human-computer interaction interfaces provided by an embodiment of the present application.
• the mobile phone may display the user interface 18 exemplarily shown in FIG. 3P, and the user interface 18 is used to display the page corresponding to the moment category in the gallery application.
  • the control 3023 changes from the bold display to the normal display
  • the control 3024 changes from the normal display to the bold display.
• After detecting the user-instructed operation of opening the gallery application (such as clicking the icon 301 of the gallery application), the mobile phone may display the user interface 18 exemplarily shown in FIG. 3P, where the user interface 18 is used to display the page corresponding to the moment category in the gallery application.
  • control 3024 is shown in bold.
  • the user interface 18 may include: a control 3151, where the control 3151 is used to enter a display page for creating a new video in the manner provided by the embodiment of the present application.
  • the specific implementation manner of the user interface 18 may include various.
  • the user interface 18 is divided into three parts.
  • the first part includes a search box for providing the user with a way to search for picture material and/or video material according to keywords such as photos, people, places, etc.
• the second part includes: a control 3152 for displaying a video 1 generated from picture materials and/or video materials within a period of time in the mobile phone (FIG. 3P takes the text "hours of the weekend", the date "September 2020" and a picture material as an example); a control 3153 for displaying a video 2 generated from picture materials and/or video materials within a period of time in the mobile phone (FIG. 3P takes the date "May 2020" and a picture material as an example); and a control 3154 for displaying a video 3 generated from picture materials and/or video materials within a period of time in the mobile phone (FIG. 3P takes the text "hours of the weekend", the date "April 2020" and a picture material as an example). It should be noted that the picture materials/video materials in video 1, video 2, and video 3 may or may not be repeated, which is not limited in this embodiment of the present application.
  • the video 1, the video 2 and the video 3 are all generated by the electronic device according to the solution provided in this application.
  • Control 3022, Control 3023, Control 3024, and Control 3025 are displayed in the third section.
  • the title of the user interface 18 is illustrated by taking the word “moment” as an example in FIG. 3P .
• the mobile phone may display the window 316 exemplarily shown in FIG. 3Q on the user interface 18, where the window 316 is used to display picture materials and/or video materials from which movies or puzzles can be generated.
• After the mobile phone detects that the user performs an operation such as clicking the text "create a movie" in the window 316 shown in FIG. 3Q, the mobile phone can display the user interface 17 exemplarily shown in FIG. 3M.
  • the specific implementation manner of the user interface 17 may refer to the content described above, which will not be repeated here.
• After detecting that the user performs an operation of selecting picture materials/video materials in the display area 313 shown in FIG. 3M, the mobile phone can display, in the user interface 17 exemplarily shown in FIG. 3N, the display changes based on the user operation.
  • the specific implementation of the display change of the user interface 17 may refer to the content described above, which will not be repeated here.
• After the mobile phone detects that the user performs an operation of generating a new video in the user interface 17 shown in FIG. 3N (for example, clicking the control 3142 in the user interface 17), the mobile phone can determine, based on the video material 3031, the picture material 3032, and the picture material 3033 selected by the user, that the type of the video template is the parent-child type. Then, based on the parent-child type video template, the mobile phone generates a video from the video material 3031, picture material 3032, picture material 3033, video material 3034, picture material 3035, picture material 3036, picture material 3037, and video material 3038 selected by the user, and may display the user interface 15 exemplarily shown in FIG. 3F.
• For a specific implementation manner of the generated video mentioned here, reference may be made to the description of the generated video in Mode 1.
  • the mobile phone may display the user interface 19 exemplarily shown in FIG. 3T.
  • the user interface 19 may include a control 317, and the control 317 is used to enter an interface that can play the video 1, and the video 1 mentioned here is the video generated based on the solution of the present application.
  • the mobile phone may display the user interface 15 exemplarily shown in FIG. 3F .
• the electronic device can automatically generate a video based on the picture materials and/or video materials selected by the user in the gallery application.
  • the electronic device can display the generated video to the user through the preview area 306 .
  • the user interface 15 is used to display controls for editing the generated video. Therefore, the electronic device can provide the user with the function of manually editing the generated video, which is convenient for the user to configure parameters such as the duration, frame, video template, included materials and filters of the video based on their own wishes, thus enriching the style of the video.
  • the mobile phone can save the video after detecting that the user performs an operation such as clicking the control 309 in the user interface 15 shown in FIG. 3F .
• the control size, control position, display content, and jump mode of the user interfaces mentioned in the first, second, and third modes
  • the mobile phone can save the generated video in the gallery application.
  • FIGS. 3R to 3S are schematic diagrams of a human-computer interaction interface provided by an embodiment of the present application.
• After detecting that the user performs an operation of opening the gallery application in the user interface 11 shown in FIG. 3A (for example, clicking the icon 301 of the gallery application), the mobile phone can display the user interface 12' exemplarily shown in FIG. 3R, where the user interface 12' is used to display the page of albums in the gallery application.
  • the interface layout of the user interface 12 ′ is basically the same as that of the user interface 12 shown in FIG. 3B , and the specific implementation can refer to the description of the user interface 12 shown in FIG. 3B in the first mode, which will not be repeated here.
• the number of videos stored in the mobile phone is incremented by 1, so the user interface 12' in FIG. 3R shows that the number of all photos has increased from "182" to "183", and the number of videos has increased from "49" to "50".
• After detecting that the user performs an operation such as clicking the control 3021 in the user interface 12' shown in FIG. 3R, the mobile phone can display the user interface 13' exemplarily shown in FIG. 3S, where the user interface 13' is the display interface of the pictures and videos in the mobile phone.
• the interface layout of the user interface 13' is basically the same as that of the user interface 13 shown in FIG. 3C.
• the pictures/videos stored in the user interface 13' each move back by one position as a whole; therefore, in order of time from nearest to farthest from the current moment, the first clip displayed in the user interface 13' in FIG. 3S is the newly generated video 3039.
• After detecting that the user performs an operation such as clicking the video 3039 in the user interface 13' shown in FIG. 3S, the mobile phone can play the video 3039.
  • each video template may correspond to a piece of music.
  • the music corresponding to different video templates is different.
  • the electronic device can keep the music corresponding to each video template unchanged by default, or can change the music corresponding to each video template based on user selection, so that the electronic device can flexibly set according to the actual situation.
  • the music may be preset by the electronic device, or manually added by the user, which is not limited in this embodiment of the present application.
• video templates are also related to camera movement, speed, and transitions. Generally, regardless of whether the music corresponding to the video templates is the same, different video templates differ in at least one of camera movement, speed, and transition.
• each segment of the music can be matched with the set camera movement, speed, and transition.
• the camera movement and transition may be related to the type of material: the camera movement used for the video material may be the same as or different from that used for the picture material, and the transition used for the video material may be the same as or different from that used for the picture material.
  • the video material can usually be set with a playback effect corresponding to the speed.
  • FIG. 4A-FIG. 4J are schematic diagrams showing the effect of the picture material 3033 after the mirror movement is adopted.
  • the mobile phone stores the picture material 3033 exemplarily shown in FIG. 4A , where the picture material 3033 can be referred to the description of the embodiment in FIG. 3C , which is not repeated here.
• When the mobile phone displays the picture material 3033 using a mirror movement effect that moves diagonally, the mobile phone can change from displaying the interface 11 exemplarily shown in FIG. 4B to displaying the interface 12 exemplarily shown in FIG. 4C, where the interface 11 is the area a1 of the picture material 3033, the interface 12 is the area a2 of the picture material 3033, and the area a1 and the area a2 are located at different positions of the picture material 3033.
• the electronic device may also adopt a mirror movement effect of moving upward, downward, leftward, rightward, etc., which is not limited in this embodiment of the present application.
• the mobile phone can change from displaying the interface 11 exemplarily shown in FIG. 4B to displaying the interface 13 exemplarily shown in FIG. 4D, where the interface 11 is the area a1 of the picture material 3033, and the interface 13 is an enlarged view of the area a3 of the picture material 3033.
  • the electronic device may also use the zoomed mirror movement effect, which is not limited in this embodiment of the present application.
  • the electronic device can display the picture material 20 by moving the mirror from top to bottom.
  • the mobile phone may change from displaying the interface 21 exemplarily shown in FIG. 4F to displaying the interface 22 exemplarily shown in FIG. 4G , wherein the interface 21 is the area b1 of the picture material 20 , the interface 22 is the area b2 of the picture material 20 , And the area b1 and the area b2 are located at different positions of the picture material 20 .
  • the shape of the region composed of the region b1 and the region b2 may be set to be a square.
• the electronic device may make the area formed by the area b1 and the area b2 include as much as possible of the regions corresponding to the characters and faces in the material.
  • the electronic device can display the picture material 30 by using a mirror movement effect that moves from left to right.
• the mobile phone can change from displaying the interface 31 exemplarily shown in FIG. 4I to displaying the interface 32 exemplarily shown in FIG. 4J, where the interface 31 is the area c1 of the picture material 30, the interface 32 is the area c2 of the picture material 30, and the area c1 and the area c2 are located at different positions of the picture material 30.
  • the shape of the region composed of the region c1 and the region c2 may be set to be a square.
• the electronic device may make the area formed by the area c1 and the area c2 include as much as possible of the regions corresponding to the characters and faces in the material.
• This enables the video generated by the electronic device to display the material to the maximum extent, enrich the content of the video, and ensure the cinematic and pictorial sense brought by the video.
  • FIG. 5 shows a schematic diagram of the effect of the video material 3038 at different speeds.
• For the video material 3038, reference may be made to the description of the embodiment in FIG. 3C, which will not be repeated here.
• the video generated by the electronic device based on the material plays the video material 3038 in both the time period from t0 to t1 and the time period from t2 to t3, where the time period from t2 to t3 is equal in length to the time period from t0 to t1.
• the speed at which the electronic device plays the video material 3038 in the time period from t0 to t1 is three times the speed at which it plays the video material 3038 in the time period from t2 to t3.
  • the speed may also include a speed of any ratio, which is not limited in this embodiment of the present application.
  • FIG. 6 shows a schematic diagram of the effect of the transition between the picture material 3033 and the picture material 3032 .
  • the picture material 3033 and the picture material 3032 may refer to the description of the embodiment in FIG. 3C , which will not be repeated here.
• Assume that the electronic device plays the picture material 3033 in the time period from t4 to t5, plays the picture material 3032 in the time period from t6 to t7, and the picture material 3033 transitions to the picture material 3032 with a superimposing-and-blurring transition effect. Then the electronic device plays the picture material 3033 in the time period from t4 to t5, plays, in the time period from t5 to t6, the gradually enlarged picture material 3032 superimposed on the blurred picture material 3033, and plays the picture material 3032 in the time period from t6 to t7.
  • the transition may also include effects such as focus blur, which is not limited in this embodiment of the present application.
  • the electronic device can realize the scene scheduling and shot scheduling of the material according to the set movement, speed and transition.
  • video templates are related to scene types. Generally, regardless of whether the music corresponding to the video templates is the same, different types of video templates correspond to different scene types; video templates of the same type correspond to the same scene type.
• When the user selects the music corresponding to the default video template, the electronic device does not need to adjust the duration of each segment of the video template, so that the beat points of the music can be matched.
• the electronic device needs to perform beat detection on the music selected by the user to obtain the tempo of that music, then determine whether the duration of each segment of the video template is an integer multiple of the obtained tempo, and adjust the durations of the segments whose durations are not integer multiples of the obtained tempo, so that the duration of each segment in the video template is an integer multiple of the tempo.
• a BPM (beats per minute) detection method can be used to detect the beat of the music to obtain the tempo (bpm), where the electronic device uses a digital signal processing (DSP) method to analyze the audio to obtain the beat of the music.
  • the usual algorithm divides the original audio into several segments, then obtains the spectrum through fast Fourier transform, and finally performs filter analysis based on the sound energy to obtain the beat of the music.
• the scene type corresponding to each segment in each video template is set in advance based on actual experience (for example, at positions where the scene type of a single segment, or the scene types of multiple consecutive segments, are perceptually strong for the user).
  • the embodiment of the present application may use the beat of the music as the dividing line, divide the entire piece of music into multiple segments, and match each segment with the set scene type.
  • each segment is an integer multiple of the tempo of the music, so as to realize the music timing of each segment.
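The duration adjustment described above (each segment an integer multiple of the beat, so cuts land on beat points) can be sketched as follows. This is a hypothetical helper, not the patent's actual procedure; the name `snap_to_beat` and the rounding rule are illustrative assumptions.

```python
def snap_to_beat(segment_durations, bpm):
    """Round each segment duration (in seconds) to the nearest positive
    integer multiple of the beat duration, so that every segment boundary
    falls on a beat point of the music.
    """
    beat = 60.0 / bpm  # seconds per beat
    snapped = []
    for d in segment_durations:
        beats = max(1, round(d / beat))  # keep every segment at least one beat long
        snapped.append(beats * beat)
    return snapped

# 120 BPM -> beat = 0.5 s; 1.3 s snaps to 1.5 s, 2.0 s is already a multiple.
print(snap_to_beat([1.3, 2.0, 0.2], 120))  # [1.5, 2.0, 0.5]
```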
• the beat of music, that is, the meter, refers to the combination rule of upbeats and downbeats, and specifically refers to the total length of the notes in each measure of the score. These notes can be, for example, half notes, quarter notes, eighth notes, etc.
  • a piece of music can be composed of multiple beats, and the beats of a piece of music are usually fixed.
  • the electronic device can adjust the arrangement order of the materials in various ways.
  • the electronic device may prioritize each segment.
  • the high-priority segments may include, but are not limited to, segments such as the beginning part, the chorus part, the ending part, or the accent part of the music.
• the electronic device can give priority to satisfying the scene types set for the high-priority segments, placing the material corresponding to the scene type set for a high-priority segment into that segment, and then placing the remaining material into the remaining segments according to the scene types set for those segments. At this time, the scene type of the remaining material and the scene type set for the remaining segments can be the same or different.
• the electronic device may preferentially satisfy the scene type set for the frontmost segment, placing the material corresponding to that scene type into the frontmost segment, and then placing the remaining material into the remaining segments according to the scene types set for those segments.
  • the scene type of the remaining material and the scene type set for the remaining clips can be the same or different.
  • the electronic device may also preferentially satisfy the scene type set by the segment located at the front among the remaining segments.
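The priority-based placement described above can be sketched as a greedy loop: fill the high-priority segments with material of the matching scene type first, then fill whatever segments remain front-to-back with the leftover material. This is a hypothetical outline only; `segments`, `materials`, and the tie-breaking rule are illustrative assumptions, not the patent's actual algorithm.

```python
def arrange_materials(segments, materials):
    """Greedy placement of materials into music segments.

    segments:  list of (priority, scene_type), one per segment; higher
               priority = more important (opening, chorus, ending, ...).
    materials: list of (name, scene_type) selected by the user.
    Returns the material name placed in each segment (None if unfilled).
    """
    remaining = list(materials)
    placement = [None] * len(segments)
    # Visit high-priority segments first; ties resolved front-to-back.
    order = sorted(range(len(segments)),
                   key=lambda i: (-segments[i][0], i))
    for i in order:
        if not remaining:
            break
        want = segments[i][1]
        # Prefer an exact scene-type match; otherwise take the first leftover
        # (matching the text: remaining scene types may be the same or not).
        match = next((m for m in remaining if m[1] == want), remaining[0])
        remaining.remove(match)
        placement[i] = match[0]
    return placement

segments = [(1, "C"), (3, "A"), (1, "B"), (2, "C")]   # (priority, scene type)
materials = [("clip1", "B"), ("photo1", "A"), ("photo2", "C"), ("clip2", "B")]
print(arrange_materials(segments, materials))  # ['clip1', 'photo1', 'clip2', 'photo2']
```

The high-priority segments (the chorus-like "A" slot and the "C" slot with priority 2) receive exact scene-type matches; the low-priority segments absorb whatever is left.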
• the beat of the music can be used as the dividing line to divide the entire piece of music into multiple segments, where the multiple consecutive segments are matched with the set scene types, and the scene types of the remaining segments are not limited.
  • the multiple continuous segments may be segments such as the beginning part, the ending part, or the chorus part of the music.
  • the scene types corresponding to each of the multiple continuous segments are introduced.
  • A represents the type of scene corresponding to the close shot
  • B represents the type of scene corresponding to the medium shot
  • C represents the type of scene corresponding to the long shot.
• the scene types corresponding to the five consecutive segments corresponding to the beginning and/or the end of the music may be CCCBA, respectively, so that the generated video has a suspenseful effect at the beginning, or a to-be-continued effect at the end.
• the scene types corresponding to the four consecutive segments corresponding to the beginning and/or the end of the music may be ABBC, respectively, so that the generated video has, at the beginning or at the end, the effect of preparing for the narration of the video.
• the scene types corresponding to the five consecutive segments corresponding to the segment after the beginning and/or the segment before the end of the music may be BBBBB, respectively, so that the generated video has a narration effect in the corresponding segments.
  • the scene types corresponding to the five consecutive segments corresponding to the chorus part of the music can be CCCCA respectively, so that the generated video has the effect of enhancing the narration of the video to a climax at the chorus.
  • embodiments of the present application include, but are not limited to, specific implementations of scene types corresponding to the above-mentioned multiple consecutive pieces of music.
  • the electronic device can adjust the arrangement order of the materials according to the scene type of the segment set at the beat point of the music.
• the electronic device arranges the materials according to the sequence of scene types set in the video template, and adds scene and lens sense to the materials according to the camera movement, speed, and transitions set in the video template, to generate a video with the playback effect corresponding to the video template. This makes the generated video more expressive and intense in terms of the narration of the plot, the expression of the characters' thoughts and feelings, and the handling of character relationships, thereby enhancing the artistic appeal of the generated video.
  • the specific implementation of the video template is introduced by taking the parent-child type video template and the travel type video template as examples in conjunction with Table 1 and Table 2.
• In Table 1 and Table 2, the scene types are illustrated by taking the three types shown in FIG. 7, namely close shot, medium shot, and long shot, as examples.
• A represents the scene type corresponding to the close shot
• B represents the scene type corresponding to the medium shot
• C represents the scene type corresponding to the long shot.
  • the transition uses the effect of "white fading” and the effect of "fading in and out of title".
  • the transition uses the "white fade” effect.
  • the transition uses a "quick down” effect.
  • the transition adopts the effect of "up and down blurred oblique angle transition".
  • the transition uses a "stretch in” effect.
  • the transition adopts the effect of "left and right blur and push".
  • transitions use a “fast up” effect.
  • transitions use a “push up and focus blur/zoom behind the scenes” effect.
  • the transition uses the effect of "speed left”.
  • the transition adopts the effect of "rotation blur on the right axis”.
  • the transition uses a "rotate right” effect.
  • the transition uses the effect of "rotate to the left to blur”.
  • the transition adopts the effect of "quick left slide".
  • the transition uses the "perspective blur” effect.
  • the transition uses the effect of "blur and dissolve". For image materials, transitions do not use any effects.
  • the transition uses the effect of "speed left". For image materials, transitions do not use any effects.
  • the transition uses the effect of "fade out” and “rotate right”.
  • the transition adopts the effect of "whitening and fading out”.
  • the transition adopts the effect of "quick left slide".
  • transitions do not use any effects.
  • the transition uses a "blurred and dissolve" effect.
  • transitions do not use any effects.
  • the transition adopts the effect of "whitening and fading out”.
  • the transition adopts the effect of "whitening and fading out”.
  • the transition uses a “rotate left” effect.
  • the transition uses the “perspective blur” effect.
  • the transition uses a "left rotation" effect.
  • transitions do not use any effects.
  • the transition uses a "fast left” effect.
  • transitions do not use any effects.
  • transitions use a "fast left” effect. For image materials, transitions do not use any effects.
  • the transition uses a "stretch in" effect.
  • transitions do not use any effects.
  • the video template includes, but is not limited to, parameters related to scene type, camera movement, speed, and transition.
• the video template can also adaptively adjust the camera movement based on the frame of the material to achieve the best playback effect. For example, when generating a landscape (banner) video, the electronic device can use a top-to-bottom camera movement to achieve the maximum regionalized display effect for vertical material; when generating a vertical video, the electronic device can use a left-to-right camera movement to achieve the maximum regionalized display effect for banner material. Therefore, it is beneficial to display the materials in the video to the maximum extent, enrich the content of the video, and ensure the cinematic and pictorial sense brought by the video.
• each scene type in the video template corresponds to a segment, and the durations of the segments may be the same or different.
  • the electronic device may first place the material selected by the user based on the duration of each segment in the video template.
  • video footage can be placed in longer clips in preference to image footage.
• the electronic device then adjusts the arrangement order of the placed materials based on the scene type corresponding to each segment, so that the scene type of the material matches the scene type of the segment, thereby ensuring that each material selected by the user appears at least once in the generated video and that adjacent segments do not contain the same material.
• the electronic device can select one or more segments from the video materials to repeat N times in the generated video, where N is a positive integer greater than 1. If the required video length still cannot be reached, the electronic device may repeat all the arranged materials M times in the generated video, where M is a positive integer greater than 1.
  • the electronic device can usually set a minimum duration and a maximum duration for the duration of the music corresponding to the video template, so as to ensure that the material selected by the user appears at least once in the generated video.
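The fallback described above (when the arranged materials are still shorter than the required video length, the whole sequence is repeated M times) can be sketched as follows. A hypothetical helper; the name `repeat_factor` and the simple ceiling rule are illustrative assumptions, and the per-segment N-times repetition step is omitted for brevity.

```python
import math

def repeat_factor(material_durations, target_duration):
    """Return M, the number of times the whole arranged material sequence
    must be repeated so that its total duration covers the target video
    length (M = 1 when the materials are already long enough).
    """
    total = sum(material_durations)
    if total >= target_duration:
        return 1
    return math.ceil(target_duration / total)

# Two clips totalling 5 s cannot fill a 12 s video -> repeat everything 3 times.
print(repeat_factor([2.0, 3.0], 12.0))  # 3
print(repeat_factor([10.0], 5.0))       # 1
```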
  • the playback effect of the video 3039 in FIG. 3S is related to the video template.
  • the video 3039 will play differently depending on the video template.
• the video 3039 may include: video material 3031, picture material 3032, picture material 3033, video material 3034, picture material 3035, picture material 3036, picture material 3037, and video material 3038.
  • the scene types corresponding to each of the multiple continuous segments are introduced.
  • A represents the type of scene corresponding to the close shot
  • B represents the type of scene corresponding to the medium shot
  • C represents the type of scene corresponding to the long shot.
• the electronic device can identify that the scene type corresponding to the video material 3031 is BCBBB, the scene type of the picture material 3032 is B, the scene type of the picture material 3033 is B, the scene type corresponding to the video material 3034 is CCCC, the scene type of the picture material 3035 is B, the scene type of the picture material 3036 is A, the scene type of the picture material 3037 is A, and the scene type corresponding to the video material 3038 is BCCCC.
• Based on the video material 3031, the picture material 3032, the picture material 3033, the video material 3034, the picture material 3035, the picture material 3036, the picture material 3037, and the video material 3038, the electronic device recognizes that a parent-child type video template can be used to generate the video 3039.
• the electronic device then, based on the respective scene types of the video material 3031, picture material 3032, picture material 3033, video material 3034, picture material 3035, picture material 3036, picture material 3037, and video material 3038, and according to the scene type corresponding to each segment of the music given in Table 1, places these materials at the corresponding positions of the music, respectively, to obtain the video 3039.
  • the electronic device can set scene types of multiple continuous segments corresponding to music according to the video template to enhance the playback effect of the generated video, which is beneficial to enhance the video's sense of shots and movies.
  • the electronic device can set the scene type in the video template according to the actual situation and experience value, and the embodiment of the present application does not limit the setting method of the scene type in the video template.
  • FIGS. 8A-8E an example will be given to illustrate the playback effect of the video generated by the electronic device based on the material selected by the user.
  • FIG. 8A-FIG. 8E are schematic diagrams showing the playback sequence of each material when the electronic device plays the generated video.
• the electronic device can learn that, among all the materials, there is no material of scene type C with a duration of 2x, so that all the materials cannot exactly match the scene types in the video template. Since all the materials need to appear at least once, the electronic device can change the scene type sequence CCCBA in the video template to CBCBA.
  • the electronic device adjusts the arrangement order of picture material 11, picture material 12, picture material 13, picture material 14 and picture material 15 to generate the video exemplarily shown in FIG. 8A.
  • the scene types corresponding to the videos exemplarily shown in FIG. 8A are respectively CBCBA.
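The template adjustment illustrated above (substituting an available scene type when the materials cannot supply one the template asks for) can be sketched as a two-pass greedy pass over the template string. This is a hypothetical variant only: it keeps every template position the materials can satisfy and fills the gaps with leftover scene types, so the substituted position can differ from the CBCBA example in the text (here the leftover B lands at the first unsatisfiable slot).

```python
from collections import Counter

def adapt_template(template, available):
    """Adjust a template scene sequence to the scene types actually available.

    template:  string such as "CCCBA", one letter per segment.
    available: list of scene types the selected materials provide.
    """
    pool = Counter(available)
    out = []
    # First pass: keep every template letter the materials can supply.
    for want in template:
        if pool[want] > 0:
            pool[want] -= 1
            out.append(want)
        else:
            out.append(None)  # gap, to be filled with leftovers
    # Second pass: fill the gaps with whatever scene types remain.
    leftovers = [t for t, n in pool.items() for _ in range(n)]
    for i, slot in enumerate(out):
        if slot is None:
            out[i] = leftovers.pop(0)
    return "".join(out)

# Materials provide one C too few for CCCBA, so a leftover B substitutes.
print(adapt_template("CCCBA", ["C", "B", "C", "B", "A"]))  # CCBBA
# With a full supply, the template is kept unchanged.
print(adapt_template("CCCBA", ["C", "C", "C", "B", "A"]))  # CCCBA
```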
• Based on the scene types in the video template and the scene types of the picture material 21, picture material 22, picture material 23, picture material 24, and picture material 25, the electronic device can learn that all the materials can exactly match the scene types in the video template. Therefore, based on the scene type sequence CCCBA, the electronic device adjusts the arrangement order of the picture material 21, picture material 22, picture material 23, picture material 24, and picture material 25 to generate the video exemplarily shown in FIG. 8B.
  • the scene types corresponding to the videos exemplarily shown in FIG. 8B are respectively CCCBA.
  • the scene type of the video material 32 is C, the duration of the video material 32 is greater than or equal to 4x, the scene type of the picture material 32 is A, and the scene type of the picture material 33 is C.
• the electronic device can learn that, among all the materials, there is no material of scene type C with a duration of 2x, so that all the materials cannot exactly match the scene types in the video template. Since all the materials need to appear at least once, the electronic device can change the scene type sequence CCCBA in the video template to CBCBA.
  • the electronic device adjusts the arrangement order of picture material 31, video material 31, video material 32, picture material 32, and picture material 33 based on the scene type CBCBA to generate the video exemplarily shown in FIG. 8C.
• the playback order of the picture material 31, video material 31, video material 32, picture material 32, and picture material 33 in the generated video is shown in FIG. 8C.
  • the scene types corresponding to the videos exemplarily shown in FIG. 8C are respectively CBCBA.
  • the scene type of the video material 42 is C, the duration of the video material 42 is greater than or equal to 4x, the scene type of the picture material 42 is A, and the scene type of the picture material 43 is C.
• Based on the scene types in the video template and the scene types of the picture material 41, video material 41, video material 42, picture material 42, and picture material 43, the electronic device can learn that all the materials can exactly match the scene types in the video template. Therefore, the electronic device adjusts the arrangement order of the picture material 41, video material 41, video material 42, picture material 42, and picture material 43 based on the scene type sequence CCCBA to generate the video exemplarily shown in FIG. 8D.
• the playback order of the picture material 41, video material 41, video material 42, picture material 42, and picture material 43 in the generated video is shown in FIG. 8D.
  • the scene types corresponding to the videos exemplarily shown in FIG. 8D are respectively CCCBA.
  • the duration of the segment corresponding to scene type C in the video material 51 is equal to 2x, the duration of the segment corresponding to scene type B in the video material 51 is equal to x, the scene type of the video material 52 is C, the duration of the video material 52 is greater than or equal to 4x, and the other scene type is A.
  • based on these scene types, the electronic device can determine that all the materials exactly match the scene types in the video template. Therefore, the electronic device adjusts the arrangement order of the picture material 51, the video material 51, the video material 52 and the picture material 52 based on the scene type sequence CCCBA to generate the video exemplarily shown in FIG. 8E.
  • the playback order of the picture material 51, the video material 51, the video material 52 and the picture material 52 in the generated video follows the scene type sequence CCCBA of the video exemplarily shown in FIG. 8E.
  • in summary, the electronic device can consider factors such as the playback effect of the video, the duration of the video, the scene types in the video, the use of the materials, the number of materials, the scene types of the materials, and whether a material supports repeated use, and match the scene types of the materials to the scene types in the video template to a preset degree to generate the video. That is, the scene type sequence corresponding to the video generated by the electronic device is completely or partially the same as the scene type sequence in the video template.
  • the preset degree may be 100% (i.e., exact matching) or, for example, 90% (i.e., fuzzy matching); usually the preset degree is greater than or equal to 50%.
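The preset matching degree can be read as a simple ratio of matched template slots. A minimal sketch, with `match_degree` and `template_usable` as hypothetical helper names:

```python
def match_degree(template_scenes, assigned_scenes):
    """Fraction of template segments whose assigned material has the
    same scene type as the segment."""
    hits = sum(1 for t, a in zip(template_scenes, assigned_scenes) if t == a)
    return hits / len(template_scenes)

def template_usable(template_scenes, assigned_scenes, threshold=0.5):
    """A template is accepted when the matching degree reaches the
    preset threshold: 1.0 for exact matching, 0.5 as the usual lower bound."""
    return match_degree(template_scenes, assigned_scenes) >= threshold
```

For the earlier example, CCCBA against the adjusted sequence CBCBA gives a matching degree of 0.8, which clears the 50% lower bound and would count as a fuzzy match.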
  • the electronic device adjusts the arrangement order of the materials in the generated video based on the arrangement order of the scene types in the video template, and then combines the camera movement, speed and transition techniques in the video template to generate a video with coherent visuals and a high-quality feel.
  • thus, the video generation method of the embodiments of the present application enhances the video's shot-to-shot and cinematic feel, which helps improve the user experience.
  • the embodiments of the present application may provide a video generation method.
  • FIG. 9 shows a schematic diagram of a video generation method provided by an embodiment of the present application.
  • the video generation method of the embodiment of the present application may include:
  • the electronic device displays a first interface of a first application, where the first interface includes a first control and a second control.
  • after receiving the first operation acting on the first control, the electronic device determines that the arrangement order of the first material, the second material and the third material is the first order, and, in the first order, generates the first video from the first material, the second material and the third material.
  • after receiving the second operation acting on the second control, the electronic device determines that the arrangement order of the first material, the second material and the third material is the second order, the second order being different from the third order, and, in the second order, generates the second video from the first material, the second material and the third material.
  • the first material, the second material and the third material are different image materials stored in the electronic device, and the third order is the time order in which the first material, the second material and the third material were stored in the electronic device. The first order is also different from the third order.
  • the first control refers to any one of control 30811, control 30812, control 30813 and control 30814 exemplarily shown in FIG. 3F; the second control also refers to any one of control 30811, control 30812, control 30813 and control 30814 exemplarily shown in FIG. 3F, and the first control is different from the second control.
  • the first order and the second order may be the same or different, which is not limited in this embodiment of the present application.
  • the playback effects of the first video and the second video are different.
  • the first application is a gallery application of the electronic device.
  • the electronic device matches an appropriate video template by identifying the scene types of the materials, adjusts the arrangement order of the materials based on the scene type set for each segment in the video template, and, combined with the camera movement, speed and transition set for each segment, can automatically generate a video with coherent visuals and a high-quality feel without relying on the user's manual editing, which enhances the video's shot-to-shot and cinematic feel and improves the user experience.
  • the first video is divided into a plurality of segments with the beat points of the music as dividing lines; the first material, the second material and the third material each appear at least once in the first video, and the materials appearing in any two adjacent segments of the first video are different; the first material, the second material and the third material each appear at least once in the second video, and the materials appearing in any two adjacent segments of the second video are different.
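The two constraints above (every material appears at least once; no two adjacent beat-delimited segments reuse the same material) can be sketched as a simple round-robin assignment. This is an illustrative approach under assumptions; the patent does not specify this particular algorithm:

```python
def arrange(materials, num_segments):
    """Round-robin assignment of materials to beat-delimited segments:
    every material appears at least once and no two adjacent segments
    reuse the same material (requires at least two distinct materials)."""
    assert len(materials) >= 2 and num_segments >= len(materials)
    order = []
    i = 0
    while len(order) < num_segments:
        m = materials[i % len(materials)]
        if order and order[-1] == m:   # skip to avoid an adjacent repeat
            i += 1
            continue
        order.append(m)
        i += 1
    return order
```

For three materials over five beat-delimited segments, this yields an order in which each material appears at least once and adjacent segments always differ.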
  • the method further includes: the electronic device displays the second interface of the first application; after the electronic device receives the third operation acting on the second interface, it generates the first video from the first material, the second material and the third material.
  • the method further includes: the electronic device determines, from the first material, the second material, the third material and the fourth material, to generate the first video from the first material, the second material and the third material; wherein the fourth material is an image material stored in the electronic device that is different from the first material, the second material and the third material.
  • the first interface further includes a third control; the method further includes: after receiving a fourth operation acting on the third control, the electronic device displays a third interface, where the third interface includes options for configuration information, and the configuration information includes at least one parameter of duration, filter, frame, material or title; after receiving a fifth operation acting on an option of the configuration information, the electronic device generates, in the first order and based on the configuration information, a third video from the first material, the second material and the third material.
  • for the third interface, refer to the description of the user interface 21 exemplarily shown in FIG. 3G, or the description of the user interface 22 exemplarily shown in FIG. 3H, or the description of the user interface 23 exemplarily shown in FIG. 3I, or the description of the user interface 24 exemplarily shown in FIG. 3J; details are not repeated here.
  • the electronic device can adjust parameters of the video 1, such as the duration, the frame, whether to add new material, and whether to delete existing material, through the user interface 21 exemplarily shown in FIG. 3G.
  • the electronic device may adjust the music of the video 1 through the user interface 22 exemplarily shown in FIG. 3H .
  • the electronic device may adjust the filter of the video 1 through the user interface 23 exemplarily shown in FIG. 3I .
  • the electronic device can adjust whether to add a title to the video 1 through the user interface 24 exemplarily shown in FIG. 3J .
  • the first interface further includes a fourth control; the method further includes: after the electronic device generates the first video, in response to a fourth operation acting on the fourth control, the electronic device saves the first video.
  • for a specific implementation of the fourth control, reference may be made to the description of the control 309 exemplarily shown in FIG. 3F, which is not repeated here.
  • the method specifically includes: the electronic device determines the scene type corresponding to the first material, the scene type corresponding to the second material, and the scene type corresponding to the third material; based on the scene type corresponding to the first material, the scene type corresponding to the second material, the scene type corresponding to the third material, and the scene type set for each segment in the first video template, the electronic device determines the material that matches the scene type corresponding to the first segment, where the first segment is any segment in the first video template, and the arrangement order of the materials corresponding to all the segments in the first video template is the first order; based on the scene type corresponding to the first material, the scene type corresponding to the second material, the scene type corresponding to the third material, and the scene type set for each segment in the second video template, the electronic device determines the material that matches the scene type corresponding to the second segment, where the second segment is any segment in the second video template, and the arrangement order of the materials corresponding to all the segments in the second video template is the second order; wherein the first video template is different from the second video template, each segment in the first video corresponds to each segment in the first video template, and each segment in the second video corresponds to each segment in the second video template.
  • the method further includes: the electronic device generates the first video from the first material, the second material and the third material according to the first order and the camera movement effect, speed effect and transition effect set for each segment in the first video template; and the electronic device generates the second video from the first material, the second material and the third material according to the second order and the camera movement effect, speed effect and transition effect set for each segment in the second video template.
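Applying the per-segment camera movement, speed and transition settings of a template can be pictured as pairing each ordered material with its segment's presets. The data model below is hypothetical and only illustrates the pairing, not the patent's actual template format:

```python
# Hypothetical per-segment settings of a video template; field names and
# effect values are illustrative assumptions.
TEMPLATE = [
    {"scene": "C", "motion": "zoom_in",  "speed": 1.0, "transition": "fade"},
    {"scene": "B", "motion": "pan_left", "speed": 0.5, "transition": "cut"},
    {"scene": "A", "motion": "static",   "speed": 1.0, "transition": "dissolve"},
]

def render_plan(ordered_materials, template):
    """Pair each material, in its arranged order, with the camera movement,
    speed and transition preset for the corresponding template segment."""
    return [
        {"material": m, **seg}
        for m, seg in zip(ordered_materials, template)
    ]
```

A renderer would then walk this plan segment by segment, applying each segment's presets to its material.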
  • for the above-mentioned solution, the specific implementation of the camera movement effect can refer to the descriptions exemplarily shown in FIG. 4A to FIG. 4J, the specific implementation of the speed effect can refer to the description exemplarily shown in FIG. 5, and the specific implementation of the transition effect can refer to the description exemplarily shown in FIG. 6, which will not be repeated here.
  • when the first material is a picture material, the method specifically includes: when the scene type corresponding to the first material is the same as the scene type corresponding to the first segment, or the scene type corresponding to the first material is adjacent, in the ordering of the preset rule, to the scene type corresponding to the first segment, the electronic device determines the first material to be a material that matches the scene type corresponding to the first segment; when the scene type corresponding to the first material is the same as the scene type corresponding to the second segment, or the scene type corresponding to the first material is adjacent, in the ordering of the preset rule, to the scene type corresponding to the second segment, the electronic device determines the first material to be a material that matches the scene type corresponding to the second segment.
  • when the first material is a video material, the method specifically includes: when the scene type corresponding to the fourth material is the same as the scene type corresponding to the first segment, or the scene type corresponding to the fourth material is adjacent, in the ordering of the preset rule, to the scene type corresponding to the first segment, and the duration of the fourth material is equal to the duration of the first segment, the electronic device cuts the fourth material out of the first material and determines the fourth material to be a material that matches the scene type corresponding to the first segment; when the scene type corresponding to the fourth material is the same as the scene type corresponding to the second segment, or the scene type corresponding to the fourth material is adjacent, in the ordering of the preset rule, to the scene type corresponding to the second segment, and the duration of the fourth material is equal to the duration of the second segment, the electronic device cuts the fourth material out of the first material and determines the fourth material to be a material that matches the scene type corresponding to the second segment; wherein the fourth material is part or all of the first material.
  • in the ordering of the preset rule, the scene types include close-up, mid-range and distant; the scene type adjacent to the close-up is the distant scene, the scene types adjacent to the mid-range are the close-up and the distant scene, and the scene type adjacent to the distant scene is the close-up.
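The adjacency rule as stated above can be encoded as a lookup table. A minimal sketch; the letter-to-scene mapping (A = close-up, B = mid-range, C = distant) and the function name are assumptions for illustration:

```python
# Adjacency of scene types under the preset rule, as stated in the text.
ADJACENT = {
    "A": {"C"},        # close-up:  adjacent scene type is distant
    "B": {"A", "C"},   # mid-range: adjacent scene types are close-up and distant
    "C": {"A"},        # distant:   adjacent scene type is close-up
}

def matches(material_scene, segment_scene):
    """A material matches a segment if its scene type is identical to, or
    adjacent (per the preset rule) to, the segment's scene type."""
    return (material_scene == segment_scene
            or segment_scene in ADJACENT.get(material_scene, set()))
```

Under this encoding, a close-up material can fill a close-up or distant segment, while a mid-range material can fill any segment.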
  • the present application provides an electronic device, including: a memory and a processor; the memory is used for storing program instructions; the processor is used for calling the program instructions in the memory to make the electronic device execute the video generation method in the foregoing embodiments.
  • the present application provides a chip system, which is applied to an electronic device including a memory, a display screen and a sensor; the chip system includes a processor; when the processor executes the computer instructions stored in the memory, the electronic device executes the video generation method in the foregoing embodiments.
  • the present application provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, an electronic device is caused to implement the video generation method in the foregoing embodiments.
  • the present application provides a computer program product, including execution instructions stored in a readable storage medium; at least one processor of an electronic device can read the execution instructions from the readable storage medium, and the at least one processor executes the execution instructions so that the electronic device implements the video generation method in the foregoing embodiments.
  • all or part of the functions may be implemented by software, hardware, or a combination of software and hardware.
  • the computer program product includes one or more computer instructions.
  • the computer may be a general purpose computer, special purpose computer, computer network, or other programmable device.
  • the computer instructions may be stored in a computer-readable storage medium.
  • the computer-readable storage medium may be any available medium that can be accessed by a computer or a data storage device such as a server, data center, etc. that includes an integration of one or more available media.
  • the usable media may be magnetic media (e.g., floppy disks, hard disks, magnetic tapes), optical media (e.g., DVDs), semiconductor media (e.g., solid state disks (SSDs)), and the like.
  • the processes can be completed by a computer program instructing the relevant hardware, and the program can be stored in a computer-readable storage medium. When the program is executed, it may include the processes of the foregoing method embodiments. The aforementioned storage medium includes: a ROM, a random access memory (RAM), a magnetic disk, an optical disk, or other media that can store program code.


Abstract

Embodiments of the present application provide a video generation method and an electronic device. The method comprises: an electronic device displays a first interface of a first application; after receiving a first operation performed on a first widget, the electronic device determines an arrangement sequence of a first material, a second material and a third material to be a first sequence, which is different from a third sequence, and generates a first video from the first material, the second material and the third material according to the first sequence; after receiving a second operation performed on a second widget, the electronic device determines the arrangement sequence of the first material, the second material and the third material to be a second sequence, which is different from the third sequence, and generates a second video from the first material, the second material and the third material according to the second sequence. The third sequence is the time sequence in which the first material, the second material and the third material are stored in the electronic device. The generated video has coherent pictures and high quality.

Description

Video Generation Method and Electronic Device

This application claims priority to Chinese patent application No. 202011057180.9, entitled "Video Generation Method and Electronic Device", filed with the China Patent Office on September 29, 2020, the entire contents of which are incorporated herein by reference.

Technical Field

The embodiments of the present application relate to the field of electronic technologies, and in particular, to a video generation method and an electronic device.

Background

With the popularity of short videos, users' demand for quickly generating videos on electronic devices such as mobile phones is growing day by day. Currently, videos generated by electronic devices have poor visual coherence and a low sense of quality, and cannot meet users' high demands for cinematic, film-like videos. Therefore, a method capable of generating videos with coherent visuals and a high-quality feel is urgently needed.

Summary of the Invention

The present application provides a video generation method and an electronic device, so that a video can be generated conveniently and quickly; the generated video has coherent visuals and a high-quality feel, which enhances the video's shot-to-shot and cinematic feel and improves the user experience.
In a first aspect, the present application provides a video generation method, including: an electronic device displays a first interface of a first application, where the first interface includes a first control and a second control; after receiving a first operation acting on the first control, the electronic device determines that the arrangement order of a first material, a second material and a third material is a first order, the first order being different from a third order, and generates a first video from the first material, the second material and the third material in the first order; after receiving a second operation acting on the second control, the electronic device determines that the arrangement order of the first material, the second material and the third material is a second order, the second order being different from the third order, and generates a second video from the first material, the second material and the third material in the second order. The first material, the second material and the third material are different image materials stored in the electronic device, and the third order is the time order in which the first material, the second material and the third material were stored in the electronic device.
With the method provided in the first aspect, by identifying the scene types of the materials, matching an appropriate video template, adjusting the arrangement order of the materials based on the scene type set for each segment in the video template, and combining the camera movement, speed and transition set for each segment in the video template, a video with coherent visuals and a high-quality feel can be generated automatically without relying on the user's manual editing, which enhances the video's shot-to-shot and cinematic feel and improves the user experience.

In a possible design, the first video is divided into a plurality of segments with the beat points of the music as dividing lines; the first material, the second material and the third material each appear at least once in the first video, and the materials appearing in any two adjacent segments of the first video are different; the first material, the second material and the third material each appear at least once in the second video, and the materials appearing in any two adjacent segments of the second video are different. This ensures that the generated video has a professional shot-to-shot and cinematic feel.

In a possible design, the method further includes: the electronic device displays a second interface of the first application; after receiving a third operation acting on the second interface, the electronic device generates the first video from the first material, the second material and the third material. Thus, the electronic device can generate a video with coherent visuals and a high-quality feel based on the materials selected by the user.

In a possible design, the method further includes: the electronic device determines, from the first material, the second material, the third material and the fourth material, to generate the first video from the first material, the second material and the third material; the fourth material is an image material stored in the electronic device that is different from the first material, the second material and the third material. Thus, the electronic device can automatically generate a video based on stored materials to meet the user's immediate needs.

In a possible design, the first interface further includes a third control; the method further includes: after receiving a fourth operation acting on the third control, the electronic device displays a third interface, where the third interface includes options for configuration information, and the configuration information includes at least one parameter of duration, filter, frame, material or title; after receiving a fifth operation acting on an option of the configuration information, the electronic device generates, in the first order and based on the configuration information, a third video from the first material, the second material and the third material. Thus, the variety of videos is enriched, and the user's need to adjust various parameters of the video is met.

In a possible design, the first interface further includes a fourth control; the method further includes: after generating the first video, the electronic device saves the first video in response to a fourth operation acting on the fourth control. Thus, it is convenient for the user to subsequently watch and edit the generated video.
In a possible design, the method specifically includes: the electronic device determines the scene type corresponding to the first material, the scene type corresponding to the second material, and the scene type corresponding to the third material; based on the scene type corresponding to the first material, the scene type corresponding to the second material, the scene type corresponding to the third material, and the scene type set for each segment in the first video template, the electronic device determines the material that matches the scene type corresponding to the first segment, where the first segment is any segment in the first video template, and the arrangement order of the materials corresponding to all the segments in the first video template is the first order; based on the scene type corresponding to the first material, the scene type corresponding to the second material, the scene type corresponding to the third material, and the scene type set for each segment in the second video template, the electronic device determines the material that matches the scene type corresponding to the second segment, where the second segment is any segment in the second video template, and the arrangement order of the materials corresponding to all the segments in the second video template is the second order; the first video template is different from the second video template, each segment in the first video corresponds to each segment in the first video template, and each segment in the second video corresponds to each segment in the second video template.
In a possible design, the method further includes: the electronic device generates the first video from the first material, the second material and the third material according to the first order and the camera movement effect, speed effect and transition effect set for each segment in the first video template; and the electronic device generates the second video from the first material, the second material and the third material according to the second order and the camera movement effect, speed effect and transition effect set for each segment in the second video template.
In a possible design, when the first material is a picture material, the method specifically includes: when the scene type corresponding to the first material is the same as the scene type corresponding to the first segment, or the scene type corresponding to the first material is adjacent, in the ordering of the preset rule, to the scene type corresponding to the first segment, the electronic device determines the first material to be a material that matches the scene type corresponding to the first segment; when the scene type corresponding to the first material is the same as the scene type corresponding to the second segment, or the scene type corresponding to the first material is adjacent, in the ordering of the preset rule, to the scene type corresponding to the second segment, the electronic device determines the first material to be a material that matches the scene type corresponding to the second segment.
In a possible design, when the first material is a video material, the method specifically includes: when the scene type corresponding to the fourth material is the same as the scene type corresponding to the first segment, or the scene type corresponding to the fourth material is adjacent, in the ordering of the preset rule, to the scene type corresponding to the first segment, and the duration of the fourth material is equal to the duration of the first segment, the electronic device cuts the fourth material out of the first material and determines the fourth material to be a material that matches the scene type corresponding to the first segment; when the scene type corresponding to the fourth material is the same as the scene type corresponding to the second segment, or the scene type corresponding to the fourth material is adjacent, in the ordering of the preset rule, to the scene type corresponding to the second segment, and the duration of the fourth material is equal to the duration of the second segment, the electronic device cuts the fourth material out of the first material and determines the fourth material to be a material that matches the scene type corresponding to the second segment; the fourth material is part or all of the first material.
In a possible design, in the ordering of the preset rule, the scene types include close-up, mid-range and distant; the scene type adjacent to the close-up is the distant scene, the scene types adjacent to the mid-range are the close-up and the distant scene, and the scene type adjacent to the distant scene is the close-up.
In a possible design, the first application is a gallery application of the electronic device.

In a second aspect, the present application provides an electronic device, including a memory and a processor; the memory is configured to store program instructions; the processor is configured to call the program instructions in the memory so that the electronic device executes the video generation method in the first aspect or any possible design of the first aspect.

In a third aspect, the present application provides a chip system, which is applied to an electronic device including a memory, a display screen and a sensor; the chip system includes a processor; when the processor executes computer instructions stored in the memory, the electronic device executes the video generation method in the first aspect or any possible design of the first aspect.

In a fourth aspect, the present application provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, an electronic device is caused to implement the video generation method in the first aspect or any possible design of the first aspect.

In a fifth aspect, the present application provides a computer program product, including execution instructions stored in a readable storage medium; at least one processor of an electronic device can read the execution instructions from the readable storage medium, and the at least one processor executes the execution instructions so that the electronic device implements the video generation method in the first aspect or any possible design of the first aspect.
附图说明Description of drawings
图1为本申请一实施例提供的一种电子设备的结构示意图;FIG. 1 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
图2为本申请一实施例提供的一种电子设备的软件结构框图；FIG. 2 is a block diagram of a software structure of an electronic device provided by an embodiment of the present application;
图3A-图3T为本申请一实施例提供的人机交互界面示意图；FIG. 3A-FIG. 3T are schematic diagrams of human-computer interaction interfaces provided by an embodiment of the present application;
图4A-图4J为本申请一实施例提供的一张图片素材采用运镜的效果示意图；FIG. 4A-FIG. 4J are schematic diagrams of the effect of applying camera movement to a picture material provided by an embodiment of the present application;
图5为本申请一实施例提供的一张图片素材采用不同速度的效果示意图；FIG. 5 is a schematic diagram of the effect of applying different speeds to a picture material provided by an embodiment of the present application;
图6为本申请一实施例提供的一张图片素材采用转场的效果示意图；FIG. 6 is a schematic diagram of the effect of applying a transition to a picture material provided by an embodiment of the present application;
图7为本申请一实施例提供的人物类型的素材的景别类型的示意图；FIG. 7 is a schematic diagram of scene types of a character-type material provided by an embodiment of the present application;
图8A-图8E为本申请一实施例提供的基于素材而生成的视频的播放示意图；FIG. 8A-FIG. 8E are schematic diagrams of playing a video generated based on materials provided by an embodiment of the present application;
图9为本申请一实施例提供的一种视频生成方法的示意图。FIG. 9 is a schematic diagram of a video generation method provided by an embodiment of the present application.
具体实施方式Detailed Description of Embodiments
本申请实施例中，“至少一个”是指一个或者多个，“多个”是指两个或两个以上。“和/或”，描述关联对象的关联关系，表示可以存在三种关系，例如，A和/或B，可以表示：单独存在A，同时存在A和B，单独存在B的情况，其中A，B可以是单数或者复数。字符“/”一般表示前后关联对象是一种“或”的关系。“以下至少一项(个)”或其类似表达，是指的这些项中的任意组合，包括单项(个)或复数项(个)的任意组合。例如，单独a、单独b或单独c中的至少一项(个)，可以表示：单独a，单独b，单独c，组合a和b，组合a和c，组合b和c，或组合a、b和c，其中a，b，c可以是单个，也可以是多个。此外，术语“第一”、“第二”仅用于描述目的，而不能理解为指示或暗示相对重要性。In the embodiments of the present application, "at least one" refers to one or more, and "multiple" refers to two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist. For example, A and/or B may indicate: A exists alone, both A and B exist, or B exists alone, where A and B may be singular or plural. The character "/" generally indicates an "or" relationship between the associated objects. "At least one of the following items" or a similar expression refers to any combination of these items, including any combination of a single item or multiple items. For example, "at least one of a, b, or c" may indicate: a alone; b alone; c alone; a and b; a and c; b and c; or a, b, and c, where each of a, b, and c may be single or multiple. Furthermore, the terms "first" and "second" are used for descriptive purposes only and should not be construed as indicating or implying relative importance.
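As a concrete illustration of the "at least one of" convention defined above, the following Python sketch (illustrative only, not part of the claimed method) enumerates the seven combinations listed for three items a, b and c:

```python
from itertools import combinations

def at_least_one_of(items):
    """Enumerate every non-empty combination of the given items,
    mirroring the seven cases listed for "at least one of a, b, c"."""
    result = []
    for size in range(1, len(items) + 1):
        result.extend(combinations(items, size))
    return result

combos = at_least_one_of(["a", "b", "c"])
# Seven cases: a; b; c; a+b; a+c; b+c; a+b+c
```

For three items this yields exactly the seven cases enumerated in the text; each of a, b, c may itself stand for one element or several.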
请参阅图1，图1为本申请一实施例提供的一种电子设备的结构示意图。如图1所示，电子设备100可以包括处理器110，外部存储器接口120，内部存储器121，通用串行总线(universal serial bus，USB)接口130，充电管理模块140，电源管理模块141，电池142，天线1，天线2，移动通信模块150，无线通信模块160，音频模块170，扬声器170A，受话器170B，麦克风170C，耳机接口170D，传感器模块180，按键190，马达191，指示器192，摄像头193，显示屏194，以及用户标识模块(subscriber identification module，SIM)卡接口195等。其中传感器模块180可以包括压力传感器180A，陀螺仪传感器180B，气压传感器180C，磁传感器180D，加速度传感器180E，距离传感器180F，接近光传感器180G，指纹传感器180H，温度传感器180J，触摸传感器180K，环境光传感器180L，骨传导传感器180M等。Please refer to FIG. 1, which is a schematic structural diagram of an electronic device according to an embodiment of the present application. As shown in FIG. 1, the electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, a headphone interface 170D, a sensor module 180, a key 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a subscriber identification module (SIM) card interface 195, and the like. The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
可以理解的是,本申请示意的结构并不构成对电子设备100的具体限定。在另一些实施例中,电子设备100可以包括比图示更多或更少的部件,或者组合某些部件,或者拆分某些部件,或者不同的部件布置。图示的部件可以以硬件,软件或软件和硬件的组合实现。It can be understood that the structures illustrated in this application do not constitute a specific limitation on the electronic device 100 . In other embodiments, the electronic device 100 may include more or fewer components than shown, or some components may be combined, or some components may be split, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
处理器110可以包括一个或多个处理单元，例如：处理器110可以包括应用处理器(application processor，AP)，调制解调处理器，图形处理器(graphics processing unit，GPU)，图像信号处理器(image signal processor，ISP)，控制器，存储器，视频编解码器，数字信号处理器(digital signal processor，DSP)，基带处理器，和/或神经网络处理器(neural-network processing unit，NPU)等。其中，不同的处理单元可以是独立的器件，也可以集成在一个或多个处理器中。The processor 110 may include one or more processing units. For example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), and the like. Different processing units may be independent devices, or may be integrated in one or more processors.
其中,控制器可以是电子设备100的神经中枢和指挥中心。控制器可以根据指令操作码和时序信号,产生操作控制信号,完成取指令和执行指令的控制。The controller may be the nerve center and command center of the electronic device 100 . The controller can generate an operation control signal according to the instruction operation code and timing signal, and complete the control of fetching and executing instructions.
处理器110中还可以设置存储器,用于存储指令和数据。在一些实施例中,处理器110中的存储器为高速缓冲存储器。该存储器可以保存处理器110刚用过或循环使用的指令或数据。如果处理器110需要再次使用该指令或数据,可从所述存储器中直接调用。避免了重复存取,减少了处理器110的等待时间,因而提高了系统的效率。A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in processor 110 is cache memory. This memory may hold instructions or data that have just been used or recycled by the processor 110 . If the processor 110 needs to use the instruction or data again, it can be called directly from the memory. Repeated accesses are avoided and the latency of the processor 110 is reduced, thereby increasing the efficiency of the system.
在一些实施例中，处理器110可以包括一个或多个接口。接口可以包括集成电路(inter-integrated circuit，I2C)接口，集成电路内置音频(inter-integrated circuit sound，I2S)接口，脉冲编码调制(pulse code modulation，PCM)接口，通用异步收发传输器(universal asynchronous receiver/transmitter，UART)接口，移动产业处理器接口(mobile industry processor interface，MIPI)，通用输入输出(general-purpose input/output，GPIO)接口，用户标识模块(subscriber identity module，SIM)接口，和/或通用串行总线(universal serial bus，USB)接口等。In some embodiments, the processor 110 may include one or more interfaces. The interfaces may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, and the like.
I2C接口是一种双向同步串行总线，包括一根串行数据线(serial data line，SDA)和一根串行时钟线(serial clock line，SCL)。在一些实施例中，处理器110可以包含多组I2C总线。处理器110可以通过不同的I2C总线接口分别耦合触摸传感器180K，充电器，闪光灯，摄像头193等。例如：处理器110可以通过I2C接口耦合触摸传感器180K，使处理器110与触摸传感器180K通过I2C总线接口通信，实现电子设备100的触摸功能。The I2C interface is a bidirectional synchronous serial bus that includes a serial data line (SDA) and a serial clock line (SCL). In some embodiments, the processor 110 may contain multiple sets of I2C buses. The processor 110 can be respectively coupled to the touch sensor 180K, the charger, the flash, the camera 193, etc. through different I2C bus interfaces. For example, the processor 110 may be coupled to the touch sensor 180K through the I2C interface, so that the processor 110 and the touch sensor 180K communicate through the I2C bus interface, thereby implementing the touch function of the electronic device 100.
I2S接口可以用于音频通信。在一些实施例中,处理器110可以包含多组I2S总线。处理器110可以通过I2S总线与音频模块170耦合,实现处理器110与音频模块170之间的通信。在一些实施例中,音频模块170可以通过I2S接口向无线通信模块160传递音频信号,实现通过蓝牙耳机接听电话的功能。The I2S interface can be used for audio communication. In some embodiments, the processor 110 may contain multiple sets of I2S buses. The processor 110 may be coupled with the audio module 170 through an I2S bus to implement communication between the processor 110 and the audio module 170 . In some embodiments, the audio module 170 can transmit audio signals to the wireless communication module 160 through the I2S interface, so as to realize the function of answering calls through a Bluetooth headset.
PCM接口也可以用于音频通信,将模拟信号抽样,量化和编码。在一些实施例中,音频模块170与无线通信模块160可以通过PCM总线接口耦合。在一些实施例中,音频模块170也可以通过PCM接口向无线通信模块160传递音频信号,实现通过蓝牙耳机接听电话的功能。所述I2S接口和所述PCM接口都可以用于音频通信。The PCM interface can also be used for audio communications, sampling, quantizing and encoding analog signals. In some embodiments, the audio module 170 and the wireless communication module 160 may be coupled through a PCM bus interface. In some embodiments, the audio module 170 can also transmit audio signals to the wireless communication module 160 through the PCM interface, so as to realize the function of answering calls through the Bluetooth headset. Both the I2S interface and the PCM interface can be used for audio communication.
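The sampling/quantization/encoding step that the PCM description above refers to can be sketched as follows; the 8 kHz sample rate and 8-bit depth are illustrative assumptions, not values taken from the application:

```python
import math

def pcm_encode(duration_s, freq_hz, sample_rate=8000, bits=8):
    """Sample an analog tone at the given rate and quantize each sample
    to a signed integer PCM code of the given bit depth."""
    max_code = 2 ** (bits - 1) - 1  # e.g. 127 for 8-bit PCM
    n_samples = int(duration_s * sample_rate)
    codes = []
    for n in range(n_samples):
        analog = math.sin(2 * math.pi * freq_hz * n / sample_rate)
        codes.append(round(analog * max_code))  # quantization step
    return codes

codes = pcm_encode(0.01, 440)  # 10 ms of a 440 Hz tone -> 80 samples
```

Real PCM interfaces would shift these codes out serially with clock and frame-sync lines; only the sampling and quantization are shown here.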
UART接口是一种通用串行数据总线,用于异步通信。该总线可以为双向通信总线。它将要传输的数据在串行通信与并行通信之间转换。在一些实施例中,UART接口通常被用于连接处理器110与无线通信模块160。例如:处理器110通过UART接口与无线通信模块160中的蓝牙模块通信,实现蓝牙功能。在一些实施例中,音频模块170可以通过UART接口向无线通信模块160传递音频信号,实现通过蓝牙耳机播放音乐的功能。The UART interface is a universal serial data bus used for asynchronous communication. The bus may be a bidirectional communication bus. It converts the data to be transmitted between serial communication and parallel communication. In some embodiments, a UART interface is typically used to connect the processor 110 with the wireless communication module 160 . For example, the processor 110 communicates with the Bluetooth module in the wireless communication module 160 through the UART interface to implement the Bluetooth function. In some embodiments, the audio module 170 can transmit audio signals to the wireless communication module 160 through the UART interface, so as to realize the function of playing music through the Bluetooth headset.
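The parallel-to-serial conversion performed by a UART can be sketched as framing each byte with a start bit and a stop bit before shifting it out; the common 8N1 framing (8 data bits, no parity, 1 stop bit) is an illustrative assumption:

```python
def uart_frame(byte):
    """Convert one parallel byte into a serial 8N1 bit sequence:
    start bit (0), eight data bits LSB-first, stop bit (1)."""
    assert 0 <= byte <= 0xFF
    data_bits = [(byte >> i) & 1 for i in range(8)]  # LSB shifted out first
    return [0] + data_bits + [1]

frame = uart_frame(0x41)  # ASCII 'A' = 0b01000001
```

The receiving UART reverses the process, detecting the start bit and reassembling the eight data bits into a parallel byte.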
MIPI接口可以被用于连接处理器110与显示屏194,摄像头193等外围器件。MIPI接口包括摄像头串行接口(camera serial interface,CSI),显示屏串行接口(display serial interface,DSI)等。在一些实施例中,处理器110和摄像头193通过CSI接口通信,实现电子设备100的拍摄功能。处理器110和显示屏194通过DSI接口通信,实现电子设备100的显示功能。The MIPI interface can be used to connect the processor 110 with peripheral devices such as the display screen 194 and the camera 193 . MIPI interfaces include camera serial interface (CSI), display serial interface (DSI), etc. In some embodiments, the processor 110 communicates with the camera 193 through a CSI interface, so as to realize the photographing function of the electronic device 100 . The processor 110 communicates with the display screen 194 through the DSI interface to implement the display function of the electronic device 100 .
GPIO接口可以通过软件配置。GPIO接口可以被配置为控制信号,也可被配置为数据信号。在一些实施例中,GPIO接口可以用于连接处理器110与摄像头193,显示屏194,无线通信模块160,音频模块170,传感器模块180等。GPIO接口还可以被配置为I2C接口,I2S接口,UART接口,MIPI接口等。The GPIO interface can be configured by software. The GPIO interface can be configured as a control signal or as a data signal. In some embodiments, the GPIO interface may be used to connect the processor 110 with the camera 193, the display screen 194, the wireless communication module 160, the audio module 170, the sensor module 180, and the like. The GPIO interface can also be configured as I2C interface, I2S interface, UART interface, MIPI interface, etc.
USB接口130是符合USB标准规范的接口,具体可以是Mini USB接口,Micro USB接口,USB Type C接口等。USB接口130可以用于连接充电器为电子设备100充电,也可以用于电子设备100与外围设备之间传输数据。也可以用于连接耳机,通过耳机播放音频。该接口还可以用于连接其他电子设备,例如AR设备等。The USB interface 130 is an interface that conforms to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type C interface, and the like. The USB interface 130 can be used to connect a charger to charge the electronic device 100, and can also be used to transmit data between the electronic device 100 and peripheral devices. It can also be used to connect headphones to play audio through the headphones. The interface can also be used to connect other electronic devices, such as AR devices.
可以理解的是,本申请示意的各模块间的接口连接关系,只是示意性说明,并不构成对电子设备100的结构限定。在另一些实施例中,电子设备100也可以采用上述实施例中不同的接口连接方式,或多种接口连接方式的组合。It can be understood that the interface connection relationship between the modules illustrated in this application is only a schematic illustration, and does not constitute a structural limitation of the electronic device 100 . In other embodiments, the electronic device 100 may also adopt different interface connection manners in the foregoing embodiments, or a combination of multiple interface connection manners.
充电管理模块140用于从充电器接收充电输入。其中,充电器可以是无线充电器,也可以是有线充电器。在一些有线充电的实施例中,充电管理模块140可以通过USB接口130接收有线充电器的充电输入。在一些无线充电的实施例中,充电管理模块140可以通过电子设备100的无线充电线圈接收无线充电输入。充电管理模块140为电池142充电的同时,还可以通过电源管理模块141为电子设备供电。The charging management module 140 is used to receive charging input from the charger. The charger may be a wireless charger or a wired charger. In some wired charging embodiments, the charging management module 140 may receive charging input from the wired charger through the USB interface 130 . In some wireless charging embodiments, the charging management module 140 may receive wireless charging input through a wireless charging coil of the electronic device 100 . While the charging management module 140 charges the battery 142 , it can also supply power to the electronic device through the power management module 141 .
电源管理模块141用于连接电池142，充电管理模块140与处理器110。电源管理模块141接收电池142和/或充电管理模块140的输入，为处理器110，内部存储器121，外部存储器，显示屏194，摄像头193，和无线通信模块160等供电。电源管理模块141还可以用于监测电池容量，电池循环次数，电池健康状态(漏电，阻抗)等参数。在其他一些实施例中，电源管理模块141也可以设置于处理器110中。在另一些实施例中，电源管理模块141和充电管理模块140也可以设置于同一个器件中。The power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charging management module 140, and supplies power to the processor 110, the internal memory 121, the external memory, the display screen 194, the camera 193, the wireless communication module 160, and the like. The power management module 141 can also be used to monitor parameters such as battery capacity, battery cycle count, and battery health status (leakage, impedance). In some other embodiments, the power management module 141 may also be provided in the processor 110. In other embodiments, the power management module 141 and the charging management module 140 may also be provided in the same device.
电子设备100的无线通信功能可以通过天线1,天线2,移动通信模块150,无线通信模块160,调制解调处理器以及基带处理器等实现。The wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modulation and demodulation processor, the baseband processor, and the like.
天线1和天线2用于发射和接收电磁波信号。电子设备100中的每个天线可用于覆盖单个或多个通信频带。不同的天线还可以复用,以提高天线的利用率。例如:可以将天线1复用为无线局域网的分集天线。在另外一些实施例中,天线可以和调谐开关结合使用。 Antenna 1 and Antenna 2 are used to transmit and receive electromagnetic wave signals. Each antenna in electronic device 100 may be used to cover a single or multiple communication frequency bands. Different antennas can also be reused to improve antenna utilization. For example, the antenna 1 can be multiplexed as a diversity antenna of the wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
移动通信模块150可以提供应用在电子设备100上的包括2G/3G/4G/5G等无线通信的解决方案。移动通信模块150可以包括至少一个滤波器,开关,功率放大器,低噪声放大器(low noise amplifier,LNA)等。移动通信模块150可以由天线1接收电磁波,并对接收的电磁波进行滤波,放大等处理,传送至调制解调处理器进行解调。移动通信模块150还可以对经调制解调处理器调制后的信号放大,经天线1转为电磁波辐射出去。在一些实施例中,移动通信模块150的至少部分功能模块可以被设置于处理器110中。在一些实施例中,移动通信模块150的至少部分功能模块可以与处理器110的至少部分模块被设置在同一个器件中。The mobile communication module 150 may provide wireless communication solutions including 2G/3G/4G/5G etc. applied on the electronic device 100 . The mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (LNA) and the like. The mobile communication module 150 can receive electromagnetic waves from the antenna 1, filter and amplify the received electromagnetic waves, and transmit them to the modulation and demodulation processor for demodulation. The mobile communication module 150 can also amplify the signal modulated by the modulation and demodulation processor, and then turn it into an electromagnetic wave for radiation through the antenna 1 . In some embodiments, at least part of the functional modules of the mobile communication module 150 may be provided in the processor 110 . In some embodiments, at least part of the functional modules of the mobile communication module 150 may be provided in the same device as at least part of the modules of the processor 110 .
调制解调处理器可以包括调制器和解调器。其中,调制器用于将待发送的低频基带信号调制成中高频信号。解调器用于将接收的电磁波信号解调为低频基带信号。随后解调器将解调得到的低频基带信号传送至基带处理器处理。低频基带信号经基带处理器处理后,被传递给应用处理器。应用处理器通过音频设备(不限于扬声器170A,受话器170B等)输出声音信号,或通过显示屏194显示图像或视频。在一些实施例中,调制解调处理器可以是独立的器件。在另一些实施例中,调制解调处理器可以独立于处理器110,与移动通信模块150或其他功能模块设置在同一个器件中。The modem processor may include a modulator and a demodulator. Wherein, the modulator is used to modulate the low frequency baseband signal to be sent into a medium and high frequency signal. The demodulator is used to demodulate the received electromagnetic wave signal into a low frequency baseband signal. Then the demodulator transmits the demodulated low-frequency baseband signal to the baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and passed to the application processor. The application processor outputs sound signals through audio devices (not limited to the speaker 170A, the receiver 170B, etc.), or displays images or videos through the display screen 194 . In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be independent of the processor 110, and may be provided in the same device as the mobile communication module 150 or other functional modules.
无线通信模块160可以提供应用在电子设备100上的包括无线局域网(wireless local area networks，WLAN)(如无线保真(wireless fidelity，Wi-Fi)网络)，蓝牙(bluetooth，BT)，全球导航卫星系统(global navigation satellite system，GNSS)，调频(frequency modulation，FM)，近距离无线通信技术(near field communication，NFC)，红外技术(infrared，IR)等无线通信的解决方案。无线通信模块160可以是集成至少一个通信处理模块的一个或多个器件。无线通信模块160经由天线2接收电磁波，将电磁波信号调频以及滤波处理，将处理后的信号发送到处理器110。无线通信模块160还可以从处理器110接收待发送的信号，对其进行调频，放大，经天线2转为电磁波辐射出去。The wireless communication module 160 may provide solutions for wireless communication applied on the electronic device 100, including wireless local area network (WLAN) (such as a wireless fidelity (Wi-Fi) network), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR) technology, and the like. The wireless communication module 160 may be one or more devices integrating at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, performs frequency modulation and filtering on the electromagnetic wave signals, and sends the processed signals to the processor 110. The wireless communication module 160 can also receive a signal to be sent from the processor 110, perform frequency modulation and amplification on it, and convert it into electromagnetic waves for radiation through the antenna 2.
在一些实施例中,电子设备100的天线1和移动通信模块150耦合,天线2和无线通信模块160耦合,使得电子设备100可以通过无线通信技术与网络以及其他设备通信。所述无线通信技术可以包括全球移动通讯系统(global system for mobile communications,GSM),通用分组无线服务(general packet radio service,GPRS),码分多址接入(code division multiple access,CDMA),宽带码分多址(wideband code division multiple access,WCDMA),时分码分多址(time-division code division multiple access,TD-SCDMA),长期演进(long term evolution,LTE),BT,GNSS,WLAN,NFC,FM,和/或IR技术等。所述GNSS可以包括全球卫星定位系统(global positioning system,GPS),全球导航卫星系统(global navigation  satellite system,GLONASS),北斗卫星导航系统(beidou navigation satellite system,BDS),准天顶卫星系统(quasi-zenith satellite system,QZSS)和/或星基增强系统(satellite based augmentation systems,SBAS)。In some embodiments, the antenna 1 of the electronic device 100 is coupled with the mobile communication module 150, and the antenna 2 is coupled with the wireless communication module 160, so that the electronic device 100 can communicate with the network and other devices through wireless communication technology. The wireless communication technology may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), broadband Code Division Multiple Access (WCDMA), Time Division Code Division Multiple Access (TD-SCDMA), Long Term Evolution (LTE), BT, GNSS, WLAN, NFC , FM, and/or IR technology, etc. The GNSS may include a global positioning system (global positioning system, GPS), a global navigation satellite system (GLONASS), a Beidou navigation satellite system (BDS), a quasi-zenith satellite system (quasi -zenith satellite system, QZSS) and/or satellite based augmentation systems (SBAS).
电子设备100通过GPU,显示屏194,以及应用处理器等实现显示功能。GPU为图像处理的微处理器,连接显示屏194和应用处理器。GPU用于执行数学和几何计算,用于图形渲染。处理器110可包括一个或多个GPU,其执行程序指令以生成或改变显示信息。The electronic device 100 implements a display function through a GPU, a display screen 194, an application processor, and the like. The GPU is a microprocessor for image processing, and is connected to the display screen 194 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
显示屏194用于显示图像，视频等。显示屏194包括显示面板。显示面板可以采用液晶显示屏(liquid crystal display，LCD)，有机发光二极管(organic light-emitting diode，OLED)，有源矩阵有机发光二极体或主动矩阵有机发光二极体(active-matrix organic light emitting diode，AMOLED)，柔性发光二极管(flex light-emitting diode，FLED)，Miniled，MicroLed，Micro-oLed，量子点发光二极管(quantum dot light emitting diodes，QLED)等。在一些实施例中，电子设备100可以包括1个或N个显示屏194，N为大于1的正整数。The display screen 194 is used to display images, videos, and the like. The display screen 194 includes a display panel. The display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Miniled, a MicroLed, a Micro-oLed, quantum dot light emitting diodes (QLED), or the like. In some embodiments, the electronic device 100 may include 1 or N display screens 194, where N is a positive integer greater than 1.
电子设备100可以通过ISP,摄像头193,视频编解码器,GPU,显示屏194以及应用处理器等实现拍摄功能。The electronic device 100 may implement a shooting function through an ISP, a camera 193, a video codec, a GPU, a display screen 194, an application processor, and the like.
ISP用于处理摄像头193反馈的数据。例如,拍照时,打开快门,光线通过镜头被传递到摄像头感光元件上,光信号转换为电信号,摄像头感光元件将所述电信号传递给ISP处理,转化为肉眼可见的图像。ISP还可以对图像的噪点,亮度,肤色进行算法优化。ISP还可以对拍摄场景的曝光,色温等参数优化。在一些实施例中,ISP可以设置在摄像头193中。The ISP is used to process the data fed back by the camera 193 . For example, when taking a photo, the shutter is opened, the light is transmitted to the camera photosensitive element through the lens, the light signal is converted into an electrical signal, and the camera photosensitive element transmits the electrical signal to the ISP for processing, and converts it into an image visible to the naked eye. ISP can also perform algorithm optimization on image noise, brightness, and skin tone. ISP can also optimize the exposure, color temperature and other parameters of the shooting scene. In some embodiments, the ISP may be provided in the camera 193 .
摄像头193用于捕获静态图像或视频。物体通过镜头生成光学图像投射到感光元件。感光元件可以是电荷耦合器件(charge coupled device,CCD)或互补金属氧化物半导体(complementary metal-oxide-semiconductor,CMOS)光电晶体管。感光元件把光信号转换成电信号,之后将电信号传递给ISP转换成数字图像信号。ISP将数字图像信号输出到DSP加工处理。DSP将数字图像信号转换成标准的RGB,YUV等格式的图像信号。在一些实施例中,电子设备100可以包括1个或N个摄像头193,N为大于1的正整数。Camera 193 is used to capture still images or video. The object is projected through the lens to generate an optical image onto the photosensitive element. The photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, and then transmits the electrical signal to the ISP to convert it into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. DSP converts digital image signals into standard RGB, YUV and other formats of image signals. In some embodiments, the electronic device 100 may include 1 or N cameras 193 , where N is a positive integer greater than 1.
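The RGB/YUV conversion attributed to the DSP above can be sketched with the widely used BT.601 full-range formulas; the exact coefficients used by the device are not disclosed in the application, so these are illustrative assumptions:

```python
def rgb_to_yuv(r, g, b):
    """Convert one 8-bit RGB pixel to YUV using BT.601 full-range
    coefficients; the chroma components U and V are centred on 128."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = -0.169 * r - 0.331 * g + 0.500 * b + 128
    v = 0.500 * r - 0.419 * g - 0.081 * b + 128
    return round(y), round(u), round(v)

white = rgb_to_yuv(255, 255, 255)  # neutral colour: chroma stays at 128
```

A DSP would apply the same linear transform to every pixel, typically with fixed-point arithmetic rather than floating point.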
数字信号处理器用于处理数字信号,除了可以处理数字图像信号,还可以处理其他数字信号。例如,当电子设备100在频点选择时,数字信号处理器用于对频点能量进行傅里叶变换等。A digital signal processor is used to process digital signals, in addition to processing digital image signals, it can also process other digital signals. For example, when the electronic device 100 selects a frequency point, the digital signal processor is used to perform Fourier transform on the frequency point energy and so on.
视频编解码器用于对数字视频压缩或解压缩。电子设备100可以支持一种或多种视频编解码器。这样,电子设备100可以播放或录制多种编码格式的视频,例如:动态图像专家组(moving picture experts group,MPEG)1,MPEG2,MPEG3,MPEG4等。Video codecs are used to compress or decompress digital video. The electronic device 100 may support one or more video codecs. In this way, the electronic device 100 can play or record videos of various encoding formats, such as: Moving Picture Experts Group (moving picture experts group, MPEG) 1, MPEG2, MPEG3, MPEG4 and so on.
NPU为神经网络(neural-network,NN)计算处理器,通过借鉴生物神经网络结构,例如借鉴人脑神经元之间传递模式,对输入信息快速处理,还可以不断的自学习。通过NPU可以实现电子设备100的智能认知等应用,例如:图像识别,人脸识别,语音识别,文本理解等。The NPU is a neural-network (NN) computing processor. By drawing on the structure of biological neural networks, such as the transfer mode between neurons in the human brain, it can quickly process the input information, and can continuously learn by itself. Applications such as intelligent cognition of the electronic device 100 can be implemented through the NPU, such as image recognition, face recognition, speech recognition, text understanding, and the like.
外部存储器接口120可以用于连接外部存储卡,例如Micro SD卡,实现扩展电子设备100的存储能力。外部存储卡通过外部存储器接口120与处理器110通信,实现数据存储功能。例如将音乐,视频等文件保存在外部存储卡中。The external memory interface 120 can be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the electronic device 100 . The external memory card communicates with the processor 110 through the external memory interface 120 to realize the data storage function. For example to save files like music, video etc in external memory card.
内部存储器121可以用于存储计算机可执行程序代码,所述可执行程序代码包括指令。 处理器110通过运行存储在内部存储器121的指令,从而执行电子设备100的各种功能应用以及数据处理。内部存储器121可以包括存储程序区和存储数据区。其中,存储程序区可存储操作系统,至少一个功能所需的应用程序(比如声音播放功能,图像播放功能等)等。存储数据区可存储电子设备100使用过程中所创建的数据(比如音频数据,电话本等)等。此外,内部存储器121可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件,闪存器件,通用闪存存储器(universal flash storage,UFS)等。Internal memory 121 may be used to store computer executable program code, which includes instructions. The processor 110 executes various functional applications and data processing of the electronic device 100 by executing the instructions stored in the internal memory 121 . The internal memory 121 may include a storage program area and a storage data area. The storage program area can store an operating system, an application program required for at least one function (such as a sound playback function, an image playback function, etc.), and the like. The storage data area may store data (such as audio data, phone book, etc.) created during the use of the electronic device 100 and the like. In addition, the internal memory 121 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, universal flash storage (UFS), and the like.
电子设备100可以通过音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,以及应用处理器等实现音频功能。例如音乐播放,录音等。The electronic device 100 may implement audio functions through an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, an application processor, and the like. Such as music playback, recording, etc.
音频模块170用于将数字音频信息转换成模拟音频信号输出,也用于将模拟音频输入转换为数字音频信号。音频模块170还可以用于对音频信号编码和解码。在一些实施例中,音频模块170可以设置于处理器110中,或将音频模块170的部分功能模块设置于处理器110中。The audio module 170 is used for converting digital audio information into analog audio signal output, and also for converting analog audio input into digital audio signal. Audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be provided in the processor 110 , or some functional modules of the audio module 170 may be provided in the processor 110 .
扬声器170A,也称“喇叭”,用于将音频电信号转换为声音信号。电子设备100可以通过扬声器170A收听音乐,或收听免提通话。 Speaker 170A, also referred to as a "speaker", is used to convert audio electrical signals into sound signals. The electronic device 100 can listen to music through the speaker 170A, or listen to a hands-free call.
受话器170B,也称“听筒”,用于将音频电信号转换成声音信号。当电子设备100接听电话或语音信息时,可以通过将受话器170B靠近人耳接听语音。The receiver 170B, also referred to as "earpiece", is used to convert audio electrical signals into sound signals. When the electronic device 100 answers a call or a voice message, the voice can be answered by placing the receiver 170B close to the human ear.
麦克风170C,也称“话筒”,“传声器”,用于将声音信号转换为电信号。当拨打电话或发送语音信息时,用户可以通过人嘴靠近麦克风170C发声,将声音信号输入到麦克风170C。电子设备100可以设置至少一个麦克风170C。在另一些实施例中,电子设备100可以设置两个麦克风170C,除了采集声音信号,还可以实现降噪功能。在另一些实施例中,电子设备100还可以设置三个,四个或更多麦克风170C,实现采集声音信号,降噪,还可以识别声音来源,实现定向录音功能等。The microphone 170C, also called "microphone" or "microphone", is used to convert sound signals into electrical signals. When making a call or sending a voice message, the user can make a sound by approaching the microphone 170C through a human mouth, and input the sound signal into the microphone 170C. The electronic device 100 may be provided with at least one microphone 170C. In other embodiments, the electronic device 100 may be provided with two microphones 170C, which can implement a noise reduction function in addition to collecting sound signals. In other embodiments, the electronic device 100 may further be provided with three, four or more microphones 170C to collect sound signals, reduce noise, identify sound sources, and implement directional recording functions.
耳机接口170D用于连接有线耳机。耳机接口170D可以是USB接口130,也可以是3.5mm的开放移动电子设备平台(open mobile terminal platform,OMTP)标准接口,美国蜂窝电信工业协会(cellular telecommunications industry association of the USA,CTIA)标准接口。The earphone jack 170D is used to connect wired earphones. The earphone interface 170D can be the USB interface 130, or can be a 3.5mm open mobile terminal platform (OMTP) standard interface, a cellular telecommunications industry association of the USA (CTIA) standard interface.
压力传感器180A用于感受压力信号,可以将压力信号转换成电信号。在一些实施例中,压力传感器180A可以设置于显示屏194。压力传感器180A的种类很多,如电阻式压力传感器,电感式压力传感器,电容式压力传感器等。电容式压力传感器可以是包括至少两个具有导电材料的平行板。当有力作用于压力传感器180A,电极之间的电容改变。电子设备100根据电容的变化确定压力的强度。当有触摸操作作用于显示屏194,电子设备100根据压力传感器180A检测所述触摸操作强度。电子设备100也可以根据压力传感器180A的检测信号计算触摸的位置。在一些实施例中,作用于相同触摸位置,但不同触摸操作强度的触摸操作,可以对应不同的操作指令。例如:当有触摸操作强度小于第一压力阈值的触摸操作作用于短消息应用图标时,执行查看短消息的指令。当有触摸操作强度大于或等于第一压力阈值的触摸操作作用于短消息应用图标时,执行新建短消息的指令。The pressure sensor 180A is used to sense pressure signals, and can convert the pressure signals into electrical signals. In some embodiments, the pressure sensor 180A may be provided on the display screen 194 . There are many types of pressure sensors 180A, such as resistive pressure sensors, inductive pressure sensors, capacitive pressure sensors, and the like. The capacitive pressure sensor may be comprised of at least two parallel plates of conductive material. When a force is applied to the pressure sensor 180A, the capacitance between the electrodes changes. The electronic device 100 determines the intensity of the pressure according to the change in capacitance. When a touch operation acts on the display screen 194, the electronic device 100 detects the intensity of the touch operation according to the pressure sensor 180A. The electronic device 100 may also calculate the touched position according to the detection signal of the pressure sensor 180A. In some embodiments, touch operations acting on the same touch position but with different touch operation intensities may correspond to different operation instructions. For example, when a touch operation whose intensity is less than the first pressure threshold acts on the short message application icon, the instruction for viewing the short message is executed. When a touch operation with a touch operation intensity greater than or equal to the first pressure threshold acts on the short message application icon, the instruction to create a new short message is executed.
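The threshold behaviour described for the pressure sensor 180A can be sketched as a simple dispatch on touch intensity; the threshold value used below is an illustrative assumption, not a value from the application:

```python
FIRST_PRESSURE_THRESHOLD = 0.5  # illustrative value, not from the application

def handle_message_icon_touch(intensity):
    """Map the touch intensity on the short-message icon to an instruction,
    as in the pressure-sensor example: a light press views the message,
    a press at or above the first pressure threshold creates a new one."""
    if intensity < FIRST_PRESSURE_THRESHOLD:
        return "view short message"
    return "create new short message"
```

The same touch position thus yields different operation instructions depending only on the measured pressure intensity.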
The gyro sensor 180B may be used to determine the motion attitude of the electronic device 100. In some embodiments, the angular velocities of the electronic device 100 about three axes (i.e., the x, y, and z axes) may be determined by the gyro sensor 180B. The gyro sensor 180B can be used for image stabilization during shooting. For example, when the shutter is pressed, the gyro sensor 180B detects the shake angle of the electronic device 100 and calculates, according to the angle, the distance that the lens module needs to compensate, allowing the lens to counteract the shake of the electronic device 100 through reverse motion, thereby achieving image stabilization. The gyro sensor 180B can also be used for navigation and somatosensory game scenarios.

The air pressure sensor 180C is used to measure air pressure. In some embodiments, the electronic device 100 calculates the altitude from the air pressure value measured by the air pressure sensor 180C to assist in positioning and navigation.

The magnetic sensor 180D includes a Hall sensor. The electronic device 100 can detect the opening and closing of a flip leather case using the magnetic sensor 180D. In some embodiments, when the electronic device 100 is a flip phone, the electronic device 100 can detect the opening and closing of the flip cover according to the magnetic sensor 180D. Features such as automatic unlocking upon flipping open can then be set according to the detected open or closed state of the leather case or the flip cover.

The acceleration sensor 180E can detect the magnitude of acceleration of the electronic device 100 in various directions (generally along three axes). When the electronic device 100 is stationary, the magnitude and direction of gravity can be detected. It can also be used to identify the attitude of the electronic device, and can be applied to landscape/portrait switching, pedometers, and other applications.

The distance sensor 180F is used to measure distance. The electronic device 100 can measure distance by infrared light or laser. In some embodiments, in a shooting scenario, the electronic device 100 can use the distance sensor 180F to measure distance to achieve fast focusing.

The proximity light sensor 180G may include, for example, a light-emitting diode (LED) and a light detector, such as a photodiode. The light-emitting diode may be an infrared light-emitting diode. The electronic device 100 emits infrared light outward through the light-emitting diode and uses the photodiode to detect infrared light reflected from nearby objects. When sufficient reflected light is detected, it can be determined that there is an object near the electronic device 100; when insufficient reflected light is detected, the electronic device 100 can determine that there is no object nearby. The electronic device 100 can use the proximity light sensor 180G to detect that the user is holding the electronic device 100 close to the ear during a call, so as to automatically turn off the screen to save power. The proximity light sensor 180G can also be used in leather case mode and pocket mode to automatically unlock and lock the screen.

The ambient light sensor 180L is used to sense the ambient light brightness. The electronic device 100 can adaptively adjust the brightness of the display screen 194 according to the perceived ambient light brightness. The ambient light sensor 180L can also be used to automatically adjust the white balance when taking pictures. The ambient light sensor 180L can further cooperate with the proximity light sensor 180G to detect whether the electronic device 100 is in a pocket, so as to prevent accidental touches.

The fingerprint sensor 180H is used to collect fingerprints. The electronic device 100 can use the collected fingerprint characteristics to implement fingerprint unlocking, application lock access, fingerprint photographing, fingerprint-based call answering, and the like.
The temperature sensor 180J is used to detect temperature. In some embodiments, the electronic device 100 executes a temperature processing strategy using the temperature detected by the temperature sensor 180J. For example, when the temperature reported by the temperature sensor 180J exceeds a threshold, the electronic device 100 reduces the performance of a processor located near the temperature sensor 180J, so as to reduce power consumption and implement thermal protection. In other embodiments, when the temperature is lower than another threshold, the electronic device 100 heats the battery 142 to prevent the low temperature from causing the electronic device 100 to shut down abnormally. In still other embodiments, when the temperature is lower than yet another threshold, the electronic device 100 boosts the output voltage of the battery 142 to avoid abnormal shutdown caused by low temperature.
The touch sensor 180K is also called a "touch panel". The touch sensor 180K may be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 together form a touch screen, also called a "touchscreen". The touch sensor 180K is used to detect a touch operation on or near it. The touch sensor can pass the detected touch operation to the application processor to determine the type of touch event. Visual output related to the touch operation may be provided through the display screen 194. In other embodiments, the touch sensor 180K may also be disposed on the surface of the electronic device 100 at a position different from that of the display screen 194.

The bone conduction sensor 180M can acquire vibration signals. In some embodiments, the bone conduction sensor 180M can acquire the vibration signal of the vibrating bone of the human vocal part. The bone conduction sensor 180M can also contact the human pulse and receive the blood pressure pulse signal. In some embodiments, the bone conduction sensor 180M may also be disposed in an earphone to form a bone conduction earphone. The audio module 170 can parse out a voice signal based on the vibration signal of the vocal-part vibrating bone acquired by the bone conduction sensor 180M, thereby implementing a voice function. The application processor can parse heart rate information based on the blood pressure pulse signal acquired by the bone conduction sensor 180M, thereby implementing a heart rate detection function.

The keys 190 include a power key, volume keys, and the like. The keys 190 may be mechanical keys or touch keys. The electronic device 100 can receive key inputs and generate key signal inputs related to user settings and function control of the electronic device 100.

The motor 191 can generate vibration prompts. The motor 191 can be used for incoming-call vibration prompts as well as touch vibration feedback. For example, touch operations acting on different applications (such as photographing and audio playback) can correspond to different vibration feedback effects. Touch operations acting on different areas of the display screen 194 can also correspond to different vibration feedback effects of the motor 191. Different application scenarios (for example, time reminders, receiving messages, alarm clocks, and games) can also correspond to different vibration feedback effects. The touch vibration feedback effect can also be customized.

The indicator 192 may be an indicator light, which can be used to indicate the charging state and battery level changes, and can also be used to indicate messages, missed calls, notifications, and the like.

The SIM card interface 195 is used to connect a SIM card. The SIM card can be brought into contact with or separated from the electronic device 100 by inserting it into or pulling it out of the SIM card interface 195. The electronic device 100 can support one or N SIM card interfaces, where N is a positive integer greater than 1. The SIM card interface 195 can support a Nano SIM card, a Micro SIM card, a SIM card, and the like. Multiple cards can be inserted into the same SIM card interface 195 at the same time; the types of the multiple cards may be the same or different. The SIM card interface 195 can also be compatible with different types of SIM cards, as well as with external memory cards. The electronic device 100 interacts with the network through the SIM card to implement functions such as calls and data communication. In some embodiments, the electronic device 100 employs an eSIM, i.e., an embedded SIM card. The eSIM card can be embedded in the electronic device 100 and cannot be separated from it.
The software system of the electronic device 100 may adopt a layered architecture, an event-driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture. The embodiments of this application take an Android system with a layered architecture as an example to exemplarily describe the software structure of the electronic device 100. The embodiments of this application do not limit the type of the operating system of the electronic device; examples include the Android system, the Linux system, the Windows system, the iOS system, and the Hongmeng operating system (HarmonyOS).

Please refer to FIG. 2, which is a block diagram of the software structure of an electronic device according to an embodiment of this application. As shown in FIG. 2, the layered architecture divides the software into several layers, each with a clear role and division of labor. The layers communicate with each other through software interfaces. In some embodiments, the Android system is divided into four layers, which from top to bottom are the application layer (APP), the application framework layer (APP framework), the Android runtime and system libraries, and the kernel layer (kernel).

The application layer may include a series of application packages.

As shown in FIG. 2, the application packages may include applications (APPs) such as camera, gallery, calendar, calls, maps, navigation, WLAN, Bluetooth, music, video, games, chat, shopping, travel, instant messaging (such as short messages), smart home, and device control.

The smart home application can be used to control or manage home devices with networking capabilities. For example, home devices may include lights, televisions, and air conditioners. As another example, home devices may also include anti-theft door locks, speakers, sweeping robots, sockets, body fat scales, desk lamps, air purifiers, refrigerators, washing machines, water heaters, microwave ovens, rice cookers, curtains, fans, televisions, set-top boxes, doors and windows, and the like.

In addition, the application packages may also include applications such as the home screen (i.e., the desktop), the negative one screen, the control center, and the notification center.

The negative one screen, which may also be called the "-1 screen", refers to the user interface (UI) reached by sliding rightward on the home screen of the electronic device until reaching the leftmost split screen. For example, the negative one screen can be used to place quick service functions and notification messages, such as global search, quick entries to certain application pages (payment codes, WeChat, etc.), instant information and reminders (courier information, expense information, commuting road conditions, taxi travel information, schedule information, etc.), and followed updates (football stands, basketball stands, stock information, etc.). The control center is the swipe-up message notification bar of the electronic device, i.e., the user interface displayed by the electronic device when the user starts a swipe-up operation from the bottom of the electronic device. The notification center is the pull-down message notification bar of the electronic device, i.e., the user interface displayed by the electronic device when the user starts a downward swipe from the top of the electronic device.
The application framework layer provides an application programming interface (API) and a programming framework for the applications in the application layer. The application framework layer includes some predefined functions.

As shown in FIG. 2, the application framework layer may include a window manager, content providers, a view system, a phone manager, a resource manager, a notification manager, and the like.

The window manager is used to manage window programs, such as managing window states and properties, view addition, deletion, and update, window order, and message collection and processing. The window manager can obtain the display screen size, determine whether there is a status bar, lock the screen, capture the screen, and so on. Moreover, the window manager is the entry through which the outside world accesses windows.

Content providers are used to store and retrieve data and make the data accessible to applications. The data may include videos, images, audio, calls made and received, browsing history and bookmarks, the phone book, and the like.

The view system includes visual controls, such as controls for displaying text and controls for displaying pictures. The view system can be used to build applications. A display interface may consist of one or more views. For example, a display interface including a short message notification icon may include a view for displaying text and a view for displaying pictures.

The phone manager is used to provide the communication functions of the electronic device 100, for example, the management of call states (including connected, hung up, etc.).

The resource manager provides various resources for applications, such as localized strings, icons, pictures, layout files, and video files.

The notification manager enables applications to display notification information in the status bar. It can be used to convey notification-type messages, which can disappear automatically after a short stay without user interaction. For example, the notification manager is used to notify of download completion, message reminders, and the like. The notification manager may also present notifications that appear in the status bar at the top of the system in the form of graphs or scroll-bar text, such as notifications of applications running in the background, or notifications that appear on the screen in the form of dialog windows. For example, text information is prompted in the status bar, a prompt sound is emitted, the electronic device vibrates, or the indicator light flashes.

The Android runtime includes core libraries and a virtual machine. The Android runtime is responsible for the scheduling and management of the Android system.

The core libraries consist of two parts: one part is the functions that the Java language needs to call, and the other part is the core libraries of the Android system.

The application layer and the application framework layer run in the virtual machine. The virtual machine executes the Java files of the application layer and the application framework layer as binary files. The virtual machine is used to perform functions such as object lifecycle management, stack management, thread management, security and exception management, and garbage collection.
The system libraries may include multiple functional modules, for example, a surface manager, media libraries, a 3D graphics processing library (e.g., OpenGL ES), and a 2D graphics engine (e.g., SGL).

The surface manager is used to manage the display subsystem and provides the fusion of 2D and 3D layers for multiple applications.

The media libraries support playback and recording of a variety of commonly used audio and video formats, as well as still image files. The media libraries can support a variety of audio and video encoding formats, such as MPEG-4, H.264, MP3, AAC, AMR, JPG, and PNG.

The 3D graphics processing library is used to implement 3D graphics drawing, image rendering, compositing, and layer processing.

The 2D graphics engine is a drawing engine for 2D drawing.

The kernel layer is the layer between hardware and software. The kernel layer contains at least a display driver, a camera driver, an audio driver, and sensor drivers.

The software and hardware workflows of the electronic device 100 are exemplarily described below in conjunction with a scenario in which a smart speaker is used to play sound.

When the touch sensor 180K receives a touch operation, a corresponding hardware interrupt is sent to the kernel layer. The kernel layer processes the touch operation into a raw input event (including information such as the touch coordinates and the timestamp of the touch operation). The raw input event is stored at the kernel layer. The application framework layer obtains the raw input event from the kernel layer and identifies the control corresponding to the input event. Taking as an example a touch operation that is a tap, where the control corresponding to the tap is the control of the smart speaker icon: the smart speaker application calls the interface of the application framework layer to start the smart speaker application, then starts the audio driver by calling the kernel layer, and converts the audio electrical signal into a sound signal through the speaker 170A.

It can be understood that the structures illustrated in this application do not constitute a specific limitation on the electronic device 100. In other embodiments, the electronic device 100 may include more or fewer components than shown, or some components may be combined or split, or the components may be arranged differently. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.

The technical solutions involved in the following embodiments can all be implemented in an electronic device 100 having the above hardware architecture and software architecture.
The embodiments of this application provide a video generation method and an electronic device. The electronic device identifies the scene type of each material, matches a suitable video template, adjusts the arrangement order of the materials based on the scene types set in the video template, and, in combination with the camera movement, speed, and transitions set in the video template, can automatically generate a video. The generated video has a coherent line of sight and a high sense of quality, which strengthens the sense of camera work and the cinematic feel of the video and improves the user experience. In addition, the user can manually adjust parameters such as the duration, filters, and aspect ratio of the video, which meets actual user needs and enriches the variety of videos.

The electronic device may be a mobile phone, a tablet computer, a wearable device, a vehicle-mounted device, an augmented reality (AR)/virtual reality (VR) device, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a personal digital assistant (PDA), a smart TV, a smart screen, a high-definition TV, a 4K TV, a smart speaker, a smart projector, or the like. The embodiments of this application do not impose any restriction on the specific type of the electronic device.
Hereinafter, some terms involved in the embodiments of this application are explained to facilitate understanding by those skilled in the art.

1. A material can be understood as a picture material or a video material stored in the electronic device. It should be noted that the picture materials mentioned in the embodiments of this application have the same meaning as photo materials. A picture material may be captured by the electronic device, downloaded by the electronic device from a server, or received by the electronic device from another electronic device, which is not limited in the embodiments of this application.

2. A scene type (shot scale) can be understood as the difference in the range that the subject occupies in the frame, caused by different distances between the shooting device and the subject. The shooting device may be the electronic device itself, or a device communicatively connected to the electronic device, which is not limited in the embodiments of this application.

In the embodiments of this application, the division of scene types may be implemented in various ways. It should be noted that the scene types mentioned in the embodiments of this application refer to the types of shot scales.

In some embodiments, scene types may be divided into three kinds, which from near to far are the close shot, the medium shot, and the long shot. For example, a close shot refers to the human body above the chest, a medium shot refers to the human body above the thighs, and a long shot refers to cases other than close shots and medium shots.

In other embodiments, scene types may be divided into five kinds, which from near to far are the close-up, the close shot, the medium shot, the full shot, and the long shot. For example, a close-up refers to the human body above the shoulders, a close shot refers to the human body above the chest, a medium shot refers to the human body above the knees, a full shot refers to the entire human body and part of the surrounding environment, and a long shot refers to the environment in which the subject is located.
In the embodiments of this application, the scene types corresponding to a video material can be regarded as a set of the respective scene types of multiple picture materials. Typically, the electronic device can record the start time and duration of each scene type, or its start time and end time, or its start time, duration, and end time. Furthermore, using technologies such as face recognition, semantic recognition, salient feature recognition, and semantic segmentation, the electronic device can determine the scene type of a material category by category, i.e., determine the scene type of a picture material.
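The per-scene-type bookkeeping described above (a start time plus a duration, from which the end time follows) can be sketched as a small record; the field and label names are assumptions, not terms from the embodiment:

```python
# Hypothetical sketch of a per-segment scene-type record for a video material.
# A video's scene types are then simply a list of such segments.
from dataclasses import dataclass

@dataclass
class SceneSegment:
    scene_type: str   # e.g. "face_close_up", "person_medium_shot" (assumed labels)
    start_ms: int     # start time of this scene type within the video
    duration_ms: int  # how long this scene type lasts

    @property
    def end_ms(self) -> int:
        """The end time is derivable from the start time and duration."""
        return self.start_ms + self.duration_ms
```

A video material would then carry, for example, `[SceneSegment("face_close_up", 0, 1200), SceneSegment("person_medium_shot", 1200, 3000)]`.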
In the following, specific implementations by which the electronic device determines the scene type of any material are described with reference to the embodiments.

A. Face close-up and face close shot

The electronic device determines a face recognition frame for any material based on face recognition technology.

When the area of the face recognition frame is greater than a threshold A1, the electronic device determines that the scene type of the material is a face close-up.

When the area of the face recognition frame is greater than a threshold A2 and less than the threshold A1, the electronic device determines that the scene type of the material is a face close shot.

The specific values of the thresholds A1 and A2 may be set according to factors such as empirical values and the face recognition technique used.
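Step A reduces to two area comparisons. A minimal sketch follows; the embodiment does not specify A1 and A2, so the normalized values below are assumed:

```python
# Hypothetical sketch of face scene-type classification by the area of the
# face recognition frame, normalized to the image area. Threshold values
# are assumed.
A1 = 0.30  # assumed: frame area above this -> face close-up
A2 = 0.10  # assumed: frame area above this (and below A1) -> face close shot

def classify_face_scene(face_box_area: float):
    """Return the scene type for a given face-box area, or None."""
    if face_box_area > A1:
        return "face_close_up"
    if A2 < face_box_area < A1:
        return "face_close_shot"
    return None  # neither rule matched; other classifiers may still apply
```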
B. Person close-up and person close shot

The electronic device performs face recognition on any material based on face recognition technology.

When the recognition result indicates that no face is present but person semantics exist (e.g., a person's profile or back view exists in the material), the electronic device can obtain a person recognition frame as the area of the head multiplied by a threshold.

When the area of the person recognition frame is greater than a threshold B1, the electronic device determines that the scene type of the material is a person close-up.

When the area of the person recognition frame is greater than a threshold B2 and less than the threshold B1, the electronic device determines that the scene type of the material is a person close shot.

The specific values of the thresholds B1 and B2 may be set according to factors such as empirical values.
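Step B differs from step A only in how the frame area is obtained: when no face is found but person semantics exist, the person frame area is estimated from the head area. A sketch follows; the scaling factor and the thresholds B1/B2 are assumed values:

```python
# Hypothetical sketch of person scene-type classification when no frontal
# face is detected. The head-to-person scaling factor and thresholds B1/B2
# are assumed, with areas normalized to the image area.
HEAD_TO_PERSON = 3.0  # assumed multiplier from head area to person frame area
B1 = 0.40             # assumed: above this -> person close-up
B2 = 0.15             # assumed: above this (and below B1) -> person close shot

def classify_person_scene(head_area: float):
    """Classify by the estimated person-frame area, or return None."""
    person_area = head_area * HEAD_TO_PERSON
    if person_area > B1:
        return "person_close_up"
    if B2 < person_area < B1:
        return "person_close_shot"
    return None
```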
C. Food close shot and food close-up

The electronic device determines the semantic recognition result and the salient feature recognition result of any material based on semantic segmentation recognition technology and salient feature recognition technology.

When the semantic recognition result indicates that the area of food is greater than a threshold C1, the salient feature result indicates that the salient area is greater than a threshold C2, and the food area coincides with the salient area, the electronic device determines that the scene type of the material is a food close-up.

When the semantic recognition result indicates that the area of food is greater than the threshold C1 and the salient feature result indicates that the salient area is less than the threshold C2, the electronic device determines that the scene type of the material is a food close shot.

The specific values of the thresholds C1 and C2 may be set according to factors such as empirical values.
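Step C combines a semantic-area test, a saliency test, and an overlap check. A sketch follows, with the overlap check collapsed to a boolean and assumed threshold values:

```python
# Hypothetical sketch of food scene-type classification. C1/C2 are assumed
# normalized-area thresholds; `areas_coincide` stands in for the check that
# the food region and the salient region coincide.
C1 = 0.25  # assumed: minimum food area
C2 = 0.20  # assumed: salient-area threshold

def classify_food_scene(food_area: float, salient_area: float,
                        areas_coincide: bool):
    """Distinguish a food close-up from a food close shot, or return None."""
    if food_area > C1 and salient_area > C2 and areas_coincide:
        return "food_close_up"
    if food_area > C1 and salient_area < C2:
        return "food_close_shot"
    return None
```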
D. Non-person large-aperture close shot

When it is detected that any material is a photo taken in the device's large-aperture mode, or that the material contains a large defocused region, the electronic device determines that the scene type of the material is a non-person large-aperture close shot.

E. Salient flower close shot and salient pet close shot

The electronic device determines the semantic recognition result and the salient feature recognition result of any material based on semantic segmentation recognition technology and salient feature recognition technology.

When the semantic recognition result indicates that the area of a flower is greater than a threshold D1, the salient feature result indicates that the salient area is greater than a threshold D2, and the flower area coincides with the salient area, the electronic device determines that the scene type of the material is a salient flower close shot.

The specific values of the thresholds D1 and D2 may be set according to factors such as empirical values.

When the semantic recognition result indicates that the area of a pet is greater than a threshold E1, the salient feature result indicates that the salient area is greater than a threshold E2, and the pet area coincides with the salient area, the electronic device determines that the scene type of the material is a salient pet close shot.

The specific values of the thresholds E1 and E2 may be set according to factors such as empirical values.
F. Person medium shot
The electronic device performs face recognition on any material based on face recognition technology.
When the recognition result indicates that no face or segmentation result meeting the person-close-shot criteria appears, or that no complete person fully enters the frame (for example, the torso extends beyond the edge of the frame and the face or head is smaller than a threshold), the electronic device determines that the scene type of the material is a person medium shot.
G. Salient long shot
The electronic device determines the salient-feature recognition result of any material based on salient-feature recognition technology.
When a salient result exists and the salient result indicates that the salient area is smaller than threshold F (for example, the material is a picture of a camel in a desert, where the camel is the salient result), the electronic device determines that the scene type of the material is a salient long shot.
H. Landscape long shot
The electronic device determines the image segmentation result of any material based on semantic segmentation technology.
When the image segmentation result indicates that the area of the preset target in the material is larger than threshold G, the electronic device determines that the scene type of the material is a landscape long shot.
Threshold G may be set to 90% or more; this embodiment of the present application does not limit the specific value of threshold G. The preset target may be a landscape feature such as sea, sky, or mountain peaks.
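Rule H can be sketched as summing the area ratios of the preset landscape targets. The label names and the helper's signature are illustrative assumptions; only the 90% threshold follows the example given in the text.

```python
def is_landscape_long_shot(segment_areas, g=0.9,
                           landscape_labels=("sea", "sky", "mountain")):
    """Check rule H: the material is a landscape long shot when the
    combined area ratio of preset landscape targets exceeds threshold G.

    segment_areas maps semantic-segmentation labels to area ratios in
    [0, 1]; g = 0.9 follows the text's "90% or more" example, and the
    label names are illustrative.
    """
    landscape_area = sum(segment_areas.get(lbl, 0.0) for lbl in landscape_labels)
    return landscape_area > g
```
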
I. Other
When the electronic device cannot identify the scene type of the material based on the foregoing techniques, the electronic device determines that the scene type of the material is a medium shot.
Among the above, A to E fall in the close range, with the certainty order: person close-up = face close-up > food close-up > person close shot = face close shot > non-person large-aperture close shot > food close shot = salient-flower close shot and salient-pet close shot. G and H are long shots, with the certainty order H > G. F and I are medium shots.
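When several of rules A to I match the same material, the certainty order above can be used to resolve the conflict. The sketch below encodes that order as numeric ranks; the rank values themselves are illustrative, and only their relative ordering follows the text.

```python
# Certainty ranks derived from the order stated above (larger = more
# certain). The numeric values are illustrative; only the ordering
# between them reflects the text.
CERTAINTY = {
    "person close-up": 6, "face close-up": 6,
    "food close-up": 5,
    "person close shot": 4, "face close shot": 4,
    "non-person large-aperture close shot": 3,
    "food close shot": 2, "salient-flower close shot": 2,
    "salient-pet close shot": 2,
    "landscape long shot": 1.5, "salient long shot": 1,  # H > G
    "person medium shot": 0, "medium shot": 0,           # F and I
}

def resolve_scene_type(candidates):
    """Pick the most certain scene type among all the rules that
    matched; fall back to the generic medium shot (rule I) when
    nothing matched."""
    if not candidates:
        return "medium shot"
    return max(candidates, key=CERTAINTY.__getitem__)
```
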
3. Camera movement, also called a moving shot, mainly refers to the movement of the lens itself. In this embodiment of the present application, the camera movement is related to the type of the material; that is, the camera movement corresponding to a picture material and the camera movement corresponding to a video material may be the same or different.
4. A transition can be understood as the passage or switch between paragraphs, or between scenes. Each paragraph (the smallest unit constituting a video is a shot; a sequence of shots connected together forms a paragraph) carries a single, relatively complete meaning, such as expressing an action process, a relationship, or an implication. A paragraph is a complete narrative level in a video, like an act in a drama or a chapter in a novel; paragraphs connected together form a complete video. Therefore, the paragraph is the most basic structural form of a video, and the structural levels of a video's content are expressed through paragraphs.
5. Scene type, camera movement, speed, and transitions set in a video template
A video template can be understood as the theme or style of a video. The types of video templates may include but are not limited to: travel, parent-child, party, sports, food, scenery, retro, city, night, humanities, and so on.
The parameters in any video template may include but are not limited to: scene type, camera movement, speed, transitions, and the like. Generally, different video templates differ in at least one of the parameters of scene type, camera movement, speed, and transitions.
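The parameter set of a template can be pictured as a small configuration structure. Every field name and value below is an illustrative assumption about what such a template might contain, not the embodiment's actual data format.

```python
# A minimal sketch of a video template's parameters. All field names
# and values are illustrative; the embodiment does not disclose its
# actual template format.
PARENT_CHILD_TEMPLATE = {
    "type": "parent-child",
    "segments": [
        {"scene_type": "person close-up", "camera_move": "push in",
         "speed": 1.0, "transition": "fade"},
        {"scene_type": "person medium shot", "camera_move": "pan right",
         "speed": 0.8, "transition": "wipe"},
        {"scene_type": "landscape long shot", "camera_move": "static",
         "speed": 1.0, "transition": "cut"},
    ],
    "music": "Song 1",
}
```
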
In this embodiment of the present application, the electronic device has the function of generating a video from stored materials, so that one or more picture materials and/or video materials in the electronic device can be made into a video. Moreover, the electronic device provides the user with multiple entry ways to generate a video, allowing the user to generate a video promptly and quickly, which improves the user's convenience.
Below, taking the gallery application of the electronic device as the entry for generating a video as an example, the method by which the electronic device of this embodiment generates a video from stored materials is described in detail in combination with Method 1, Method 2, and Method 3. It should be noted that the entry ways in the embodiments of this application include but are not limited to the gallery application, and include but are not limited to the above three methods.
Method 1
Refer to FIG. 3A-FIG. 3F, which are schematic diagrams of human-computer interaction interfaces provided by an embodiment of this application. For ease of description, in FIG. 3A-FIG. 3F the electronic device is exemplarily illustrated as a mobile phone.
The mobile phone may display the user interface 11 exemplarily shown in FIG. 3A. The user interface 11 may be the home screen of the desktop and may include but is not limited to: a status bar, a navigation bar, a calendar indicator, a weather indicator, and multiple application icons. The application icons may include the icon 301 of the gallery application, and may further include: an icon of the Huawei Video application, an icon of a music application, an icon of the Phone Manager application, an icon of the Settings application, an icon of the Huawei Mall application, an icon of the Smart Life application, an icon of a sports-health application, an icon of a call application, an icon of an instant-messaging application, an icon of a browser application, an icon of a camera application, and so on.
After detecting that the user performs an operation of opening the gallery application in the user interface 11 shown in FIG. 3A (for example, tapping the icon 301 of the gallery application), the mobile phone may display the user interface 12 exemplarily shown in FIG. 3B. The user interface 12 is used to display the page corresponding to the album category in the gallery application.
The user interface 12 may include: a control 3021 for entering a display interface containing all the picture materials and/or video materials in the mobile phone, and a control 3023 for entering the display interface corresponding to the album category in the gallery application.
In this embodiment of the present application, the user interface 12 may be implemented in multiple specific ways. For ease of description, in FIG. 3B the user interface 12 is divided into two groups.
The first group consists of two parts. The title of the first group is illustrated in FIG. 3B by the text "Album" as an example.
The first part is a search box that provides the user with a way to search for picture materials and/or video materials by keywords such as photos, people, and places.
The second part includes the control 3021, as well as a control for entering a display interface containing only video materials.
The second group displays pictures obtained by means such as screenshots or a certain application. Its title is illustrated in FIG. 3B by the text "Other albums (3)" and a rounded rectangular frame as an example.
In addition, the user interface 12 further includes: a control 3022, a control 3024, and a control 3025. The control 3022 is used to enter the display interface corresponding to the photo category in the gallery application; the control 3024 is used to enter the display interface corresponding to the moments category; and the control 3025 is used to enter the display interface corresponding to the discovery category.
The user interface 12 may further include: controls for functions such as deleting an existing group or renaming an existing group in the user interface 12, as well as a control for adding a new group in the user interface 12.
After detecting that the user performs an operation such as tapping the control 3021 in the user interface 12 shown in FIG. 3B, the mobile phone may display the user interface 13 exemplarily shown in FIG. 3C, which is the display interface of all the picture materials and/or video materials in the mobile phone. This embodiment of the present application does not limit parameters such as the number of picture materials displayed in the user interface 13, the display area of picture materials, the display position of picture materials, the display content of video materials, the number of video materials displayed, the display area of video materials, the display position of video materials, or the ordering of materials of each type.
For ease of description, in FIG. 3C the user interface 13 displays, in chronological order from nearest to farthest from the current moment: a video material 3031, a picture material 3032, a picture material 3033, a video material 3034, a picture material 3035, a picture material 3036, a picture material 3037, and a video material 3038. For any video material, the electronic device may select the image displayed in any frame of that video material as the picture shown to the user. Hence, in FIG. 3C, the pictures shown for the video material 3031, the video material 3034, and the video material 3038 are images of arbitrary frames from the respective video materials.
After detecting that the user performs an operation for selecting picture materials and/or video materials (such as a long-press operation) in the user interface 13 shown in FIG. 3C, the mobile phone may display the user interface 14 exemplarily shown in FIG. 3D. The user interface 14 is used to display the picture materials and/or video materials that the user has selected for generating a video.
In this embodiment of the present application, the user interface 14 may be implemented in multiple specific ways. For ease of description, in FIG. 3D the user interface 14 includes the user interface 13 and an editing interface overlaid on the user interface 13.
For the picture materials and/or video materials not selected by the user (illustrated in FIG. 3D by the picture materials/video materials other than the video material 3031, the picture material 3032, the picture material 3033, the video material 3034, the picture material 3035, the picture material 3036, the picture material 3037, and the video material 3038), the editing interface may display, in the upper-left corner of each picture material/video material, a control for displaying that material enlarged (illustrated in FIG. 3D by two diagonal, oppositely pointing arrows), and, in the lower-right corner of each picture material/video material, a control for selecting that material (illustrated in FIG. 3D by a rounded rectangular frame).
For the picture materials and/or video materials already selected by the user (illustrated in FIG. 3D by the video material 3031, the picture material 3032, the picture material 3033, the video material 3034, the picture material 3035, the picture material 3036, the picture material 3037, and the video material 3038), the editing interface may likewise display, in the upper-left corner of each picture material/video material, a control for displaying that material enlarged (two diagonal, oppositely pointing arrows in FIG. 3D), and, in the lower-right corner, a control for selecting that material (a rounded rectangular frame in FIG. 3D).
Moreover, the editing interface may include a control 304 for creating a work from the picture materials and/or video materials the user has selected. The editing interface may further include controls for operations such as sharing, selecting all, deleting, and more on the selected picture materials and/or video materials, which this embodiment of the present application does not limit.
After detecting that the user performs an operation such as tapping the control 304 in the user interface 14 shown in FIG. 3D, the mobile phone may display, on the user interface 14, the window 305 exemplarily shown in FIG. 3E (illustrated in FIG. 3E by the text "Film", the text "Collage", and a rounded rectangular frame).
When the user has selected picture materials, or video materials, or both picture materials and video materials, if the user performs an operation such as tapping on the text "Film" in the window 305, the mobile phone may display a user interface for editing the new video.
When the user has selected picture materials, if the user performs an operation such as tapping on the text "Collage" in the window 305, the mobile phone may display a user interface for editing the new picture.
When the user has selected video materials, or picture materials and video materials, if the user performs an operation such as tapping on the text "Collage" in the window 305, the mobile phone cannot display a user interface for editing a new picture, and may display the text "Collage does not support video" to prompt the user to deselect the video materials.
After detecting that the user performs an operation such as tapping the text "Film" in the window 305 shown in FIG. 3E, the mobile phone may determine, based on the video material 3031, the picture material 3032, the picture material 3033, the video material 3034, the picture material 3035, the picture material 3036, the picture material 3037, and the video material 3038 selected by the user, that the type of the video template is the parent-child type. Based on the parent-child video template, the mobile phone then generates a video from the materials selected by the user and may display the user interface 15 exemplarily shown in FIG. 3F, which is used to display the video generated by the mobile phone.
The segments in the generated video correspond to the segments in the parent-child video template. The video material 3031, the picture material 3032, the picture material 3033, the video material 3034, the picture material 3035, the picture material 3036, the picture material 3037, and the video material 3038 each appear at least once in the generated video, and no two adjacent segments of the generated video are assigned the same material.
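The two constraints just stated — every material appears at least once, and no two adjacent segments share a material — can be satisfied by a simple round-robin assignment. This is a sketch under that assumption; the embodiment does not disclose its actual assignment algorithm.

```python
def assign_materials(materials, num_segments):
    """Assign materials to template segments so that every material
    appears at least once and no two adjacent segments share a material.

    A simple round-robin sketch: cycling through the material list in
    order satisfies both constraints whenever there are at least two
    materials and at least one segment per material.
    """
    if num_segments < len(materials):
        raise ValueError("need at least one segment per material")
    if len(materials) < 2 and num_segments > 1:
        raise ValueError("cannot avoid adjacent repeats with one material")
    return [materials[i % len(materials)] for i in range(num_segments)]
```
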
In summary, the electronic device can automatically generate a video based on the picture materials and/or video materials the user selected in the gallery application. In addition, the user interface 15 is also used to display controls for editing the generated video.
The user interface 15 may include: a preview area 306, a progress bar 307, a control 3081, a control 3082, a control 3083, a control 3084, a control 3085, a control 30811, a control 30812, a control 30813, a control 30814, and a control 309.
The preview area 306 is used to present the generated video, making it convenient for the user to watch and adjust the video.
The progress bar 307 is used to indicate the duration of the video under any video template (in FIG. 3F, "00:00" exemplarily indicates the start time of the video, "00:32" exemplarily indicates the end time of the video, and a slider exemplarily indicates the playback progress of the video).
The control 3081 is used to provide different types of video templates. The control 30811 represents the parent-child video template (illustrated in FIG. 3F by the text "Parent-child" and a bold rounded rectangular frame); the control 30812 represents the travel video template (the text "Travel" and a normally displayed rounded rectangular frame); the control 30813 represents the food video template (the text "Food" and a normally displayed rounded rectangular frame); and the control 30814 represents the sports video template (the text "Sports" and a normally displayed rounded rectangular frame). Thus, when recognizing that the materials match a video template of a certain type, the electronic device can also offer the user video templates of types other than that one, which helps meet the user's various needs.
The control 3082 provides functions such as editing the aspect ratio of the video, changing the duration of the video, adding new pictures and/or videos to the video, and deleting pictures and/or videos from the video. Thus, a video of a corresponding length and/or with corresponding materials is generated based on user needs, giving video generation flexibility.
The control 3083 is used to change the music matched with the video template.
The control 3084 is used to change the filter of the video.
The control 3085 is used to add text to the video, for example adding text at the beginning and the end.
The control 309 is used to save the generated video, making it convenient to use or watch the saved video.
Based on the above description, the electronic device can display the generated video to the user through the preview area 306.
In addition, since the type of video template determined by the electronic device is the parent-child type, the electronic device displays the rounded rectangular frame in the control 3081 in bold, which informs the user conveniently and quickly.
Furthermore, through the other controls in the user interface 15, the user can perform operations such as selecting the type of video template, adjusting the aspect ratio of the video, adjusting the duration of the video, adding new picture materials and/or video materials to the video, selecting the music matched with the video, selecting the filter of the video, and adding text to the video, so that the electronic device can determine a video template that meets the user's wishes and generate the corresponding video.
For example, after detecting that the user performs an operation such as tapping the control 3081 in the user interface 15 shown in FIG. 3F, the mobile phone may display the user interface 15 exemplarily shown in FIG. 3F, so that the user can select a video template among the control 30811, the control 30812, and the control 30813.
For another example, after detecting that the user performs an operation such as tapping the control 3082 in the user interface 15 shown in FIG. 3F, the mobile phone may display the user interface 21 exemplarily shown in FIG. 3G, which is used to display editing factors of the video such as the aspect ratio, the duration, and the materials included in playback.
The user interface 21 may include: a video playback area 3171, a control 3172, a control 3173, a control 3174, a control 3175, a material playback area 3176, and a control 3177. The video playback area 3171 is used to present the playback effect of the video to be generated. The control 3172 is used to enter a user interface for changing the aspect ratio of the video, where the aspect ratio may be 16:9, 1:1, 9:16, or the like. The control 3173 is used to enter a user interface for changing the duration of the video. The control 3174 is used to enter a user interface for adding new materials to the video. The control 3175 is used to access the materials already present in the video. The material playback area 3176 is used to present the playback effect of each material in the video. The control 3177 is used to exit the user interface 21.
For another example, after detecting that the user performs an operation such as tapping the control 3083 in the user interface 15 shown in FIG. 3F, the mobile phone may display the user interface 22 exemplarily shown in FIG. 3H, which is used to display and edit the music corresponding to the video.
The user interface 22 may include: a video playback area 3181, a progress bar 3182, a control 3183, a control 3184, and a control 3185. The video playback area 3181 is used to present the playback effect of the video to be generated. The progress bar 3182 is used to display or change the playback progress of the video to be generated. The control 3183 is used to present the various types of video templates, for example displaying types such as the text "Parent-child", "Travel", "Food", and "Sports". The control 3184 is used to present the music corresponding to a certain type of video template, for example displaying "Song 1", "Song 2", and "Song 3". The control 3185 is used to exit the user interface 22.
For another example, after detecting that the user performs an operation such as tapping the control 3084 in the user interface 15 shown in FIG. 3F, the mobile phone may display the user interface 23 exemplarily shown in FIG. 3I, which is used to display and edit the filter of the video. For example, in FIG. 3H, the text "Parent-child" displayed in bold together with a check mark in the display column corresponding to the text "Song 1" may indicate that the mobile phone has currently selected the parent-child video template, and that the music corresponding to the video template is Song 1. It should be noted that the rounded rectangular frames located before the texts "Song 1", "Song 2", and "Song 3" are used to display images of the corresponding songs. This embodiment of the present application does not limit the specific display content of those images; for ease of description, they are exemplarily illustrated filled with white.
The user interface 23 may include: a video playback area 3191, a progress bar 3192, a control 3193, and a control 3194. The video playback area 3191 is used to present the playback effect of the video to be generated. The progress bar 3192 is used to display or change the playback progress of the video to be generated. The control 3193 is used to present the filters, for example displaying the texts "Filter 1", "Filter 2", "Filter 3", "Filter 4", and "Filter 5". With different filters, the video has different display effects, such as softening, black-and-white, or color deepening. The control 3194 is used to exit the user interface 23. For example, in FIG. 3I, the text "Filter 1" displayed in bold may indicate that the filter currently selected for the video is Filter 1.
For another example, after detecting that the user performs an operation such as tapping the control 3085 in the user interface 15 shown in FIG. 3F, the mobile phone may display the user interface 24 exemplarily shown in FIG. 3J, which is used to display and edit the text of the video.
The user interface 24 may include: a video playback area 3201, a control 3202, a control 3203, and a control 3204. The video playback area 3201 is used to present the playback effect of the video to be generated. The control 3202 is used to choose whether a title is added at the beginning or at the end of the video. The control 3203 is used to present the titles, for example displaying the texts "Title 1", "Title 2", "Title 3", "Title 4", and "Title 5". For any two different titles (such as Title 1 and Title 2), if their content is the same, Title 1 and Title 2 may be displayed on any frame of the video with different playback effects, where a playback effect can be understood as the effect formed by changing parameters such as the font, weight, and color of the text in the title. For example, Title 1 may be the text "Weekend hours" in the Kai typeface, while Title 2 is the text "Weekend hours" in the Song typeface. If the contents of Title 1 and Title 2 differ, Title 1 and Title 2 may be displayed on any frame of the video with the same or different playback effects; for example, Title 1 may be the text "Weekend hours" and Title 2 the text "A wonderful day". The control 3204 is used to exit the user interface 24. For example, in FIG. 3J, the texts "Opening" and "Title 1" displayed in bold may indicate that the mobile phone has currently chosen to add Title 1 to the opening of the video.
综上,电子设备可以向用户提供手动编辑已生成的视频的功能,方便用户基于自身意愿配置视频的时长、画幅、视频模板、包含的素材和滤镜等参数,丰富了视频的样式。In conclusion, the electronic device can provide the user with the function of manually editing the generated video, which is convenient for the user to configure parameters such as the duration, frame, video template, included materials and filters of the video based on their own wishes, thus enriching the style of the video.
另外,手机在检测到用户在图3F所示的用户界面15中执行如点击控件309的操作后,可保存视频。In addition, the mobile phone can save the video after detecting that the user performs an operation such as clicking the control 309 in the user interface 15 shown in FIG. 3F .
Mode 2
Refer to FIG. 3A-FIG. 3B, FIG. 3K-FIG. 3N, and FIG. 3F. FIG. 3K-FIG. 3N are schematic diagrams of human-computer interaction interfaces according to an embodiment of this application.
In some embodiments, after detecting that the user performs an operation such as clicking the control 3025 in the user interface 12 shown in FIG. 3K, the mobile phone may display the user interface 16 exemplarily shown in FIG. 3L. The user interface 16 is used to display the page corresponding to the Discover category in the gallery application. In FIG. 3L, the control 3023 changes from bold display to normal display, and the control 3025 changes from normal display to bold display.
In other embodiments, after detecting an operation, indicated by the user, of opening the gallery application (such as clicking the icon 301 of the gallery application), the mobile phone may display the user interface 16 exemplarily shown in FIG. 3L. The user interface 16 is used to display the page corresponding to the Discover category in the gallery application. In FIG. 3L, the control 3025 is displayed in bold.
The user interface 16 may include a control 312, where the control 312 is used to enter the display page of the picture materials and/or video materials stored in the mobile phone.
In this embodiment of this application, the user interface 16 may be implemented in multiple specific manners. For ease of description, in FIG. 3L, the user interface 16 is divided into five parts.
The first part includes a search box that provides the user with a way to search for picture materials and/or video materials by keywords such as photos, people, and places.
The second part includes a control for entering the creation of a new video using a template (illustrated in FIG. 3L by the text "template creation" and an icon), the control 312, and a control for entering the creation of a new video using a collage (illustrated in FIG. 3L by text and an icon).
The third part displays pictures grouped by portrait. The title of the third part is illustrated in FIG. 3L by the text "Portraits" and the text "More".
The fourth part includes pictures and/or videos grouped by place, such as the pictures and/or videos shown in FIG. 3L whose place is "Shenzhen", whose place is "Guilin", and whose place is "Chengdu". The title of the fourth part is illustrated in FIG. 3L by the text "Places" and the text "More".
The fifth part displays the control 3022, the control 3023, the control 3024, and the control 3025.
In addition, the title of the user interface 16 is illustrated in FIG. 3L by the text "Discover". The user interface 16 may further include a control for editing the user interface 16, for example for adding a new group to or deleting an existing group from the user interface 16 (illustrated in FIG. 3L by three black dots).
After detecting that the user performs an operation such as clicking the control 312 in the user interface 16 shown in FIG. 3L, the mobile phone may display the user interface 17 exemplarily shown in FIG. 3M. The user interface 17 is used to display the picture materials and/or video materials from which a new video can be generated in a free-creation manner.
In this embodiment of this application, the user interface 17 may be implemented in multiple specific manners. For ease of description, in FIG. 3M, the user interface 17 includes a display area 313 and a window 314 overlying the display area 313.
The display area 313 includes picture materials and/or video materials. The upper left corner of each picture material/video material displays a control for displaying that picture material/video material enlarged (illustrated in FIG. 3M by two diagonal arrows pointing in opposite directions), and the lower right corner of each picture material/video material displays a control for selecting that picture material/video material (illustrated in FIG. 3M by a rounded rectangular box).
This embodiment of this application does not limit parameters such as the number of picture materials displayed in the display area 313, the display area of the picture materials, the display positions of the picture materials, the display content of the video materials, the number of video materials displayed, the display area of the video materials, the display positions of the video materials, and the order of the materials of each type. For ease of description, in FIG. 3M, the display area 313 displays: a video material 3031, a picture material 3032, a picture material 3033, a video material 3034, a picture material 3035, a picture material 3036, a picture material 3037, and a video material 3038. For details, refer to the description in Mode 1, which is not repeated here.
The window 314 may include: a control 3141 (illustrated in FIG. 3M by the icon "0/50", where "0" indicates that no picture material/video material is selected, and "50" indicates that there are 50 picture materials/video materials in the mobile phone), where the control 3141 is used to indicate the total number of picture materials/video materials stored in the mobile phone and the number of picture materials/video materials currently selected by the user; a control 3142, where the control 3142 is used to enter the display interface for starting to make a new video; and a preview area 3143, where the preview area 3143 is used to display the picture materials and/or video materials selected by the user.
After detecting that the user performs an operation of selecting picture materials/video materials in the display area 313 shown in FIG. 3M, the mobile phone may display the display changes, caused by the user operation, in the user interface 17 exemplarily shown in FIG. 3N.
For the pictures and/or videos not selected by the user (FIG. 3N uses the picture materials/video materials other than the video material 3031, the picture material 3032, the picture material 3033, the video material 3034, the picture material 3035, the picture material 3036, the picture material 3037, and the video material 3038 as an example), the other picture materials/video materials in the display area of the user interface 17 keep the same display.
For the picture materials and/or video materials selected by the user (FIG. 3N uses the video material 3031, the picture material 3032, the picture material 3033, the video material 3034, the picture material 3035, the picture material 3036, the picture material 3037, and the video material 3038 as an example), the control for selecting each picture material/video material, located in its lower right corner in the display area 313 of the user interface 17, changes its display (illustrated in FIG. 3N by adding a check mark in the rounded rectangular box).
The control 3141 in the user interface 17 changes to show the number of picture materials/video materials selected by the user (illustrated in FIG. 3N by the icon "8/50", where "8" indicates that the user has selected eight picture materials/video materials, and "50" indicates that there are 50 picture materials/video materials in the mobile phone from which a new video can be generated in a free-creation manner).
The preview area 3143 in the user interface 17 changes to show the selected picture materials/video materials (illustrated in FIG. 3N by displaying the video material 3031, the picture material 3032, the picture material 3033, and the video material 3034, with the picture material 3035, the picture material 3036, the picture material 3037, and the video material 3038 shown by dragging the slider).
After detecting that the user performs an operation of generating a new video in the user interface 17 shown in FIG. 3N (such as clicking the control 3142 in the user interface 17), the mobile phone may determine, based on the video material 3031, the picture material 3032, the picture material 3033, the video material 3034, the picture material 3035, the picture material 3036, the picture material 3037, and the video material 3038 selected by the user, that the type of the video template is the parent-child type, and may therefore generate a video from the selected materials based on the parent-child video template and display the user interface 15 exemplarily shown in FIG. 3F. For a specific implementation of the generated video mentioned here, refer to the description of the generated video in Mode 1.
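How the mobile phone maps the selected materials to a template type is not detailed here; one minimal sketch, assuming each material carries a scene label from an on-device classifier and that a simple majority vote picks the template type, is:

```python
from collections import Counter

def infer_template_type(material_labels):
    """Pick a video-template type by majority vote over the scene labels
    of the user-selected materials (hypothetical selection rule)."""
    most_common_label, _ = Counter(material_labels).most_common(1)[0]
    return most_common_label

# Eight selected materials, most labeled as parent-child scenes, so the
# parent-child video template would be chosen.
labels = ["parent-child"] * 5 + ["scenery"] * 2 + ["food"]
assert infer_template_type(labels) == "parent-child"
```

The label names and the voting rule are illustrative assumptions; the patent does not specify how the template type is determined.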
In summary, the electronic device can automatically generate a video based on the picture materials and/or video materials selected by the user in the gallery application.
For a specific implementation of the user interface 15, refer to the foregoing description, which is not repeated here. Therefore, the electronic device can display the generated video to the user through the preview area 306.
In addition, the user interface 15 is further used to display controls for editing the generated video. Therefore, the electronic device can provide the user with a function of manually editing a generated video, making it convenient for the user to configure parameters such as the duration, frame, video template, included materials, and filters of the video as desired, which enriches the styles of the video. In addition, after detecting that the user performs an operation such as clicking the control 309 in the user interface 15 shown in FIG. 3F, the mobile phone may save the video.
Mode 3
Refer to FIG. 3A-FIG. 3B, FIG. 3O-FIG. 3Q, FIG. 3T, and FIG. 3F. FIG. 3O-FIG. 3Q and FIG. 3T are schematic diagrams of human-computer interaction interfaces according to an embodiment of this application.
In some embodiments, after detecting that the user performs an operation such as clicking the control 3024 in the user interface 12 shown in FIG. 3O, the mobile phone may display the user interface 18 exemplarily shown in FIG. 3P. The user interface 18 is used to display the page corresponding to the Moments category in the gallery application. In FIG. 3P, the control 3023 changes from bold display to normal display, and the control 3024 changes from normal display to bold display.
In other embodiments, after detecting an operation, indicated by the user, of opening the gallery application (such as clicking the icon 301 of the gallery application), the mobile phone may display the user interface 18 exemplarily shown in FIG. 3P. The user interface 18 is used to display the page corresponding to the Moments category in the gallery application. In FIG. 3P, the control 3024 is displayed in bold.
The user interface 18 may include a control 3151, where the control 3151 is used to enter the display page for creating a new video in the manner provided by this embodiment of this application.
In this embodiment of this application, the user interface 18 may be implemented in multiple specific manners. For ease of description, in FIG. 3P, the user interface 18 is divided into three parts.
The first part includes a search box that provides the user with a way to search for picture materials and/or video materials by keywords such as photos, people, and places.
The second part includes: a control 3152 (illustrated in FIG. 3P by the text "Weekend Moments", the date "September 2020", and one picture material), where the control 3152 is used to display a video 1 generated from the picture materials and/or video materials of a period of time in the mobile phone; a control 3153 used to display a video 2 generated from the picture materials and/or video materials of a period of time in the mobile phone (illustrated in FIG. 3P by the text "Weekend Moments", the date "May 2020", and one picture material); and a control 3154 used to display a video 3 generated from the picture materials and/or video materials of a period of time in the mobile phone (illustrated in FIG. 3P by the text "Weekend Moments", the date "April 2020", and one picture material). It should be noted that the picture materials/video materials in the video 1, the video 2, and the video 3 may or may not overlap, which is not limited in this embodiment of this application.
It should be noted that the video 1, the video 2, and the video 3 are all generated by the electronic device according to the solution provided in this application.
The third part displays the control 3022, the control 3023, the control 3024, and the control 3025.
The title of the user interface 18 is illustrated in FIG. 3P by the text "Moments".
In some embodiments, after detecting that the user performs an operation such as clicking the control 3151 in the user interface 18 shown in FIG. 3P, the mobile phone may display, on the user interface 18, the window 316 exemplarily shown in FIG. 3Q. The window 316 is used to display the picture materials and/or video materials from which a movie or a collage can be generated.
After detecting that the user performs an operation such as clicking the text "Create movie" in the window 316 shown in FIG. 3Q, the mobile phone may display the user interface 17 exemplarily shown in FIG. 3M. For a specific implementation of the user interface 17, refer to the foregoing description, which is not repeated here.
After detecting that the user performs an operation of selecting picture materials/video materials in the display area 313 shown in FIG. 3M, the mobile phone may display the display changes, caused by the user operation, in the user interface 17 exemplarily shown in FIG. 3N. For a specific implementation of the display changes of the user interface 17, refer to the foregoing description, which is not repeated here.
After detecting that the user performs an operation of generating a new video in the user interface 17 shown in FIG. 3N (such as clicking the control 3142 in the user interface 17), the mobile phone may determine, based on the video material 3031, the picture material 3032, the picture material 3033, the video material 3034, the picture material 3035, the picture material 3036, the picture material 3037, and the video material 3038 selected by the user, that the type of the video template is the parent-child type, and may therefore generate a video from the selected materials based on the parent-child video template and display the user interface 15 exemplarily shown in FIG. 3F. For a specific implementation of the generated video mentioned here, refer to the description of the generated video in Mode 1.
In other embodiments, after detecting that the user performs an operation such as clicking the control 3152 in the user interface 18 shown in FIG. 3P, the mobile phone may display the user interface 19 exemplarily shown in FIG. 3T.
The user interface 19 may include a control 317, where the control 317 is used to enter the interface on which the video 1 can be played. The video 1 mentioned here is the video generated based on the solution of this application.
After detecting that the user performs an operation such as clicking the control 317 in the user interface 19 shown in FIG. 3T, the mobile phone may display the user interface 15 exemplarily shown in FIG. 3F.
In summary, the electronic device can automatically generate a video based on the picture materials and/or video materials selected by the user in the gallery application.
For a specific implementation of the user interface 15, refer to the foregoing description, which is not repeated here. Therefore, the electronic device can display the generated video to the user through the preview area 306.
In addition, the user interface 15 is further used to display controls for editing the generated video. Therefore, the electronic device can provide the user with a function of manually editing a generated video, making it convenient for the user to configure parameters such as the duration, frame, video template, included materials, and filters of the video as desired, which enriches the styles of the video.
In addition, after detecting that the user performs an operation such as clicking the control 309 in the user interface 15 shown in FIG. 3F, the mobile phone may save the video.
It should be noted that parameters such as the control sizes, control positions, display content, and jump manners of the user interfaces mentioned in Mode 1, Mode 2, and Mode 3 include but are not limited to the foregoing description.
Based on the descriptions of Mode 1, Mode 2, and Mode 3, the mobile phone may save the generated video in the gallery application.
Refer to FIG. 3A and FIG. 3R-FIG. 3S. FIG. 3R-FIG. 3S are schematic diagrams of human-computer interaction interfaces according to an embodiment of this application.
After detecting that the user performs an operation of opening the gallery application in the user interface 11 shown in FIG. 3A (such as clicking the icon 301 of the gallery application), the mobile phone may display the user interface 12' exemplarily shown in FIG. 3R. The user interface 12' is used to display the Albums page of the gallery application.
The interface layout of the user interface 12' is basically the same as that of the user interface 12 shown in FIG. 3B. For a specific implementation, refer to the description of the user interface 12 shown in FIG. 3B in Mode 1, which is not repeated here. Unlike the user interface 12 shown in FIG. 3B, the number of videos stored in the user interface 12' is increased by one. Therefore, the user interface 12' in FIG. 3R shows that the number of all photos has increased from "182" to "183", and the number of videos has increased from "49" to "50".
After detecting that the user performs an operation such as clicking the control 3021 in the user interface 12' shown in FIG. 3R, the mobile phone may display the user interface 13' exemplarily shown in FIG. 3S. The user interface 13' is the display interface of the pictures and videos in the mobile phone.
The interface layout of the user interface 13' is basically the same as that of the user interface 13 shown in FIG. 3C. For a specific implementation, refer to the description of the user interface 13 in FIG. 3C in Mode 1, which is not repeated here. Unlike the user interface 13 shown in FIG. 3C, the pictures/videos stored in the user interface 13' all move back by one position. Therefore, in time order from nearest to farthest from the current moment, the first material displayed in the user interface 13' in FIG. 3S is the newly generated video 3039.
After detecting that the user performs an operation such as clicking the video 3039 in the user interface 13' shown in FIG. 3S, the mobile phone may play the video 3039.
In this embodiment of this application, each video template may correspond to a piece of music. Usually, the music corresponding to different video templates is different. The electronic device may by default keep the music corresponding to each video template unchanged, or may change the music corresponding to each video template based on a user selection, so that the electronic device can make flexible settings according to the actual situation. The music may be preset on the electronic device, or may be manually added by the user, which is not limited in this embodiment of this application.
On the one hand, the video template is also related to camera movement, speed, and transitions. Usually, regardless of whether the music corresponding to the video templates is the same, different video templates differ in at least one of the corresponding camera movement, speed, and transitions.
For the music corresponding to any video template, each segment of the music can be matched with a preset camera movement, speed, and transition. The camera movement and the transition may be related to the type of the material: the camera movement applied to a video material may be the same as or different from that applied to a picture material, and the transition applied to a video material may be the same as or different from that applied to a picture material. In addition, a playback effect corresponding to a speed can usually be set for a video material.
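The matching of music segments to preset effects described above can be pictured as a lookup table keyed by time and material type. The following Python sketch is illustrative only; the segment boundaries, effect names, and field layout are assumptions, not values from the patent:

```python
# Each music segment of a template is pre-matched with a camera movement
# (possibly different for pictures and videos), a speed, and a transition.
MUSIC_SEGMENTS = [
    {"start": 0.0, "end": 4.0, "picture_movement": "pan_diagonal",
     "video_movement": "none", "speed": 1.0, "transition": "blur_overlay"},
    {"start": 4.0, "end": 6.0, "picture_movement": "zoom_in",
     "video_movement": "none", "speed": 3.0, "transition": "cut"},
]

def effects_for(material_type, t):
    """Look up the effects applied to a material shown at time t (seconds)."""
    for seg in MUSIC_SEGMENTS:
        if seg["start"] <= t < seg["end"]:
            key = "picture_movement" if material_type == "picture" else "video_movement"
            return seg[key], seg["speed"], seg["transition"]
    raise ValueError("t is outside the music timeline")

assert effects_for("picture", 1.0) == ("pan_diagonal", 1.0, "blur_overlay")
assert effects_for("video", 5.0) == ("none", 3.0, "cut")
```

A real template would carry one such table per piece of music, so that swapping the music also swaps the pre-matched effects.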
Refer to FIG. 4A-FIG. 4J, which show schematic diagrams of the effect of applying camera movement to the picture material 3033.
The mobile phone stores the picture material 3033 exemplarily shown in FIG. 4A. For the picture material 3033, refer to the description of the embodiment in FIG. 3C, which is not repeated here.
When the mobile phone displays the picture material 3033 using a camera movement effect of moving along the diagonal, the mobile phone may change from displaying the interface 11 exemplarily shown in FIG. 4B to displaying the interface 12 exemplarily shown in FIG. 4C, where the interface 11 shows a region a1 of the picture material 3033, the interface 12 shows a region a2 of the picture material 3033, and the region a1 and the region a2 are located at different positions of the picture material 3033.
In addition to the camera movement effect of moving along the diagonal, the electronic device may also use camera movement effects of moving upward, downward, leftward, rightward, and the like, which is not limited in this embodiment of this application.
When the mobile phone displays the picture material 3033 using a zoom-in camera movement effect, the mobile phone may change from displaying the interface 11 exemplarily shown in FIG. 4B to displaying the interface 13 exemplarily shown in FIG. 4D, where the interface 11 shows the region a1 of the picture material 3033, and the interface 13 shows an enlarged view of a region a3 of the picture material 3033.
In addition to the zoom-in camera movement effect, the electronic device may also use a zoom-out camera movement effect, which is not limited in this embodiment of this application.
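Both the diagonal pan and the zoom described above can be modeled as interpolating a crop rectangle over the picture between a start region and an end region: panning keeps the rectangle size fixed while zooming shrinks or grows it. The coordinates below and the use of linear interpolation are assumptions for illustration:

```python
def lerp_rect(r0, r1, u):
    """Linearly interpolate between two crop rectangles (x, y, w, h), u in [0, 1]."""
    return tuple(a + (b - a) * u for a, b in zip(r0, r1))

# Diagonal pan: a same-size window moves from region a1 toward region a2.
a1 = (0, 0, 800, 450)
a2 = (400, 300, 800, 450)
assert lerp_rect(a1, a2, 0.5) == (200.0, 150.0, 800.0, 450.0)

# Zoom-in: the window shrinks from the full picture toward region a3,
# so region a3 appears progressively enlarged on screen.
full = (0, 0, 1600, 900)
a3 = (600, 300, 400, 225)
assert lerp_rect(full, a3, 0.5) == (300.0, 150.0, 1000.0, 562.5)
```

Rendering each interpolated rectangle scaled to the output frame yields the pan or zoom playback effect; a zoom-out is simply the same interpolation run in the opposite direction.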
In addition, when the picture material 20 is a vertical (portrait) picture exemplarily shown in FIG. 4E and the generated video uses a horizontal (landscape) frame, the electronic device may display the picture material 20 using a camera movement effect of moving from top to bottom. For example, the mobile phone may change from displaying the interface 21 exemplarily shown in FIG. 4F to displaying the interface 22 exemplarily shown in FIG. 4G, where the interface 21 shows a region b1 of the picture material 20, the interface 22 shows a region b2 of the picture material 20, and the region b1 and the region b2 are located at different positions of the picture material 20. Optionally, the shape of the region formed by the region b1 and the region b2 may be set to a square. If the picture material 20 contains people, faces, and the like, the electronic device makes the region formed by the region b1 and the region b2 contain as much as possible of the regions corresponding to the people and faces in the material.
When the picture material 30 is a horizontal (landscape) picture exemplarily shown in FIG. 4H and the generated video uses a vertical (portrait) frame, the electronic device may display the picture material 30 using a camera movement effect of moving from left to right. For example, the mobile phone may change from displaying the interface 31 exemplarily shown in FIG. 4I to displaying the interface 32 exemplarily shown in FIG. 4J, where the interface 31 shows a region c1 of the picture material 30, the interface 32 shows a region c2 of the picture material 30, and the region c1 and the region c2 are located at different positions of the picture material 30. Optionally, the shape of the region formed by the region c1 and the region c2 may be set to a square. If the picture material 30 contains people, faces, and the like, the electronic device makes the region formed by the region c1 and the region c2 contain as much as possible of the regions corresponding to the people and faces in the material.
In this way, the video generated by the electronic device can display the materials to the greatest extent, which enriches the content of the video and ensures the cinematic feel and visual appeal brought by the video.
Referring to FIG. 5, FIG. 5 shows a schematic diagram of the effect of playing video material 3038 at different speeds. For video material 3038, refer to the description of the embodiment in FIG. 3C; details are not repeated here.
As shown in FIG. 5, suppose the video generated by the electronic device based on the materials plays video material 3038 both in the period t0 to t1 and in the period t2 to t3, and the period t2 to t3 is three times as long as the period t0 to t1. Then the speed at which the electronic device plays video material 3038 in the period t0 to t1 is three times the speed at which it plays video material 3038 in the period t2 to t3.
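The relationship above can be sketched numerically. The helper below is illustrative only (the function name and time values are assumptions, not from this application): the same clip shown in a shorter slot must play proportionally faster.

```python
def speed_ratio(slot_a: float, slot_b: float) -> float:
    """Relative playback speed of the same clip shown in two slots of
    different lengths: the clip runs slot_b / slot_a times faster in
    slot_a than in slot_b. Illustrative helper; durations in seconds."""
    return slot_b / slot_a

# The period t2..t3 is three times as long as t0..t1, so the clip plays
# three times faster during t0..t1 than during t2..t3.
t0, t1, t2, t3 = 0.0, 1.0, 2.0, 5.0
print(speed_ratio(t1 - t0, t3 - t2))
```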
It should be noted that, in addition to triple speed, the playback speed may be set to any ratio; the embodiments of this application do not limit this.
Referring to FIG. 6, FIG. 6 shows a schematic diagram of the effect of the transition between picture material 3033 and picture material 3032. For picture material 3033 and picture material 3032, refer to the description of the embodiment in FIG. 3C; details are not repeated here.
As shown in FIG. 6, suppose the video generated by the electronic device based on the materials plays picture material 3033 in the period t4 to t5 and picture material 3032 in the period t6 to t7, and in the period t5 to t6 picture material 3033 transitions to picture material 3032 with a superimposed-blur transition effect. Then the electronic device plays picture material 3033 in the period t4 to t5, plays a gradually enlarging picture material 3032 superimposed on the blurred picture material 3033 in the period t5 to t6, and plays picture material 3032 in the period t6 to t7.
It should be noted that, in addition to the "superimposed blur" effect, the transition may also use effects such as focus blur; the embodiments of this application do not limit this.
Thus, the electronic device can realize mise-en-scène and shot scheduling of the materials according to the configured camera movement, speed, and transitions.
On the other hand, the video template is related to scene types. Generally, regardless of whether the music corresponding to the video templates is the same, video templates of different types correspond to different scene types, while video templates of the same type correspond to the same scene types.
When the user keeps the default music corresponding to the video template, the electronic device does not need to adjust the duration of each segment of the video template, and the video is synchronized to the music's beats as-is. When the user selects other music for the video template, the electronic device needs to perform beat detection on the selected music to obtain its tempo, then check whether each segment of the video template lasts a whole number of beats at that tempo, and adjust the duration of any segment that does not, so that every segment in the video template lasts an integer number of beats.
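The duration adjustment described above can be sketched as snapping each segment to the nearest whole number of beats. This is an illustrative sketch, not the application's implementation; the function name and example durations are assumptions.

```python
def snap_to_beats(segment_durations, bpm):
    """Snap each template segment's duration (seconds) to the nearest
    positive integer multiple of the beat period at the given tempo,
    so every segment ends on a beat."""
    beat = 60.0 / bpm  # beat period in seconds
    return [max(1, round(d / beat)) * beat for d in segment_durations]

# At 125 BPM the beat period is 0.48 s: 1.92 s is already 4 beats,
# 1.0 s snaps to 2 beats (0.96 s), and 0.5 s snaps to 1 beat (0.48 s).
print(snap_to_beats([1.92, 1.0, 0.5], 125))
```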
For any piece of music, the embodiments of this application may use a BPM (beats per minute) detection method to detect the music's beats and obtain its tempo (bpm), where the electronic device analyzes the audio using digital signal processing (DSP) to obtain the music's beat points. A typical algorithm splits the original audio into several segments, obtains the spectrum of each segment by fast Fourier transform, and finally performs filtering analysis based on the sound energy to obtain the beat points of the music.
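The energy-based pipeline just described (frame the audio, take the FFT, analyze spectral energy) can be sketched as follows. This is a deliberately simplified, illustrative detector, not the application's implementation: it builds an onset-energy envelope from short-time FFT magnitudes and reads the tempo off the envelope's autocorrelation peak, tested here on a synthetic click track.

```python
import numpy as np

def estimate_bpm(samples, sr, frame=1024, hop=512):
    """Rough tempo estimate: short-time FFT energy -> onset envelope ->
    autocorrelation peak within a plausible tempo range (40-240 BPM)."""
    n_frames = 1 + (len(samples) - frame) // hop
    window = np.hanning(frame)
    energy = np.empty(n_frames)
    for i in range(n_frames):
        chunk = samples[i * hop : i * hop + frame] * window
        energy[i] = np.abs(np.fft.rfft(chunk)).sum()
    onset = np.maximum(0.0, np.diff(energy))   # keep energy increases only
    onset -= onset.mean()
    ac = np.correlate(onset, onset, mode="full")[onset.size - 1:]
    fps = sr / hop                             # onset frames per second
    lo, hi = int(fps * 60 / 240), int(fps * 60 / 40)
    lag = lo + int(np.argmax(ac[lo:hi]))       # best beat period in frames
    return 60.0 * fps / lag

# Synthetic 120 BPM click track: one 200-sample click every half second.
sr = 20480
audio = np.zeros(sr * 10)
for k in range(20):
    audio[k * sr // 2 : k * sr // 2 + 200] = 1.0
print(round(estimate_bpm(audio, sr)))
```

A production detector would refine this considerably (spectral flux per band, peak picking, tempo octave disambiguation), but the FFT-plus-energy skeleton matches the description above.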
It should be noted that the scene type corresponding to each segment in each video template is set in advance based on practical experience (for example, the scene type of a single segment at certain positions, and the respective scene types of multiple consecutive segments, are ones that users perceive strongly).
For the music corresponding to any video template, the embodiments of this application may use the music's beat points as dividing lines, divide the whole piece into multiple segments, and match each segment with a preset scene type.
Each segment lasts an integer number of beats at the music's tempo, so that each segment is aligned to the music's beats. It can be understood that the musical beat, that is, the meter, refers to the alternation pattern of strong and weak beats, and specifically to the total length of the notes in each measure of the score; these notes may be, for example, half notes, quarter notes, or eighth notes. Usually a piece of music is composed of multiple beats, and the meter of a piece of music is usually fixed.
It can be understood that the selection of materials is random, and in practice the materials may well fail to satisfy the scene type set for every segment. Therefore, when this problem arises, the electronic device can adjust the arrangement order of the materials in various ways.
In some embodiments, the electronic device may set a priority for each segment. High-priority segments may include, but are not limited to, the beginning, chorus, ending, or accented parts of the music. The electronic device can then preferentially satisfy the scene types set for the high-priority segments, first placing materials whose scene types match the high-priority segments into those segments, and then placing the remaining materials into the remaining segments according to the scene types set for them; at this point, the scene types of the remaining materials may or may not match the scene types set for the remaining segments.
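A minimal sketch of this priority-first placement follows. The data shapes and names are assumptions for illustration; the application does not prescribe them.

```python
def place_by_priority(segments, materials):
    """Fill high-priority segments with exact scene-type matches first,
    then hand out whatever materials remain to the other segments.
    `segments`: list of (priority, scene_type); `materials`: list of
    (name, scene_type). Returns material names in segment order."""
    order = sorted(range(len(segments)), key=lambda i: -segments[i][0])
    remaining = list(materials)
    placement = [None] * len(segments)
    for i in order:
        wanted = segments[i][1]
        # Prefer an exact scene-type match; fall back to any leftover,
        # whose type may then differ from the segment's set type.
        pick = next((m for m in remaining if m[1] == wanted),
                    remaining[0] if remaining else None)
        if pick is not None:
            remaining.remove(pick)
            placement[i] = pick[0]
    return placement

segs = [(2, "C"), (0, "B"), (1, "A")]          # (priority, scene type)
mats = [("m1", "B"), ("m2", "A"), ("m3", "C")]
print(place_by_priority(segs, mats))
```

Here the priority-2 segment claims the only C material first, so every segment still ends up with an exact match even though they are filled out of playback order.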
In other embodiments, the electronic device may preferentially satisfy the scene types set for the segments nearer the front, first placing materials whose scene types match the front segments into those segments, and then placing the remaining materials into the remaining segments according to the scene types set for them; at this point, the scene types of the remaining materials may or may not match the scene types set for the remaining segments.
The electronic device may likewise preferentially satisfy the scene type set for the foremost of the remaining segments.
For the music corresponding to any video template, the embodiments of this application may use the music's beat points as dividing lines to divide the whole piece into multiple segments, match multiple consecutive segments with preset scene types, and leave the scene types of the remaining segments unrestricted. This enhances the shot-like and cinematic feel of the generated video. The multiple consecutive segments may be, for example, the beginning, ending, or chorus of the music.
Taking as an example the division of scene types into the close shot, medium shot, and long shot exemplarily shown in FIG. 7, the scene types corresponding to multiple consecutive segments are introduced below. Here, A denotes the close-shot scene type, B denotes the medium-shot scene type, and C denotes the long-shot scene type.
For example, in the embodiments of this application, the scene types of five consecutive segments at the beginning and/or end of the music may be set to CCCBA, giving the generated video a suspenseful effect at the beginning or a "to be continued" effect at the end.
For another example, the scene types of four consecutive segments at the beginning and/or end of the music may be set to ABBC, so that the beginning or end of the generated video prepares for the video's narration to unfold.
For another example, the scene types of five consecutive segments after the beginning and/or before the end of the music may be set to BBBBB, giving the generated video a narrative-unfolding effect in the corresponding segments.
For another example, the scene types of five consecutive segments corresponding to the chorus of the music may be set to CCCCA, so that the generated video raises its narration to a climax at the chorus.
It should be noted that the embodiments of this application include, but are not limited to, the above specific implementations of the scene types corresponding to multiple consecutive segments of the music.
Thus, the electronic device can adjust the arrangement order of the materials according to the scene types of the segments set at the music's beat points.
In summary, the electronic device arranges the materials in the scene-type order set in the video template and, following the camera movement, speed, and transitions set in the video template, adds a sense of scene and of shots to the materials, generating a video with the playback effect corresponding to the video template. This makes the generated video more expressive and taut in narrating the plot, expressing the characters' thoughts and feelings, and handling the relationships between characters, thereby enhancing the artistic appeal of the generated video.
For ease of description, the specific implementation of video templates is introduced below with reference to Table 1 and Table 2, taking a parent-child video template and a travel video template as examples. In Table 1 and Table 2, the scene types are the close shot, medium shot, and long shot exemplarily shown in FIG. 7; for ease of expression, A denotes the close-shot scene type, B denotes the medium-shot scene type, and C denotes the long-shot scene type.
Table 1 Parent-child video template
In Table 1, at the beginning of the video, for video materials the transition uses the "white fade-in" effect together with the "opening-title fade in/out" effect; for picture materials, the transition uses the "white fade-in" effect.
At time 6x, for video materials the transition uses the "quick downward shift" effect; for picture materials, the "vertical-blur diagonal push" effect.
At time 14x, for video materials the transition uses the "stretch in" effect; for picture materials, the "horizontal-blur push" effect.
At time 22x, for video materials the transition uses the "quick up" effect; for picture materials, the "push up with focus blur / zoom behind" effect.
At time 32x, for video materials the transition uses the "high-speed left" effect; for picture materials, the "rightward axis-rotation blur" effect.
At time 34x, for video materials the transition uses the "right rotation" effect; for picture materials, the "leftward axis-rotation blur" effect.
At time 36x, for video materials the transition uses the "quick left slide" effect; for picture materials, the "perspective blur" effect.
At time 38x, for video materials the transition uses the "blur dissolve" effect; for picture materials, no transition effect is used.
At time 40x, for video materials the transition uses the "high-speed left" effect; for picture materials, no transition effect is used.
At time 42x, for video materials the transition uses the "fade to white" effect together with the "right rotation" effect; for picture materials, the "fade to white" effect.
At time 44x, for video materials the transition uses the "quick left slide" effect; for picture materials, no transition effect is used.
At time 46x, for video materials the transition uses the "blur dissolve" effect; for picture materials, no transition effect is used.
At time 48x, for video materials the transition uses the "fade to white" effect; for picture materials, the "fade to white" effect.
At time 49x, for video materials the transition uses the "left rotation" effect; for picture materials, the "perspective blur" effect.
At time 52x, for video materials the transition uses the "left rotation" effect; for picture materials, no transition effect is used.
At time 56x, for video materials the transition uses the "quick left" effect; for picture materials, no transition effect is used.
At time 60x, for video materials the transition uses the "quick left" effect; for picture materials, no transition effect is used.
At time 62x, for video materials the transition uses the "stretch in" effect; for picture materials, no transition effect is used.
Table 2 Travel video template
For the specific implementation of the transitions in Table 2, refer to the description of the transitions in Table 1; details are not repeated here.
It should be noted that the video template includes, but is not limited to, parameters related to scene type, camera movement, speed, and transitions.
In addition, based on the frame orientation of the materials, the video template can adaptively adjust how the video's shots move so as to achieve the best playback effect. For example, when generating a landscape video, the electronic device can move the shot from top to bottom to display the largest possible region of portrait-oriented material; when generating a portrait video, the electronic device can move the shot from left to right to display the largest possible region of landscape-oriented material. This helps the video present the materials to the fullest, enriching the video's content and preserving its cinematic quality and visual impact.
In the embodiments of this application, each scene type in the video template corresponds to one segment, and the durations of the segments may be the same or different. The electronic device may first place the user-selected materials based on the duration of each segment in the video template; generally, video materials take precedence over picture materials for the longer segments. The electronic device then adjusts the arrangement order of the placed materials based on the scene type corresponding to each segment, so that the scene types of the materials match those of the segments, thereby ensuring both that every user-selected material appears at least once in the generated video and that adjacent segments do not hold the same material.
It should be noted that the embodiments of this application are not limited to the above implementation for adjusting the arrangement order of the materials in the video.
In addition, when the user selects a large number of materials and the video duration is set short, the durations of the segments corresponding to the scene types can be set small, so that all of the materials can appear in the video once. When the user selects few materials and the video duration is set long, the electronic device can select one or more clips from the video materials to repeat N times in the generated video, where N is a positive integer greater than 1. If the required duration still cannot be reached, the electronic device can repeat the entire arranged set of materials M times in the generated video, where M is a positive integer greater than 1.
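The repeat-M-times rule above can be sketched as a ceiling division over the arranged sequence's total length. This is an illustrative simplification; which clips are repeated, and the choice of M, are left to the device in the text.

```python
import math

def repeat_to_fill(clip_durations, target):
    """Repeat the whole arranged sequence M times (M >= 1) until its
    total duration covers the requested video length. Durations in
    seconds; names are illustrative."""
    total = sum(clip_durations)
    m = max(1, math.ceil(target / total))
    return clip_durations * m

# Three seconds of arranged material against a seven-second request:
# the sequence is repeated M = 3 times, covering nine seconds.
print(repeat_to_fill([1.0, 2.0], 7.0))
```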
The electronic device can usually set a minimum and a maximum duration for the music corresponding to the video template, thereby ensuring that every user-selected material appears at least once in the generated video.
Based on the foregoing description, the playback effect of video 3039 in FIG. 3S is related to the video template; generally, different video templates yield different playback effects for video 3039. When the user selects video material 3031, picture material 3032, picture material 3033, video material 3034, picture material 3035, picture material 3036, picture material 3037, and video material 3038, video 3039 may include: video material 3031, picture material 3032, picture material 3033, video material 3034, picture material 3035, picture material 3036, picture material 3037, and video material 3038.
Again taking the division of scene types into the close shot, medium shot, and long shot exemplarily shown in FIG. 7 as an example, the scene types corresponding to multiple consecutive segments are described below, where A denotes the close-shot scene type, B the medium-shot scene type, and C the long-shot scene type.
In the embodiments of this application, the electronic device can identify that the scene types corresponding to video material 3031 are BCBBB, the scene type of picture material 3032 is B, the scene type of picture material 3033 is B, the scene types corresponding to video material 3034 are CCCC, the scene type of picture material 3035 is B, the scene type of picture material 3036 is A, the scene type of picture material 3037 is A, and the scene types corresponding to video material 3038 are BCCCC.
Based on video material 3031, picture material 3032, picture material 3033, video material 3034, picture material 3035, picture material 3036, picture material 3037, and video material 3038, the electronic device identifies that a parent-child video template can be used to generate video 3039.
In some embodiments, if the parent-child video template shown in Table 1 is used, then based on the respective scene types of video material 3031, picture material 3032, picture material 3033, video material 3034, picture material 3035, picture material 3036, picture material 3037, and video material 3038, and following the scene type that Table 1 gives for each segment of the music, the electronic device places video material 3031, picture material 3032, picture material 3033, video material 3034, picture material 3035, picture material 3036, picture material 3037, and video material 3038 at the corresponding positions of the music, obtaining video 3039.
In other embodiments, the electronic device can enhance the playback effect of the generated video according to the scene types of multiple consecutive segments of the music already set in the video template, which helps improve the video's shot-like and cinematic feel.
It should be noted that, in addition to the above two ways, the electronic device can set the scene types in the video template according to actual conditions and empirical values; the embodiments of this application do not limit how the scene types in the video template are set.
Below, with reference to FIG. 8A to FIG. 8E, the playback effect of videos generated by the electronic device based on user-selected materials is illustrated by example.
Referring to FIG. 8A to FIG. 8E, FIG. 8A to FIG. 8E show schematic diagrams of the playback order of the materials when the electronic device plays the generated videos.
As shown in FIG. 8A, when the user selects picture material 11, picture material 12, picture material 13, picture material 14, and picture material 15, the electronic device determines that the scene types in the video template are CCCBA, with durations of 4x, 2x, 2x, x, and 2x respectively, where x = 0.48 seconds, and that the scene type of picture material 11 is B, the scene type of picture material 12 is B, the scene type of picture material 13 is C, the scene type of picture material 14 is A, and the scene type of picture material 15 is C.
Then, based on the scene types in the video template and the respective scene types of picture materials 11 through 15, the electronic device learns that the materials lack one scene type C of duration 2x, so the materials cannot exactly match the scene types in the video template. Since all materials must appear at least once, the electronic device can change the scene types CCCBA in the video template to CBCBA.
Thus, based on the scene types CBCBA, the electronic device adjusts the arrangement order of picture material 11, picture material 12, picture material 13, picture material 14, and picture material 15 to generate the video exemplarily shown in FIG. 8A.
In FIG. 8A, the playback order of picture materials 11, 12, 13, 14, and 15 in the generated video is:
From 0 to 4x: picture material 13;
From 4x to 6x: picture material 11;
From 6x to 8x: picture material 15;
From 8x to 9x: picture material 12;
From 9x to 11x: picture material 14.
Accordingly, the scene types corresponding to the video exemplarily shown in FIG. 8A are CBCBA.
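The CCCBA-to-CBCBA adjustment above can be sketched as a counting problem: find which requested scene types the materials cannot supply, and substitute surplus types at those slots so every material still appears exactly once. Exactly which occurrence gets substituted is a policy detail the text leaves open (the figure downgrades the second C; this illustrative sketch downgrades the first unmet one), and all names are assumptions.

```python
from collections import Counter

def relax_template(template, material_types):
    """Rewrite the template's scene-type string so that its per-type
    counts match the available materials, patching deficit slots with
    surplus types."""
    need, have = Counter(template), Counter(material_types)
    surplus = []                       # material types we have too many of
    for t, n in have.items():
        surplus += [t] * max(0, n - need.get(t, 0))
    deficit = {t: max(0, need[t] - have.get(t, 0)) for t in need}
    out = []
    for slot in template:
        if deficit.get(slot, 0) > 0 and surplus:
            deficit[slot] -= 1
            out.append(surplus.pop(0))  # downgrade this slot
        else:
            out.append(slot)
    return "".join(out)

# Materials B, B, C, A, C cannot supply three C segments, so one C slot
# is replaced by the surplus B; the per-type counts now match exactly.
print(relax_template("CCCBA", ["B", "B", "C", "A", "C"]))
```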
As shown in FIG. 8B, when the user selects picture material 21, picture material 22, picture material 23, picture material 24, and picture material 25, the electronic device determines that the scene types in the video template are CCCBA, with durations of 4x, 2x, 2x, x, and 2x respectively, where x = 0.48 seconds, and that the scene type of picture material 21 is C, the scene type of picture material 22 is B, the scene type of picture material 23 is C, the scene type of picture material 24 is A, and the scene type of picture material 25 is C.
Then, based on the scene types in the video template and the respective scene types of picture materials 21 through 25, the electronic device learns that the materials can exactly match the scene types in the video template. Thus, based on the scene types CCCBA, the electronic device adjusts the arrangement order of picture material 21, picture material 22, picture material 23, picture material 24, and picture material 25 to generate the video exemplarily shown in FIG. 8B.
In FIG. 8B, the playback order of picture materials 21, 22, 23, 24, and 25 in the generated video is:
From 0 to 4x: picture material 23;
From 4x to 6x: picture material 21;
From 6x to 8x: picture material 25;
From 8x to 9x: picture material 22;
From 9x to 11x: picture material 24.
Accordingly, the scene types corresponding to the video exemplarily shown in FIG. 8B are CCCBA.
As shown in FIG. 8C, when the user selects picture material 31, video material 31, video material 32, picture material 32, and picture material 33, the electronic device determines that: the scene types in the video template are CCCBA, and the durations of the scene types CCCBA are 4x, 2x, 2x, x, and 2x respectively, where x = 0.48 seconds; the scene type of picture material 31 is B; the scene type of video material 31 is B, and the duration of video material 31 is equal to x; the scene type of video material 32 is C, and the duration of video material 32 is greater than or equal to 4x; the scene type of picture material 32 is A; and the scene type of picture material 33 is C.
Then, based on the scene types in the video template and the respective scene types of picture material 31, video material 31, video material 32, picture material 32, and picture material 33, the electronic device can learn that the materials lack one material of scene type C with a duration of 2x, so that the materials cannot exactly match the scene types in the video template. Since every material needs to appear at least once, the electronic device may change the scene types CCCBA in the video template to CBCBA.
Thus, based on the scene types CBCBA, the electronic device adjusts the arrangement order of picture material 31, video material 31, video material 32, picture material 32, and picture material 33 to generate the video exemplarily shown in FIG. 8C.
In FIG. 8C, the playback order of picture material 31, video material 31, video material 32, picture material 32, and picture material 33 in the generated video is:
From 0 to 4x: video material 32;
From 4x to 6x: picture material 31;
From 6x to 8x: picture material 33;
From 8x to 9x: video material 31;
From 9x to 11x: picture material 32.
Accordingly, the scene types corresponding to the segments of the video exemplarily shown in FIG. 8C are CBCBA.
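The template adjustment described for FIG. 8C (changing CCCBA to CBCBA when the selected materials cannot supply every slot's scene type but each material must appear at least once) can be sketched as follows. This is a hypothetical illustration, not the patented implementation: the function name and the greedy choice of which surplus slot to retype are assumptions.

```python
from collections import Counter

def relax_template(template, material_types):
    # Ensure every selected material can appear at least once: if the
    # materials offer more of some scene type than the template has slots
    # for, retype a slot whose type is in surplus (hypothetical greedy rule).
    counts = Counter(material_types)        # scene types the materials offer
    slots = list(template)
    slot_counts = Counter(slots)
    for mtype, n in counts.items():
        deficit = n - slot_counts.get(mtype, 0)
        while deficit > 0:
            for i, s in enumerate(slots):
                if slot_counts[s] > counts.get(s, 0):   # this slot type is in surplus
                    slot_counts[s] -= 1
                    slots[i] = mtype
                    slot_counts[mtype] += 1
                    deficit -= 1
                    break
            else:
                break                       # nothing left to retype
    return "".join(slots)

# FIG. 8C: materials B, B, C, A, C cannot fill template CCCBA exactly,
# so one C slot is retyped to B.
print(relax_template("CCCBA", list("BBCAC")))
```

Which slot is retyped is a design choice: the example in FIG. 8C retypes the second slot (CBCBA), while this sketch retypes the first surplus slot it finds, yielding the same multiset of scene types (two C, two B, one A).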
As shown in FIG. 8D, when the user selects picture material 41, video material 41, video material 42, picture material 42, and picture material 43, the electronic device determines that: the scene types in the video template are CCCBA, and the durations of the scene types CCCBA are 4x, 2x, 2x, x, and 2x respectively, where x = 0.48 seconds; the scene type of picture material 41 is C; the scene type of video material 41 is B, and the duration of video material 41 is equal to x; the scene type of video material 42 is C, and the duration of video material 42 is greater than or equal to 4x; the scene type of picture material 42 is A; and the scene type of picture material 43 is C.
Then, based on the scene types in the video template and the respective scene types of picture material 41, video material 41, video material 42, picture material 42, and picture material 43, the electronic device can learn that the materials can exactly match the scene types in the video template. Thus, based on the scene types CCCBA, the electronic device adjusts the arrangement order of picture material 41, video material 41, video material 42, picture material 42, and picture material 43 to generate the video exemplarily shown in FIG. 8D.
In FIG. 8D, the playback order of picture material 41, video material 41, video material 42, picture material 42, and picture material 43 in the generated video is:
From 0 to 4x: video material 42;
From 4x to 6x: picture material 41;
From 6x to 8x: picture material 43;
From 8x to 9x: video material 41;
From 9x to 11x: picture material 42.
Accordingly, the scene types corresponding to the segments of the video exemplarily shown in FIG. 8D are CCCBA.
As shown in FIG. 8E, when the user selects picture material 51, video material 51, video material 52, and picture material 52, the electronic device determines that: the scene types in the video template are CCCBA, and the durations of the scene types CCCBA are 4x, 2x, 2x, x, and 2x respectively, where x = 0.48 seconds; the scene type of picture material 51 is C; the scene types of video material 51 are B and C, where the duration of the segment of scene type C in video material 51 is equal to 2x and the duration of the segment of scene type B in video material 51 is equal to x; the scene type of video material 52 is C, and the duration of video material 52 is greater than or equal to 4x; and the scene type of picture material 52 is A.
Then, based on the scene types in the video template and the respective scene types of picture material 51, video material 51, video material 52, and picture material 52, the electronic device can learn that the materials can exactly match the scene types in the video template. Thus, based on the scene types CCCBA, the electronic device adjusts the arrangement order of picture material 51, video material 51, video material 52, and picture material 52 to generate the video exemplarily shown in FIG. 8E.
In FIG. 8E, the playback order of picture material 51, video material 51, video material 52, and picture material 52 in the generated video is:
From 0 to 4x: video material 52;
From 4x to 6x: the segment of scene type C in video material 51;
From 6x to 8x: picture material 51;
From 8x to 9x: the segment of scene type B in video material 51;
From 9x to 11x: picture material 52.
Accordingly, the scene types corresponding to the segments of the video exemplarily shown in FIG. 8E are CCCBA.
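The slot filling in FIGS. 8D and 8E can be sketched as a greedy assignment: each template slot (a scene type plus a duration) takes the first unused material whose scene type matches and whose duration fits, with pictures fitting any slot length and a video contributing one typed entry per scene-type segment it contains. The function name and data layout below are assumptions for illustration, not the patented implementation.

```python
def assign_materials(slots, materials):
    # slots: list of (scene_type, duration); materials: dicts with "name",
    # "type", and "duration" (None for pictures, which fit any slot length).
    order, used = [], set()
    for stype, dur in slots:
        for m in materials:
            if m["name"] in used or m["type"] != stype:
                continue
            if m["duration"] is None or m["duration"] >= dur:
                order.append(m["name"])
                used.add(m["name"])
                break
        else:
            order.append(None)  # unfilled slot: the template would be relaxed
    return order

# FIG. 8E, durations in units of x (x = 0.48 s). Video material 51
# contributes its C segment (2x) and its B segment (x) as separate entries.
slots = [("C", 4), ("C", 2), ("C", 2), ("B", 1), ("A", 2)]
materials = [
    {"name": "video 52",            "type": "C", "duration": 4},
    {"name": "video 51, C segment", "type": "C", "duration": 2},
    {"name": "picture 51",          "type": "C", "duration": None},
    {"name": "video 51, B segment", "type": "B", "duration": 1},
    {"name": "picture 52",          "type": "A", "duration": None},
]
print(assign_materials(slots, materials))
```

With this input the greedy pass reproduces the FIG. 8E playback order: video 52, the C segment of video 51, picture 51, the B segment of video 51, picture 52.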
Based on the foregoing description, after determining that the scene types in the video template are CCCBA, the electronic device may, based on the scene types of the materials selected by the user, and taking into account factors such as the playback effect of the video, the duration of the video, the scene types in the video, the usage of the materials, the quantity of the materials, the scene types of the materials, and whether a material supports repeated use, match the scene types of the materials against the scene types in the video template to a preset degree, thereby generating the video. That is, the scene types of the video generated by the electronic device are completely or partially the same as the scene types in the video template. The preset degree may be 100% (that is, exact matching) or 90% (that is, fuzzy matching); generally, the preset degree is greater than or equal to 50%. In the embodiments of the present application, the electronic device adjusts the arrangement order of the materials in the generated video based on the arrangement order of the scene types in the video template, and, in combination with techniques such as camera movement, speed, and transitions in the video template, can generate a video with visual continuity and a high sense of quality.
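The "preset degree" of matching described above can be read as the fraction of positions at which the generated video's scene type equals the template's. A minimal sketch under that reading (the function name is an assumption):

```python
def match_degree(template_types, generated_types):
    # Fraction of positions where the generated video's scene type equals
    # the template's scene type at the same position.
    assert len(template_types) == len(generated_types)
    hits = sum(t == g for t, g in zip(template_types, generated_types))
    return hits / len(template_types)

# FIG. 8C: the template CCCBA was relaxed to CBCBA, a 0.8 (fuzzy) match;
# FIG. 8D matches the template exactly (1.0).
print(match_degree("CCCBA", "CBCBA"))  # 0.8
print(match_degree("CCCBA", "CCCBA"))  # 1.0
```

Both example values clear the stated floor of 50%, consistent with the text's exact-matching and fuzzy-matching cases.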
In summary, the video generation method of the embodiments of the present application enhances the sense of camerawork and the cinematic feel of the video, which helps improve the user experience.
Based on the foregoing description, the embodiments of the present application provide a video generation method.
Referring to FIG. 9, FIG. 9 shows a schematic diagram of a video generation method provided by an embodiment of the present application. As shown in FIG. 9, the video generation method of the embodiments of the present application may include the following steps.
S101: The electronic device displays a first interface of a first application, where the first interface includes a first control and a second control.
S102: After receiving a first operation acting on the first control, the electronic device determines that the arrangement order of a first material, a second material, and a third material is a first order, and generates a first video from the first material, the second material, and the third material according to the first order.
S103: After receiving a second operation acting on the second control, the electronic device determines that the arrangement order of the first material, the second material, and the third material is a second order, where the second order is different from a third order, and generates a second video from the first material, the second material, and the third material according to the second order.
The first material, the second material, and the third material are different image materials stored in the electronic device; the third order is the chronological order in which the first material, the second material, and the third material were stored in the electronic device; and the first order is different from the third order.
In the embodiments of the present application, for specific implementations of the first material, the second material, and the third material, refer to the foregoing description. For a specific implementation of the first control, refer to any one of the controls 30811, 30812, 30813, and 30814 exemplarily shown in FIG. 3F; for a specific implementation of the second control, refer to any one of the controls 30811, 30812, 30813, and 30814 exemplarily shown in FIG. 3F, where the first control is different from the second control. The first order and the second order may be the same or different, which is not limited in the embodiments of the present application. The playback effects of the first video and the second video are different; for details, refer to the aforementioned video 1, video 2, and video 3, and the video generated based on the user-selected video material 3031, picture material 3032, picture material 3033, video material 3034, picture material 3035, picture material 3036, picture material 3037, and video material 3038. In some embodiments, the first application is a gallery application of the electronic device.
In the embodiments of the present application, the electronic device identifies the scene types of the materials, matches a suitable video template, adjusts the arrangement order of the materials based on the scene type set for each segment in the video template, and, in combination with the camera movement, speed, and transition set for each segment in the video template, can automatically generate a video with visual continuity and a high sense of quality without relying on the user's manual editing, which enhances the sense of camerawork and the cinematic feel of the video and improves the user experience.
In some embodiments, the first video is divided into a plurality of segments with the beat points of the music as dividing lines; the first material, the second material, and the third material each appear at least once in the first video, and the materials appearing in any two adjacent segments of the first video are different; the first material, the second material, and the third material each appear at least once in the second video, and the materials appearing in any two adjacent segments of the second video are different.
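The constraints in this embodiment (segments bounded by the music's beat points, every material appearing at least once, and no material used in two adjacent segments) can be sketched and checked as follows; the helper names are assumptions, not the patented implementation.

```python
def beat_segments(beat_times, total_duration):
    # Split [0, total_duration] at the music's beat points (in seconds).
    cuts = [0.0] + [t for t in beat_times if 0 < t < total_duration]
    cuts.append(total_duration)
    return list(zip(cuts, cuts[1:]))

def arrangement_ok(order, materials):
    # Every material appears at least once, and no two adjacent
    # segments use the same material.
    return (set(order) >= set(materials)
            and all(a != b for a, b in zip(order, order[1:])))

segments = beat_segments([0.48, 0.96], 1.44)
print(segments)
print(arrangement_ok(["m1", "m2", "m1", "m3"], ["m1", "m2", "m3"]))
```

With x = 0.48 seconds as in the earlier examples, beats at 0.48 s and 0.96 s split a 1.44-second video into three segments of one beat each.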
In some embodiments, the method further includes: the electronic device displays a second interface of the first application; and after receiving a third operation acting on the second interface, the electronic device generates the first video from the first material, the second material, and the third material.
In the embodiments of the present application, for a specific implementation of the second interface, refer to the description of the user interface 13 exemplarily shown in FIG. 3E in manner 1, or the description of the user interface 17 exemplarily shown in FIG. 3N in manner 2, or the description of the user interface 17 exemplarily shown in FIG. 3N in manner 3. For a specific implementation of the third operation, refer to the description of tapping the text "影视" in the window 305 of the user interface 13 exemplarily shown in FIG. 3E in manner 1, or the description of tapping the control 3142 in the user interface 17 exemplarily shown in FIG. 3N in manner 2, or the description of tapping the control 3142 in the user interface 17 exemplarily shown in FIG. 3N in manner 3.
In some embodiments, the method further includes: the electronic device determines, from among the first material, the second material, the third material, and a fourth material, to generate the first video from the first material, the second material, and the third material; where the fourth material is an image material stored in the electronic device that is different from the first material, the second material, and the third material.
In the embodiments of the present application, for a specific implementation process of the above solution, refer to the descriptions of video 1, video 2, and video 3 in the user interface 18 exemplarily shown in FIG. 3P in manner 3.
In some embodiments, the first interface further includes a third control, and the method further includes: after receiving a fourth operation acting on the third control, the electronic device displays a third interface, where the third interface includes options of configuration information, and the configuration information includes at least one of the following parameters: duration, filter, frame, material, or title; and after receiving a fifth operation acting on an option of the configuration information, the electronic device generates a third video from the first material, the second material, and the third material according to the first order and based on the configuration information.
In the embodiments of the present application, for a specific implementation of the third control, refer to the descriptions of the control 3082, the control 3083, the control 3084, and the control 3085 exemplarily shown in FIG. 3F, which are not repeated here. For the third interface, refer to the description of the user interface 21 exemplarily shown in FIG. 3G, the description of the user interface 22 exemplarily shown in FIG. 3H, the description of the user interface 23 exemplarily shown in FIG. 3I, or the description of the user interface 24 exemplarily shown in FIG. 3J, which are not repeated here.
For example, the electronic device can adjust, through the user interface 21 exemplarily shown in FIG. 3G, parameters of video 1 such as the duration, the frame, whether to add new materials, and whether to delete existing materials. As another example, the electronic device can adjust the music of video 1 through the user interface 22 exemplarily shown in FIG. 3H. As another example, the electronic device can adjust the filter of video 1 through the user interface 23 exemplarily shown in FIG. 3I. As another example, the electronic device can adjust whether a title is added to video 1 through the user interface 24 exemplarily shown in FIG. 3J.
In some embodiments, the first interface further includes a fourth control, and the method further includes: after generating the first video, the electronic device saves the first video in response to a fourth operation acting on the fourth control. In the embodiments of the present application, for a specific implementation of the fourth control, refer to the description of the control 309 exemplarily shown in FIG. 3F, which is not repeated here.
In some embodiments, the method specifically includes: the electronic device determines the scene type of the first material, the scene type of the second material, and the scene type of the third material; based on the scene type of the first material, the scene type of the second material, the scene type of the third material, and the scene type set for each segment in a first video template, the electronic device determines the material matching the scene type of a first segment, where the first segment is any segment in the first video template, and the arrangement order of the materials corresponding to all the segments in the first video template is the first order; and based on the scene type of the first material, the scene type of the second material, the scene type of the third material, and the scene type set for each segment in a second video template, the electronic device determines the material matching the scene type of a second segment, where the second segment is any segment in the second video template, and the arrangement order of the materials corresponding to all the segments in the second video template is the second order; where the first video template is different from the second video template, each segment in the first video corresponds to each segment in the first video template, and each segment in the second video corresponds to each segment in the second video template.
In the embodiments of the present application, for the above solution, refer to the aforementioned descriptions of video 1, video 2, and video 3, and of the video generated based on the user-selected video material 3031, picture material 3032, picture material 3033, video material 3034, picture material 3035, picture material 3036, picture material 3037, and video material 3038, which are not repeated here.
In some embodiments, the method further includes: the electronic device generates the first video from the first material, the second material, and the third material according to the first order and the camera movement effect, speed effect, and transition effect set for each segment in the first video template; and the electronic device generates the second video from the first material, the second material, and the third material according to the second order and the camera movement effect, speed effect, and transition effect set for each segment in the second video template.
In the embodiments of the present application, for the above solution, refer to the foregoing description; for a specific implementation of the camera movement effect, refer to the description exemplarily shown in FIG. 5; for a specific implementation of the speed effect, refer to the description exemplarily shown in FIG. 6; and for a specific implementation of the transition effect, refer to the description exemplarily shown in FIG. 7, which are not repeated here.
In some embodiments, when the first material is a picture material, the method specifically includes: when the scene type of the first material is the same as the scene type of the first segment, or the scene type of the first material is adjacent, in the ordering under a preset rule, to the scene type of the first segment, the electronic device determines the first material as the material matching the scene type of the first segment; and when the scene type of the first material is the same as the scene type of the second segment, or the scene type of the first material is adjacent, in the ordering under the preset rule, to the scene type of the second segment, the electronic device determines the first material as the material matching the scene type of the second segment.
In the embodiments of the present application, for a specific implementation process of the above solution, refer to the descriptions exemplarily shown in FIG. 8A to FIG. 8E, which are not repeated here. For a specific implementation of the first material, refer to the picture materials exemplarily mentioned in FIG. 8A to FIG. 8E.
In some embodiments, when the first material is a video material, the method specifically includes: when the scene type of a fourth material is the same as the scene type of the first segment, or the scene type of the fourth material is adjacent, in the ordering under the preset rule, to the scene type of the first segment, and the duration of the fourth material is equal to the duration of the first segment, the electronic device cuts the fourth material out of the first material and determines the fourth material as the material matching the scene type of the first segment; and when the scene type of the fourth material is the same as the scene type of the second segment, or the scene type of the fourth material is adjacent, in the ordering under the preset rule, to the scene type of the second segment, and the duration of the fourth material is equal to the duration of the second segment, the electronic device cuts the fourth material out of the first material and determines the fourth material as the material matching the scene type of the second segment; where the fourth material is part or all of the first material.
In the embodiments of the present application, for a specific implementation process of the above solution, refer to the descriptions exemplarily shown in FIG. 8A to FIG. 8E, which are not repeated here. For a specific implementation of the first material, refer to the video materials exemplarily mentioned in FIG. 8A to FIG. 8E, and for a specific implementation of the fourth material, refer to video material 51 or video material 52.
In some embodiments, in the ordering under the preset rule, the scene types include: close shot, medium shot, and long shot; the scene type adjacent to the close shot is the long shot; the scene types adjacent to the medium shot are the close shot and the long shot; and the scene type adjacent to the long shot is the close shot. In the embodiments of the present application, the division of scene types is not limited to the above implementation; for details, refer to the foregoing description, which is not repeated here.
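Encoded literally, the adjacency rule stated in this embodiment (a material matches a segment when their scene types are identical or adjacent in the ordering under the preset rule) could look like the sketch below. The table transcribes the rule exactly as written above, and the names are assumptions for illustration.

```python
# Adjacency exactly as stated: the long shot is adjacent to the close shot;
# the close shot and the long shot are adjacent to the medium shot;
# the close shot is adjacent to the long shot.
ADJACENT = {
    "close shot":  {"long shot"},
    "medium shot": {"close shot", "long shot"},
    "long shot":   {"close shot"},
}

def scene_type_matches(material_type, segment_type):
    # A material matches a segment if the scene types are identical or
    # adjacent in the ordering under the preset rule.
    return (material_type == segment_type
            or segment_type in ADJACENT.get(material_type, set()))

print(scene_type_matches("medium shot", "long shot"))   # True (adjacent)
print(scene_type_matches("close shot", "medium shot"))  # False per the rule
```

Because the embodiment notes that the division of scene types is not limited to this implementation, the table would simply be replaced for a different ordering.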
Exemplarily, the present application provides an electronic device, including a memory and a processor. The memory is configured to store program instructions, and the processor is configured to invoke the program instructions in the memory so that the electronic device performs the video generation method in the foregoing embodiments.
Exemplarily, the present application provides a chip system, where the chip system is applied to an electronic device including a memory, a display screen, and a sensor. The chip system includes a processor; when the processor executes the computer instructions stored in the memory, the electronic device performs the video generation method in the foregoing embodiments.
Exemplarily, the present application provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the electronic device implements the video generation method in the foregoing embodiments.
Exemplarily, the present application provides a computer program product, including execution instructions stored in a readable storage medium. At least one processor of an electronic device can read the execution instructions from the readable storage medium, and the at least one processor executes the execution instructions so that the electronic device implements the video generation method in the foregoing embodiments.
In the above embodiments, all or some of the functions may be implemented by software, hardware, or a combination of software and hardware. When implemented in software, they may be implemented entirely or partially in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the procedures or functions according to the embodiments of the present application are generated entirely or partially. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium. The computer-readable storage medium may be any usable medium accessible to a computer, or a data storage device, such as a server or a data center, integrating one or more usable media. The usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, a DVD), or a semiconductor medium (for example, a solid state disk (SSD)), among others.
Persons of ordinary skill in the art can understand that all or some of the procedures in the methods of the above embodiments may be implemented by a computer program instructing relevant hardware. The program may be stored in a computer-readable storage medium, and when the program is executed, the procedures of the foregoing method embodiments may be included. The foregoing storage medium includes any medium that can store program code, such as a ROM, a random access memory (RAM), a magnetic disk, or an optical disc.

Claims (16)

  1. A video generation method, characterized by comprising:
    displaying, by an electronic device, a first interface of a first application, wherein the first interface comprises a first control and a second control;
    after receiving a first operation acting on the first control, determining, by the electronic device, that an arrangement order of a first material, a second material, and a third material is a first order, wherein the first material, the second material, and the third material are different image materials stored in the electronic device, the first order is different from a third order, and the third order is the chronological order in which the first material, the second material, and the third material were stored in the electronic device; and generating, by the electronic device, a first video from the first material, the second material, and the third material according to the first order; and
    after receiving a second operation acting on the second control, determining, by the electronic device, that the arrangement order of the first material, the second material, and the third material is a second order, wherein the second order is different from the third order; and generating, by the electronic device, a second video from the first material, the second material, and the third material according to the second order.
  2. The method according to claim 1, characterized in that the first video is divided into a plurality of segments with beats of music as dividing points;
    each of the first material, the second material and the third material appears at least once in the first video, and the materials appearing in any two adjacent segments of the first video are different;
    each of the first material, the second material and the third material appears at least once in the second video, and the materials appearing in any two adjacent segments of the second video are different.
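The assignment recited in claims 1-2 — filling beat-delimited segments so that every material appears at least once and no two adjacent segments use the same material — can be sketched as follows. This is an illustrative reading only, not the patented implementation; the function name and the cycling heuristic are hypothetical.

```python
def assign_materials(segments, materials):
    """Assign one material to each beat-delimited segment so that every
    material appears at least once and adjacent segments never repeat.
    Illustrative sketch; assumes len(segments) >= len(materials) >= 2."""
    order = []
    prev = None
    for i in range(len(segments)):
        # Prefer materials not yet used at all; otherwise any material
        # other than the one shown in the previous segment.
        unused = [m for m in materials if m not in order and m != prev]
        candidates = unused or [m for m in materials if m != prev]
        choice = candidates[i % len(candidates)]
        order.append(choice)
        prev = choice
    return order
```

With five segments and three materials, the result uses each material at least once and never repeats a material across a segment boundary.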
  3. The method according to claim 1 or 2, characterized in that the method further comprises:
    the electronic device displays a second interface of the first application;
    after receiving a third operation acting on the second interface, the electronic device generates the first video from the first material, the second material and the third material.
  4. The method according to claim 1 or 2, characterized in that the method further comprises:
    the electronic device determines, from among the first material, the second material, the third material and a fourth material, that the first video is to be generated from the first material, the second material and the third material;
    wherein the fourth material is an image material stored in the electronic device that is different from the first material, the second material and the third material.
  5. The method according to any one of claims 1-4, characterized in that the first interface further comprises a third control, and the method further comprises:
    after receiving a fourth operation acting on the third control, the electronic device displays a third interface, wherein the third interface comprises options for configuration information, and the configuration information comprises at least one of the following parameters: duration, filter, aspect ratio, material, or title;
    after receiving a fifth operation acting on the options for the configuration information, the electronic device generates a third video from the first material, the second material and the third material according to the first order and based on the configuration information.
  6. The method according to any one of claims 1-5, characterized in that the first interface further comprises a fourth control, and the method further comprises:
    after generating the first video, the electronic device saves the first video in response to a fourth operation acting on the fourth control.
  7. The method according to any one of claims 1-6, characterized in that the method specifically comprises:
    the electronic device determines a scene type corresponding to the first material, a scene type corresponding to the second material, and a scene type corresponding to the third material;
    based on the scene type corresponding to the first material, the scene type corresponding to the second material, the scene type corresponding to the third material, and the scene type set for each segment in a first video template, the electronic device determines a material matching the scene type corresponding to a first segment, wherein the first segment is any segment in the first video template; and the arrangement order of the materials corresponding to all segments in the first video template is the first order;
    based on the scene type corresponding to the first material, the scene type corresponding to the second material, the scene type corresponding to the third material, and the scene type set for each segment in a second video template, the electronic device determines a material matching the scene type corresponding to a second segment, wherein the second segment is any segment in the second video template; and the arrangement order of the materials corresponding to all segments in the second video template is the second order;
    wherein the first video template is different from the second video template, and each segment in the second video corresponds to a respective segment in the second video template.
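The per-template ordering of claim 7 — walking the template's segments and picking, for each, a material whose scene type matches the scene type set for that segment — can be illustrated with the sketch below. All names are hypothetical and the rotation policy for reused scene types is an assumption, not part of the claim.

```python
def order_for_template(template_scene_types, materials_by_scene):
    """Return the material order for one template: for each segment, pick a
    material whose scene type equals the scene type set for that segment.
    materials_by_scene maps a scene-type label to the list of material ids
    carrying that label; repeated scene types cycle through their materials
    so a material may recur without being pinned to one segment."""
    order = []
    for scene in template_scene_types:
        candidates = materials_by_scene.get(scene, [])
        if candidates:
            order.append(candidates[0])
            # Rotate so the next segment with this scene type gets the
            # next material labelled with it.
            materials_by_scene[scene] = candidates[1:] + candidates[:1]
    return order
```

Running the same pool of materials through two different templates (two different scene-type sequences) yields the two different orders — the "first order" and "second order" — that the claim distinguishes.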
  8. The method according to claim 7, characterized in that the method further comprises:
    the electronic device generates the first video from the first material, the second material and the third material according to the first order and the camera-movement effect, speed effect and transition effect set for each segment in the first video template;
    the electronic device generates the second video from the first material, the second material and the third material according to the second order and the camera-movement effect, speed effect and transition effect set for each segment in the second video template.
  9. The method according to claim 7 or 8, characterized in that, when the first material is a picture material, the method specifically comprises:
    when the scene type corresponding to the first material is the same as the scene type corresponding to the first segment, or the scene type corresponding to the first material is adjacent, in the ordering of a preset rule, to the scene type corresponding to the first segment, the electronic device determines the first material to be a material matching the scene type corresponding to the first segment;
    when the scene type corresponding to the first material is the same as the scene type corresponding to the second segment, or the scene type corresponding to the first material is adjacent, in the ordering of the preset rule, to the scene type corresponding to the second segment, the electronic device determines the first material to be a material matching the scene type corresponding to the second segment.
  10. The method according to claim 7 or 8, characterized in that, when the first material is a video material, the method specifically comprises:
    when the scene type corresponding to a fourth material is the same as the scene type corresponding to the first segment, or the scene type corresponding to the fourth material is adjacent, in the ordering of a preset rule, to the scene type corresponding to the first segment, and the duration of the fourth material is equal to the duration of the first segment, the electronic device extracts the fourth material from the first material and determines the fourth material to be a material matching the scene type corresponding to the first segment;
    when the scene type corresponding to the fourth material is the same as the scene type corresponding to the second segment, or the scene type corresponding to the fourth material is adjacent, in the ordering of the preset rule, to the scene type corresponding to the second segment, and the duration of the fourth material is equal to the duration of the second segment, the electronic device extracts the fourth material from the second material and determines the fourth material to be a material matching the scene type corresponding to the second segment;
    wherein the fourth material is part or all of the first material.
  11. The method according to claim 10, characterized in that, in the ordering of the preset rule, the scene types comprise: a close shot, a medium shot and a long shot, wherein the scene type adjacent to the close shot is the long shot, the scene types adjacent to the medium shot are the close shot and the long shot, and the scene type adjacent to the long shot is the close shot.
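The "same or adjacent" matching test of claims 9-11 amounts to a small lookup table. The sketch below encodes the adjacency exactly as claim 11 recites it; the labels and the function name are hypothetical, and the direction of the lookup (material type checked against the segment's neighbours) is one reasonable reading of the claim.

```python
# Adjacency as recited in claim 11: the long shot is adjacent to the close
# shot, the close and long shots are adjacent to the medium shot, and the
# close shot is adjacent to the long shot.
ADJACENT = {
    "close": {"long"},
    "medium": {"close", "long"},
    "long": {"close"},
}

def matches_segment(material_scene, segment_scene):
    """A material matches a segment if its scene type is the same as the
    segment's scene type, or adjacent to it under the preset rule."""
    return (material_scene == segment_scene
            or material_scene in ADJACENT[segment_scene])
```

So a long-shot material can fill a close-shot segment, but a medium-shot material cannot, since the medium shot is not listed as adjacent to the close shot.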
  12. The method according to any one of claims 1-11, characterized in that the first application is a gallery application of the electronic device.
  13. An electronic device, characterized by comprising: a memory and a processor;
    wherein the memory is configured to store program instructions;
    and the processor is configured to invoke the program instructions in the memory to cause the electronic device to execute the video generation method according to any one of claims 1-12.
  14. A chip system, characterized in that the chip system is applied to an electronic device comprising a memory, a display screen and a sensor; the chip system comprises a processor; and when the processor executes computer instructions stored in the memory, the electronic device executes the video generation method according to any one of claims 1-12.
  15. A computer-readable storage medium having a computer program stored thereon, characterized in that, when the computer program is executed by a processor, the electronic device is caused to implement the video generation method according to any one of claims 1-12.
  16. A program product, characterized in that the program product comprises a computer program stored in a readable storage medium, at least one processor of a communication apparatus is capable of reading the computer program from the readable storage medium, and the at least one processor executes the computer program to cause the communication apparatus to implement the method according to any one of claims 1-12.
PCT/CN2021/116047 2020-09-29 2021-09-01 Video generation method and electronic device WO2022068511A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011057180.9A CN114363527B (en) 2020-09-29 2020-09-29 Video generation method and electronic equipment
CN202011057180.9 2020-09-29

Publications (1)

Publication Number Publication Date
WO2022068511A1 true WO2022068511A1 (en) 2022-04-07

Family

ID=80949616

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/116047 WO2022068511A1 (en) 2020-09-29 2021-09-01 Video generation method and electronic device

Country Status (2)

Country Link
CN (1) CN114363527B (en)
WO (1) WO2022068511A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116055799A (en) * 2022-05-30 2023-05-02 荣耀终端有限公司 Multi-track video editing method, graphical user interface and electronic equipment
CN116055715A (en) * 2022-05-30 2023-05-02 荣耀终端有限公司 Scheduling method of coder and decoder and electronic equipment
CN117216312A (en) * 2023-11-06 2023-12-12 长沙探月科技有限公司 Method and device for generating questioning material, electronic equipment and storage medium

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115185429A (en) * 2022-05-13 2022-10-14 北京达佳互联信息技术有限公司 File processing method and device, electronic equipment and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130047081A1 (en) * 2011-10-25 2013-02-21 Triparazzi, Inc. Methods and systems for creating video content on mobile devices using storyboard templates
CN104581380A (en) * 2014-12-30 2015-04-29 联想(北京)有限公司 Information processing method and mobile terminal
CN107437076A (en) * 2017-08-02 2017-12-05 陈雷 The method and system that scape based on video analysis does not divide
CN109618222A (en) * 2018-12-27 2019-04-12 北京字节跳动网络技术有限公司 A kind of splicing video generation method, device, terminal device and storage medium
CN110825912A (en) * 2019-10-30 2020-02-21 北京达佳互联信息技术有限公司 Video generation method and device, electronic equipment and storage medium
CN111541946A (en) * 2020-07-10 2020-08-14 成都品果科技有限公司 Automatic video generation method and system for resource matching based on materials

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009044423A (en) * 2007-08-08 2009-02-26 Univ Of Electro-Communications Scene detection system and scene detecting method
CN107770484A (en) * 2016-08-19 2018-03-06 杭州海康威视数字技术股份有限公司 A kind of video monitoring information generation method, device and video camera
US10477177B2 (en) * 2017-12-15 2019-11-12 Intel Corporation Color parameter adjustment based on the state of scene content and global illumination changes
CN111048016B (en) * 2018-10-15 2021-05-14 广东美的白色家电技术创新中心有限公司 Product display method, device and system
CN111083138B (en) * 2019-12-13 2022-07-12 北京秀眼科技有限公司 Short video production system, method, electronic device and readable storage medium


Also Published As

Publication number Publication date
CN114363527A (en) 2022-04-15
CN114363527B (en) 2023-05-09


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 21874172; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 21874172; Country of ref document: EP; Kind code of ref document: A1)