CN114363527B - Video generation method and electronic equipment - Google Patents


Info

Publication number
CN114363527B
Authority
CN
China
Prior art keywords
video
electronic device
segment
scene type
sequence
Prior art date
Legal status
Active
Application number
CN202011057180.9A
Other languages
Chinese (zh)
Other versions
CN114363527A (en)
Inventor
张韵叠
苏达
陈绍君
胡靓
徐迎庆
徐千尧
郭子淳
高家思
周雪怡
Current Assignee
Tsinghua University
Huawei Technologies Co Ltd
Original Assignee
Tsinghua University
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Tsinghua University and Huawei Technologies Co Ltd
Priority to CN202011057180.9A
Priority to PCT/CN2021/116047
Publication of CN114363527A
Application granted
Publication of CN114363527B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/70 Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F 16/75 Clustering; Classification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F 3/04817 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance using icons
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/451 Execution arrangements for user interfaces
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M 1/00 Substation equipment, e.g. for use by subscribers
    • H04M 1/72 Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M 1/724 User interfaces specially adapted for cordless or mobile telephones
    • H04M 1/72469 User interfaces specially adapted for cordless or mobile telephones for operating the device by selecting functions from two or more displayed items, e.g. menus or icons
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N 21/234 Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00 Details of television systems
    • H04N 5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N 5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Software Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Telephone Function (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The embodiment of the application provides a video generation method and an electronic device. The method comprises the following steps: the electronic device displays a first interface of a first application. After the electronic device receives a first operation acting on a first control, it determines that the arrangement order of a first material, a second material and a third material is a first sequence, the first sequence being different from a third sequence, and generates a first video from the first material, the second material and the third material in the first sequence. After the electronic device receives a second operation acting on a second control, it determines that the arrangement order of the first material, the second material and the third material is a second sequence, the second sequence being different from the third sequence, and generates a second video from the first material, the second material and the third material in the second sequence. The third sequence is the chronological order in which the first material, the second material and the third material were stored in the electronic device. In this way, the generated video has a coherent visual flow and a high-quality feel.

Description

Video generation method and electronic equipment
Technical Field
The embodiment of the application relates to the technical field of electronics, in particular to a video generation method and electronic equipment.
Background
With the popularity of short videos, users increasingly need to generate videos quickly on electronic devices such as mobile phones. At present, video generated by an electronic device tends to have poor visual continuity and a low sense of quality, and cannot meet users' high expectations for the look and cinematic quality of a video. A method for generating video with a coherent visual flow and a high-quality feel is therefore needed.
Disclosure of Invention
The video generation method and the electronic device provided by the application are used to generate video conveniently and quickly, with a coherent visual flow and a high-quality feel, so that the shot-like, cinematic character of the video is enhanced and the user experience is improved.
In a first aspect, the present application provides a video generation method, including: the electronic device displays a first interface of a first application, wherein the first interface comprises a first control and a second control; after the electronic device receives a first operation acting on the first control, it determines that the arrangement order of a first material, a second material and a third material is a first sequence, the first sequence being different from a third sequence, and generates a first video from the first material, the second material and the third material in the first sequence; after the electronic device receives a second operation acting on the second control, it determines that the arrangement order of the first material, the second material and the third material is a second sequence, the second sequence being different from the third sequence, and generates a second video from the first material, the second material and the third material in the second sequence. The first material, the second material and the third material are different image materials stored in the electronic device, and the third sequence is the chronological order in which the first material, the second material and the third material were stored in the electronic device.
According to the method provided in the first aspect, the scene type of each material is identified, a suitable video template is matched, the arrangement order of the materials is adjusted based on the scene type set for each segment in the video template, and a video with a coherent visual flow and a high-quality feel can be generated automatically by combining the camera movement, speed and transition set for each segment in the video template. No manual editing by the user is needed, the shot-like, cinematic character of the video is enhanced, and the user experience is improved.
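As a concrete illustration of this flow, the sketch below is a minimal, hypothetical example: the scene types, template structure and greedy matching strategy are assumptions of this description, not the patented implementation. It shows how two different templates, each listing a scene type per segment, can yield two different arrangements of the same three materials, both differing from the order in which the materials were saved.

```kotlin
// Hypothetical sketch only: names and the greedy matcher are illustrative assumptions.
enum class SceneType { CLOSE, MIDDLE, FAR }

data class Material(val id: String, val savedAtMillis: Long, val scene: SceneType)
data class Template(val segmentScenes: List<SceneType>)

// For each template segment, pick the first unused material whose scene type
// matches the segment's scene type, falling back to any unused material.
fun arrange(materials: List<Material>, template: Template): List<Material> {
    val unused = materials.toMutableList()
    return template.segmentScenes.map { scene ->
        val pick = unused.firstOrNull { it.scene == scene } ?: unused.first()
        unused.remove(pick)
        pick
    }
}

fun main() {
    val storageOrder = listOf(                        // the "third sequence": save time
        Material("first", 1, SceneType.FAR),
        Material("second", 2, SceneType.CLOSE),
        Material("third", 3, SceneType.MIDDLE),
    )
    val firstTemplate = Template(listOf(SceneType.CLOSE, SceneType.MIDDLE, SceneType.FAR))
    val secondTemplate = Template(listOf(SceneType.MIDDLE, SceneType.FAR, SceneType.CLOSE))
    println(arrange(storageOrder, firstTemplate).map { it.id })   // first sequence
    println(arrange(storageOrder, secondTemplate).map { it.id })  // second sequence
}
```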
In one possible design, the first video is divided into a plurality of segments with the beat points of the music as dividing lines; the first material, the second material and the third material each appear at least once in the first video, and the materials appearing in any two adjacent segments of the first video are different; the first material, the second material and the third material each appear at least once in the second video, and the materials appearing in any two adjacent segments of the second video are different. This ensures that the generated video has a professional, cinematic feel.
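A minimal sketch of this design follows; the beat times, material identifiers and round-robin assignment strategy are placeholders assumed for illustration, not values from the patent.

```kotlin
// Illustrative only: split a track into segments at beat points and assign
// materials so that no two adjacent segments show the same material.
data class Segment(val startSec: Double, val endSec: Double, var materialId: String? = null)

fun segmentsFromBeats(beatsSec: List<Double>, durationSec: Double): List<Segment> {
    val bounds = (listOf(0.0) + beatsSec + listOf(durationSec)).distinct().sorted()
    return bounds.zipWithNext { a, b -> Segment(a, b) }
}

fun assignNoAdjacentRepeat(segments: List<Segment>, materialIds: List<String>) {
    var previous: String? = null
    segments.forEachIndexed { i, seg ->
        // Round-robin over the materials; bump to the next one if the pick
        // would repeat the material shown in the previous segment.
        var candidate = materialIds[i % materialIds.size]
        if (candidate == previous) candidate = materialIds[(i + 1) % materialIds.size]
        seg.materialId = candidate
        previous = candidate
    }
}

fun main() {
    val segments = segmentsFromBeats(beatsSec = listOf(1.9, 3.8, 5.7), durationSec = 8.0)
    assignNoAdjacentRepeat(segments, listOf("first", "second", "third"))
    segments.forEach { println("${it.startSec}s-${it.endSec}s -> ${it.materialId}") }
}
```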
In one possible design, the method further comprises: the electronic device displays a second interface of the first application; after receiving a third operation acting on the second interface, the electronic device generates the first video from the first material, the second material and the third material. In this way, the electronic device can generate a video with a coherent visual flow and a high-quality feel based on the materials selected by the user.
In one possible design, the method further comprises: the electronic device determines to generate the first video from the first material, the second material, the third material and a fourth material; the fourth material is an image material that is stored in the electronic device and is different from the first material, the second material and the third material. In this way, the electronic device can automatically generate videos based on stored materials, meeting users' immediate needs.
In one possible design, the first interface further includes a third control, and the method further comprises: after receiving a fourth operation acting on the third control, the electronic device displays a third interface, wherein the third interface comprises options for configuration information, the configuration information including at least one parameter among duration, filter, frame, material, or title; after receiving a fifth operation on the options for the configuration information, the electronic device generates a third video based on the configuration information and the first material, the second material and the third material in the first sequence. This enriches the variety of generated videos and meets the user's need to adjust various parameters of the video.
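The kind of configuration such a third interface might collect can be pictured with the small data class below; the field names and defaults are assumptions made for illustration, not the patent's data model.

```kotlin
// Assumed configuration model for illustration only.
data class VideoConfig(
    val durationSec: Int? = null,  // target length; null = keep the template default
    val filter: String? = null,    // e.g. "warm" or "mono"
    val frame: String? = null,     // decorative border style
    val title: String? = null,     // opening caption
)

fun describeRender(config: VideoConfig, materialIds: List<String>): String =
    "render ${materialIds.size} materials in the first sequence, " +
        "duration=${config.durationSec ?: "default"}s, filter=${config.filter ?: "none"}, " +
        "frame=${config.frame ?: "none"}, title=${config.title ?: "none"}"

fun main() {
    val config = VideoConfig(durationSec = 15, filter = "warm", title = "Weekend")
    println(describeRender(config, listOf("first", "second", "third")))
}
```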
In one possible design, the first interface further includes a fourth control, and the method further comprises: after generating the first video, the electronic device saves the first video in response to an operation on the fourth control. In this way, the user can conveniently view and edit the generated video later.
In one possible design, the method specifically includes: the electronic device determines the scene type corresponding to the first material, the scene type corresponding to the second material and the scene type corresponding to the third material; based on the scene type corresponding to the first material, the scene type corresponding to the second material, the scene type corresponding to the third material and the scene type set for each segment in a first video template, the electronic device determines the material matching the scene type corresponding to a first segment, the first segment being any segment in the first video template; the arrangement order of the materials corresponding to all the segments in the first video template is the first sequence; based on the scene type corresponding to the first material, the scene type corresponding to the second material, the scene type corresponding to the third material and the scene type set for each segment in a second video template, the electronic device determines the material matching the scene type corresponding to a second segment, the second segment being any segment in the second video template; the arrangement order of the materials corresponding to all the segments in the second video template is the second sequence. The first video template is different from the second video template, each segment in the first video corresponds to a segment in the first video template, and each segment in the second video corresponds to a segment in the second video template.
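One way to read this design is sketched below: each template segment is matched to a material whose scene type equals, or is order-adjacent to, the segment's scene type, and the per-segment assignment, read in segment order, is the arrangement sequence. All names are assumptions, and the adjacency test here is a placeholder (a concrete version follows the scene-type design further down).

```kotlin
import kotlin.math.abs

// Assumed matching sketch, not the patented algorithm.
enum class SceneType { CLOSE, MIDDLE, FAR }

data class Material(val id: String, val scene: SceneType)

// Placeholder adjacency: neighbouring positions in the preset order.
fun orderAdjacent(a: SceneType, b: SceneType): Boolean = abs(a.ordinal - b.ordinal) == 1

fun materialForSegment(segmentScene: SceneType, candidates: List<Material>): Material? =
    candidates.firstOrNull { it.scene == segmentScene }                 // exact match first
        ?: candidates.firstOrNull { orderAdjacent(it.scene, segmentScene) }

// The materials chosen for the template's segments, in segment order,
// form the arrangement sequence for that template.
fun arrangementFor(templateScenes: List<SceneType>, materials: List<Material>): List<Material?> {
    val pool = materials.toMutableList()
    return templateScenes.map { scene -> materialForSegment(scene, pool)?.also { pool.remove(it) } }
}

fun main() {
    val materials = listOf(
        Material("first", SceneType.CLOSE),
        Material("second", SceneType.MIDDLE),
        Material("third", SceneType.FAR),
    )
    println(arrangementFor(listOf(SceneType.MIDDLE, SceneType.CLOSE, SceneType.FAR), materials)
        .map { it?.id })
}
```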
In one possible design, the method further comprises: the electronic device generates the first video from the first material, the second material and the third material according to the first sequence and the camera movement, speed and transition effects set for each segment in the first video template; and the electronic device generates the second video from the first material, the second material and the third material according to the second sequence and the camera movement, speed and transition effects set for each segment in the second video template.
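The per-segment settings named here (camera movement, speed, transition) could be modelled roughly as below; the enum values, field names and the print-only "render plan" are illustrative assumptions rather than the patent's data structures.

```kotlin
// Assumed per-segment style model; a real renderer would apply these effects
// to the frames rather than print a plan.
enum class CameraMove { PAN_LEFT, PAN_RIGHT, ZOOM_IN, ZOOM_OUT, STATIC }
enum class Transition { CUT, FADE, WIPE }

data class SegmentStyle(
    val cameraMove: CameraMove,
    val speed: Double,           // 1.0 = normal, 0.5 = slow motion, 2.0 = fast
    val transitionOut: Transition,
)

fun renderPlan(materialIds: List<String>, styles: List<SegmentStyle>): List<String> =
    materialIds.zip(styles) { id, style ->
        "segment($id): move=${style.cameraMove}, speed=${style.speed}x, out=${style.transitionOut}"
    }

fun main() {
    val styles = listOf(
        SegmentStyle(CameraMove.ZOOM_IN, 1.0, Transition.FADE),
        SegmentStyle(CameraMove.PAN_RIGHT, 0.5, Transition.CUT),
        SegmentStyle(CameraMove.STATIC, 1.0, Transition.WIPE),
    )
    renderPlan(listOf("first", "second", "third"), styles).forEach(::println)
}
```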
In one possible design, when the first material is a picture material, the method specifically includes: when the scene type corresponding to the first material is the same as the scene type corresponding to the first segment, or the scene type corresponding to the first material is adjacent in the preset order to the scene type corresponding to the first segment, the electronic device determines the first material to be a material matching the scene type corresponding to the first segment; and when the scene type corresponding to the first material is the same as the scene type corresponding to the second segment, or the scene type corresponding to the first material is adjacent in the preset order to the scene type corresponding to the second segment, the electronic device determines the first material to be a material matching the scene type corresponding to the second segment.
In one possible design, when the first material is a video material, the method specifically includes: when the scene type corresponding to a fourth material is the same as the scene type corresponding to the first segment, or is adjacent in the preset order to the scene type corresponding to the first segment, and the duration of the fourth material is equal to the duration of the first segment, the electronic device clips the fourth material out of the first material and determines the fourth material to be a material matching the scene type corresponding to the first segment; when the scene type corresponding to the fourth material is the same as the scene type corresponding to the second segment, or is adjacent in the preset order to the scene type corresponding to the second segment, and the duration of the fourth material is equal to the duration of the second segment, the electronic device clips the fourth material out of the first material and determines the fourth material to be a material matching the scene type corresponding to the second segment; the fourth material is part or all of the first material.
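A simplified sketch of clipping such a sub-segment out of a longer video material follows; taking the clip from the start of the material and checking only for an exact scene-type match are simplifying assumptions of this example.

```kotlin
// Assumed clipping sketch, not the patented logic.
enum class SceneType { CLOSE, MIDDLE, FAR }

data class VideoMaterial(val id: String, val durationSec: Double, val scene: SceneType)
data class Clip(val sourceId: String, val startSec: Double, val endSec: Double)

fun clipForSegment(source: VideoMaterial, segmentScene: SceneType,
                   segmentDurationSec: Double): Clip? {
    if (source.scene != segmentScene) return null            // scene check kept trivial here
    if (source.durationSec < segmentDurationSec) return null // cannot fill the segment
    // Take the clip from the start; a real picker could score candidate intervals instead.
    return Clip(source.id, 0.0, segmentDurationSec)
}

fun main() {
    val source = VideoMaterial("holiday.mp4", durationSec = 12.0, scene = SceneType.FAR)
    println(clipForSegment(source, SceneType.FAR, segmentDurationSec = 2.5))
}
```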
In one possible design, the scene types, in the order given by the preset rule, are the close range, the middle range and the far range; the scene type adjacent in order to the close range is the middle range, the scene types adjacent to the middle range are the close range and the far range, and the scene type adjacent to the far range is the middle range.
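Written out as a small helper (the enum and function names are illustrative, not from the patent), the adjacency rule stated above reads as follows.

```kotlin
// Scene types in the preset order: close range, middle range, far range.
enum class SceneType { CLOSE, MIDDLE, FAR }

// Adjacency as stated above: close-middle and middle-far are order-adjacent.
fun neighbours(scene: SceneType): Set<SceneType> = when (scene) {
    SceneType.CLOSE -> setOf(SceneType.MIDDLE)
    SceneType.MIDDLE -> setOf(SceneType.CLOSE, SceneType.FAR)
    SceneType.FAR -> setOf(SceneType.MIDDLE)
}

fun isOrderAdjacent(a: SceneType, b: SceneType): Boolean = b in neighbours(a)

fun main() {
    println(isOrderAdjacent(SceneType.CLOSE, SceneType.MIDDLE)) // true
    println(isOrderAdjacent(SceneType.CLOSE, SceneType.FAR))    // false
}
```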
In one possible design, the first application is a gallery application of the electronic device.
In a second aspect, the present application provides an electronic device, comprising: a memory and a processor; the memory is used for storing program instructions; the processor is configured to invoke program instructions in the memory to cause the electronic device to perform the video generation method of the first aspect and any of the possible designs of the first aspect.
In a third aspect, the present application provides a chip system for use in an electronic device comprising a memory, a display screen and a sensor; the chip system includes: a processor; when the processor executes computer instructions stored in the memory, the electronic device performs the video generation method of the first aspect and any of the possible designs of the first aspect.
In a fourth aspect, the present application provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, causes an electronic device to implement the video generation method of the first aspect and any one of the possible designs of the first aspect.
In a fifth aspect, the present application provides a computer program product comprising execution instructions stored in a readable storage medium; at least one processor of the electronic device can read the execution instructions from the readable storage medium, and execution of these instructions by the at least one processor causes the electronic device to implement the video generation method of the first aspect and any one of the possible designs of the first aspect.
Drawings
Fig. 1 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
FIG. 2 is a block diagram of a software architecture of an electronic device according to an embodiment of the present application;
fig. 3A-3T are schematic diagrams of a man-machine interaction interface according to an embodiment of the present application;
fig. 4A to fig. 4J are schematic diagrams illustrating the effect of applying camera movement to a picture material according to an embodiment of the present application;
fig. 5 is a schematic diagram of the effect of applying different speeds to a picture material according to an embodiment of the present application;
fig. 6 is a schematic diagram of a transition effect applied to picture materials according to an embodiment of the present application;
FIG. 7 is a schematic diagram of the scene types of a character-type story provided in an embodiment of the present application;
fig. 8A-8E are schematic views illustrating playback of a video generated based on a material according to an embodiment of the present application;
fig. 9 is a schematic diagram of a video generating method according to an embodiment of the present application.
Detailed Description
In the embodiments of the present application, "at least one" means one or more, and "a plurality" means two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may indicate: A alone, both A and B, or B alone, where A and B may each be singular or plural. The character "/" generally indicates that the associated objects before and after it are in an "or" relationship. "At least one of" the following items or a similar expression means any combination of these items, including a single item or any combination of multiple items. For example, at least one of a, b, or c may represent: a alone; b alone; c alone; a and b; a and c; b and c; or a, b and c together, where a, b and c may each be singular or plural. Furthermore, the terms "first", "second" and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
Referring to fig. 1, fig. 1 is a schematic structural diagram of an electronic device according to an embodiment of the present application. As shown in fig. 1, the electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (universal serial bus, USB) interface 130, a charge management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, keys 190, a motor 191, an indicator 192, a camera 193, a display 194, a user identification module (subscriber identification module, SIM) card interface 195, and the like. The sensor module 180 may include a pressure sensor 180A, a gyro sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It is to be understood that the structure illustrated herein does not constitute a specific limitation on the electronic device 100. In other embodiments, electronic device 100 may include more or fewer components than shown, or certain components may be combined, or certain components may be split, or different arrangements of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units, such as: the processor 110 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a memory, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural network processor (neural-network processing unit, NPU), etc. Wherein the different processing units may be separate devices or may be integrated in one or more processors.
The controller may be a neural hub and a command center of the electronic device 100, among others. The controller can generate operation control signals according to the instruction operation codes and the time sequence signals to finish the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that the processor 110 has just used or recycled. If the processor 110 needs to reuse the instruction or data, it can be called directly from the memory. Repeated accesses are avoided and the latency of the processor 110 is reduced, thereby improving the efficiency of the system.
In some embodiments, the processor 110 may include one or more interfaces. The interfaces may include an integrated circuit (inter-integrated circuit, I2C) interface, an integrated circuit built-in audio (inter-integrated circuit sound, I2S) interface, a pulse code modulation (pulse code modulation, PCM) interface, a universal asynchronous receiver transmitter (universal asynchronous receiver/transmitter, UART) interface, a mobile industry processor interface (mobile industry processor interface, MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (subscriber identity module, SIM) interface, and/or a universal serial bus (universal serial bus, USB) interface, among others.
The I2C interface is a bidirectional synchronous serial bus comprising a serial data line (SDA) and a serial clock line (SCL). In some embodiments, the processor 110 may contain multiple sets of I2C buses. The processor 110 may be coupled to the touch sensor 180K, a charger, a flash, the camera 193, and the like through different I2C bus interfaces. For example, the processor 110 may be coupled to the touch sensor 180K through an I2C interface, so that the processor 110 communicates with the touch sensor 180K through the I2C bus interface to implement a touch function of the electronic device 100.
The I2S interface may be used for audio communication. In some embodiments, the processor 110 may contain multiple sets of I2S buses. The processor 110 may be coupled to the audio module 170 via an I2S bus to enable communication between the processor 110 and the audio module 170. In some embodiments, the audio module 170 may transmit an audio signal to the wireless communication module 160 through the I2S interface, to implement a function of answering a call through the bluetooth headset.
PCM interfaces may also be used for audio communication to sample, quantize and encode analog signals. In some embodiments, the audio module 170 and the wireless communication module 160 may be coupled through a PCM bus interface. In some embodiments, the audio module 170 may also transmit audio signals to the wireless communication module 160 through the PCM interface to implement a function of answering a call through the bluetooth headset. Both the I2S interface and the PCM interface may be used for audio communication.
The UART interface is a universal serial data bus for asynchronous communications. The bus may be a bi-directional communication bus. It converts the data to be transmitted between serial communication and parallel communication. In some embodiments, a UART interface is typically used to connect the processor 110 with the wireless communication module 160. For example: the processor 110 communicates with a bluetooth module in the wireless communication module 160 through a UART interface to implement a bluetooth function. In some embodiments, the audio module 170 may transmit an audio signal to the wireless communication module 160 through a UART interface, to implement a function of playing music through a bluetooth headset.
The MIPI interface may be used to connect the processor 110 to peripheral devices such as a display 194, a camera 193, and the like. The MIPI interfaces include camera serial interfaces (camera serial interface, CSI), display serial interfaces (display serial interface, DSI), and the like. In some embodiments, processor 110 and camera 193 communicate through a CSI interface to implement the photographing functions of electronic device 100. The processor 110 and the display 194 communicate via a DSI interface to implement the display functionality of the electronic device 100.
The GPIO interface may be configured by software. The GPIO interface may be configured as a control signal or as a data signal. In some embodiments, a GPIO interface may be used to connect the processor 110 with the camera 193, the display 194, the wireless communication module 160, the audio module 170, the sensor module 180, and the like. The GPIO interface may also be configured as an I2C interface, an I2S interface, a UART interface, an MIPI interface, etc.
The USB interface 130 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type C interface, or the like. The USB interface 130 may be used to connect a charger to charge the electronic device 100, and may also be used to transfer data between the electronic device 100 and a peripheral device. And can also be used for connecting with a headset, and playing audio through the headset. The interface may also be used to connect other electronic devices, such as AR devices, etc.
It should be understood that the connection between the modules illustrated in the present application is merely illustrative, and does not limit the structure of the electronic device 100. In other embodiments, the electronic device 100 may also employ different interfaces in the above embodiments, or a combination of interfaces.
The charge management module 140 is configured to receive a charge input from a charger. The charger can be a wireless charger or a wired charger. In some wired charging embodiments, the charge management module 140 may receive a charging input of a wired charger through the USB interface 130. In some wireless charging embodiments, the charge management module 140 may receive wireless charging input through a wireless charging coil of the electronic device 100. The charging management module 140 may also supply power to the electronic device through the power management module 141 while charging the battery 142.
The power management module 141 is used for connecting the battery 142, and the charge management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140 and provides power to the processor 110, the internal memory 121, the external memory, the display 194, the camera 193, the wireless communication module 160, and the like. The power management module 141 may also be configured to monitor battery capacity, battery cycle number, battery health (leakage, impedance) and other parameters. In other embodiments, the power management module 141 may also be provided in the processor 110. In other embodiments, the power management module 141 and the charge management module 140 may be disposed in the same device.
The wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 100 may be used to cover a single or multiple communication bands. Different antennas may also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed into a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 150 may provide a solution for wireless communication including 2G/3G/4G/5G, etc., applied to the electronic device 100. The mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (low noise amplifier, LNA), etc. The mobile communication module 150 may receive electromagnetic waves from the antenna 1, perform processes such as filtering, amplifying, and the like on the received electromagnetic waves, and transmit the processed electromagnetic waves to the modem processor for demodulation. The mobile communication module 150 can amplify the signal modulated by the modem processor, and convert the signal into electromagnetic waves through the antenna 1 to radiate. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the processor 110. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be provided in the same device as at least some of the modules of the processor 110.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating the low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then transmits the demodulated low frequency baseband signal to the baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs sound signals through an audio device (not limited to the speaker 170A, the receiver 170B, etc.), or displays images or video through the display screen 194. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be provided in the same device as the mobile communication module 150 or other functional module, independent of the processor 110.
The wireless communication module 160 may provide solutions for wireless communication including wireless local area network (wireless local area networks, WLAN) (e.g., wireless fidelity (wireless fidelity, wi-Fi) network), bluetooth (BT), global navigation satellite system (global navigation satellite system, GNSS), frequency modulation (frequency modulation, FM), near field wireless communication technology (near field communication, NFC), infrared technology (IR), etc., as applied to the electronic device 100. The wireless communication module 160 may be one or more devices that integrate at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, modulates the electromagnetic wave signals, filters the electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, frequency modulate it, amplify it, and convert it to electromagnetic waves for radiation via the antenna 2.
In some embodiments, antenna 1 and mobile communication module 150 of electronic device 100 are coupled, and antenna 2 and wireless communication module 160 are coupled, such that electronic device 100 may communicate with a network and other devices through wireless communication techniques. The wireless communication techniques may include the Global System for Mobile communications (global system for mobile communications, GSM), general packet radio service (general packet radio service, GPRS), code division multiple access (code division multiple access, CDMA), wideband code division multiple access (wideband code division multiple access, WCDMA), time division code division multiple access (time-division code division multiple access, TD-SCDMA), long term evolution (long term evolution, LTE), BT, GNSS, WLAN, NFC, FM, and/or IR techniques, among others. The GNSS may include a global satellite positioning system (global positioning system, GPS), a global navigation satellite system (global navigation satellite system, GLONASS), a beidou satellite navigation system (beidou navigation satellite system, BDS), a quasi zenith satellite system (quasi-zenith satellite system, QZSS) and/or a satellite based augmentation system (satellite based augmentation systems, SBAS).
The electronic device 100 implements display functions through a GPU, a display screen 194, an application processor, and the like. The GPU is a microprocessor for image processing, and is connected to the display 194 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 194 is used to display images, videos, and the like. The display 194 includes a display panel. The display panel may employ a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini LED, a Micro LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like. In some embodiments, the electronic device 100 may include 1 or N display screens 194, N being a positive integer greater than 1.
The electronic device 100 may implement photographing functions through an ISP, a camera 193, a video codec, a GPU, a display screen 194, an application processor, and the like.
The ISP is used to process data fed back by the camera 193. For example, when photographing, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electric signal, and the camera photosensitive element transmits the electric signal to the ISP for processing and is converted into an image visible to naked eyes. ISP can also optimize the noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in the camera 193.
The camera 193 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image onto the photosensitive element. The photosensitive element may be a charge coupled device (charge coupled device, CCD) or a Complementary Metal Oxide Semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, which is then transferred to the ISP to be converted into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard RGB, YUV, or the like format. In some embodiments, electronic device 100 may include 1 or N cameras 193, N being a positive integer greater than 1.
The digital signal processor is used for processing digital signals, and can process other digital signals besides digital image signals. For example, when the electronic device 100 selects a frequency bin, the digital signal processor is used to fourier transform the frequency bin energy, or the like.
Video codecs are used to compress or decompress digital video. The electronic device 100 may support one or more video codecs. In this way, the electronic device 100 may play or record video in a variety of encoding formats, such as moving picture experts group (MPEG)-1, MPEG-2, MPEG-3, MPEG-4, and so on.
The NPU is a neural-network (NN) computing processor, and can rapidly process input information by referencing a biological neural network structure, for example, referencing a transmission mode between human brain neurons, and can also continuously perform self-learning. Applications such as intelligent awareness of the electronic device 100 may be implemented through the NPU, for example: image recognition, face recognition, speech recognition, text understanding, etc.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to enable expansion of the memory capabilities of the electronic device 100. The external memory card communicates with the processor 110 through an external memory interface 120 to implement data storage functions. For example, files such as music, video, etc. are stored in an external memory card.
The internal memory 121 may be used to store computer executable program code including instructions. The processor 110 executes various functional applications of the electronic device 100 and data processing by executing instructions stored in the internal memory 121. The internal memory 121 may include a storage program area and a storage data area. The storage program area may store an application program (such as a sound playing function, an image playing function, etc.) required for at least one function of the operating system, etc. The storage data area may store data created during use of the electronic device 100 (e.g., audio data, phonebook, etc.), and so on. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (universal flash storage, UFS), and the like.
The electronic device 100 may implement audio functions through an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, an application processor, and the like. Such as music playing, recording, etc.
The audio module 170 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. The audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be disposed in the processor 110, or a portion of the functional modules of the audio module 170 may be disposed in the processor 110.
The speaker 170A, also referred to as a "horn," is used to convert audio electrical signals into sound signals. The electronic device 100 may listen to music, or to hands-free conversations, through the speaker 170A.
The receiver 170B, also referred to as an "earpiece", is used to convert an audio electrical signal into a sound signal. When the electronic device 100 is answering a telephone call or a voice message, voice can be received by placing the receiver 170B close to the human ear.
The microphone 170C, also referred to as a "mic" or a "sound transmitter", is used to convert sound signals into electrical signals. When making a call or sending voice information, the user can speak with the mouth close to the microphone 170C to input a sound signal into the microphone 170C. The electronic device 100 may be provided with at least one microphone 170C. In other embodiments, the electronic device 100 may be provided with two microphones 170C, which can implement a noise reduction function in addition to collecting sound signals. In other embodiments, the electronic device 100 may also be provided with three, four, or more microphones 170C to collect sound signals, reduce noise, identify sound sources, implement directional recording functions, and the like.
The earphone interface 170D is used to connect a wired earphone. The earphone interface 170D may be the USB interface 130, a 3.5 mm open mobile terminal platform (OMTP) standard interface, or a cellular telecommunications industry association of the USA (CTIA) standard interface.
The pressure sensor 180A is used to sense a pressure signal and may convert the pressure signal into an electrical signal. In some embodiments, the pressure sensor 180A may be disposed on the display screen 194. There are various types of pressure sensors 180A, such as a resistive pressure sensor, an inductive pressure sensor, and a capacitive pressure sensor. A capacitive pressure sensor may comprise at least two parallel plates made of conductive material. The capacitance between the electrodes changes when a force is applied to the pressure sensor 180A. The electronic device 100 determines the strength of the pressure from the change in capacitance. When a touch operation is applied to the display screen 194, the electronic device 100 detects the intensity of the touch operation according to the pressure sensor 180A. The electronic device 100 may also calculate the location of the touch based on the detection signal of the pressure sensor 180A. In some embodiments, touch operations that act on the same touch location but with different touch operation intensities may correspond to different operation instructions. For example, when a touch operation whose intensity is less than a first pressure threshold acts on the short message application icon, an instruction for viewing the short message is executed; when a touch operation whose intensity is greater than or equal to the first pressure threshold acts on the short message application icon, an instruction for creating a new short message is executed.
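As a toy illustration of the threshold behaviour just described, the snippet below maps touch force to an action; the threshold value and the action strings are invented for this example and are not from the patent.

```kotlin
// Same touch location, different instruction depending on touch force.
fun instructionForMessageIconTouch(force: Float, firstPressureThreshold: Float): String =
    if (force < firstPressureThreshold) "view the SMS message" else "create a new SMS message"

fun main() {
    println(instructionForMessageIconTouch(force = 0.2f, firstPressureThreshold = 0.5f))
    println(instructionForMessageIconTouch(force = 0.8f, firstPressureThreshold = 0.5f))
}
```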
The gyro sensor 180B may be used to determine a motion gesture of the electronic device 100. In some embodiments, the angular velocity of electronic device 100 about three axes (i.e., x, y, and z axes) may be determined by gyro sensor 180B. The gyro sensor 180B may be used for photographing anti-shake. For example, when the shutter is pressed, the gyro sensor 180B detects the shake angle of the electronic device 100, calculates the distance to be compensated by the lens module according to the angle, and makes the lens counteract the shake of the electronic device 100 through the reverse motion, so as to realize anti-shake. The gyro sensor 180B may also be used for navigating, somatosensory game scenes.
The air pressure sensor 180C is used to measure air pressure. In some embodiments, electronic device 100 calculates altitude from barometric pressure values measured by barometric pressure sensor 180C, aiding in positioning and navigation.
The magnetic sensor 180D includes a Hall sensor. The electronic device 100 may detect the opening and closing of a flip holster using the magnetic sensor 180D. In some embodiments, when the electronic device 100 is a flip phone, the electronic device 100 may detect the opening and closing of the flip according to the magnetic sensor 180D. Features such as automatic unlocking upon opening the flip are then set according to the detected open or closed state of the holster or of the flip.
The acceleration sensor 180E may detect the magnitude of acceleration of the electronic device 100 in various directions (typically along three axes). The magnitude and direction of gravity may be detected when the electronic device 100 is stationary. It may also be used to recognize the posture of the electronic device, and is applied to landscape/portrait switching, pedometers and other applications.
A distance sensor 180F for measuring a distance. The electronic device 100 may measure the distance by infrared or laser. In some embodiments, the electronic device 100 may range using the distance sensor 180F to achieve quick focus.
The proximity light sensor 180G may include, for example, a Light Emitting Diode (LED) and a light detector, such as a photodiode. The light emitting diode may be an infrared light emitting diode. The electronic device 100 emits infrared light outward through the light emitting diode. The electronic device 100 detects infrared reflected light from nearby objects using a photodiode. When sufficient reflected light is detected, it may be determined that there is an object in the vicinity of the electronic device 100. When insufficient reflected light is detected, the electronic device 100 may determine that there is no object in the vicinity of the electronic device 100. The electronic device 100 can detect that the user holds the electronic device 100 close to the ear by using the proximity light sensor 180G, so as to automatically extinguish the screen for the purpose of saving power. The proximity light sensor 180G may also be used in holster mode, pocket mode to automatically unlock and lock the screen.
The ambient light sensor 180L is used to sense ambient light level. The electronic device 100 may adaptively adjust the brightness of the display 194 based on the perceived ambient light level. The ambient light sensor 180L may also be used to automatically adjust white balance when taking a photograph. Ambient light sensor 180L may also cooperate with proximity light sensor 180G to detect whether electronic device 100 is in a pocket to prevent false touches.
The fingerprint sensor 180H is used to collect a fingerprint. The electronic device 100 may utilize the collected fingerprint feature to unlock the fingerprint, access the application lock, photograph the fingerprint, answer the incoming call, etc.
The temperature sensor 180J is used to detect temperature. In some embodiments, the electronic device 100 executes a temperature processing strategy using the temperature detected by the temperature sensor 180J. For example, when the temperature reported by the temperature sensor 180J exceeds a threshold, the electronic device 100 reduces the performance of a processor located near the temperature sensor 180J, so as to reduce power consumption and implement thermal protection. In other embodiments, when the temperature is below another threshold, the electronic device 100 heats the battery 142 to prevent the low temperature from causing the electronic device 100 to shut down abnormally. In other embodiments, when the temperature is below a further threshold, the electronic device 100 boosts the output voltage of the battery 142 to avoid abnormal shutdown caused by low temperature.
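The tiered policy can be pictured as a small decision function; the temperature thresholds below are made-up placeholders, not values from the patent.

```kotlin
// Assumed thresholds, for illustration only: most restrictive condition first.
fun thermalAction(tempCelsius: Int): String = when {
    tempCelsius > 45 -> "reduce performance of the processor near the sensor"
    tempCelsius < -10 -> "boost the battery output voltage to avoid abnormal shutdown"
    tempCelsius < 0 -> "heat the battery to avoid abnormal shutdown"
    else -> "no action"
}

fun main() {
    listOf(50, -5, -20, 25).forEach { println("$it C -> ${thermalAction(it)}") }
}
```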
The touch sensor 180K is also referred to as a "touch panel". The touch sensor 180K may be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 form a touchscreen, also called a "touch screen". The touch sensor 180K is used to detect a touch operation acting on or near it. The touch sensor may pass the detected touch operation to the application processor to determine the type of touch event. Visual output related to the touch operation may be provided through the display 194. In other embodiments, the touch sensor 180K may also be disposed on the surface of the electronic device 100 at a location different from that of the display 194.
The bone conduction sensor 180M may acquire a vibration signal. In some embodiments, the bone conduction sensor 180M may acquire the vibration signal of the vibrating bone of the human vocal part. The bone conduction sensor 180M may also contact the human pulse to receive a blood pressure beating signal. In some embodiments, the bone conduction sensor 180M may also be provided in a headset to form a bone conduction headset. The audio module 170 may parse out a voice signal based on the vibration signal of the vibrating bone of the vocal part obtained by the bone conduction sensor 180M, so as to implement a voice function. The application processor may parse heart rate information based on the blood pressure beating signal acquired by the bone conduction sensor 180M, so as to implement a heart rate detection function.
The keys 190 include a power-on key, a volume key, etc. The keys 190 may be mechanical keys. Or may be a touch key. The electronic device 100 may receive key inputs, generating key signal inputs related to user settings and function controls of the electronic device 100.
The motor 191 may generate a vibration cue. The motor 191 may be used for incoming call vibration alerting as well as for touch vibration feedback. For example, touch operations acting on different applications (e.g., photographing, audio playing, etc.) may correspond to different vibration feedback effects. The motor 191 may also correspond to different vibration feedback effects by touching different areas of the display screen 194. Different application scenarios (such as time reminding, receiving information, alarm clock, game, etc.) can also correspond to different vibration feedback effects. The touch vibration feedback effect may also support customization.
The indicator 192 may be an indicator light, may be used to indicate a state of charge, a change in charge, a message indicating a missed call, a notification, etc.
The SIM card interface 195 is used to connect a SIM card. A SIM card may be inserted into the SIM card interface 195 or removed from the SIM card interface 195 to achieve contact with and separation from the electronic device 100. The electronic device 100 may support 1 or N SIM card interfaces, N being a positive integer greater than 1. The SIM card interface 195 may support Nano SIM cards, Micro SIM cards, and the like. Multiple cards may be inserted into the same SIM card interface 195 at the same time. The types of the multiple cards may be the same or different. The SIM card interface 195 may also be compatible with different types of SIM cards. The SIM card interface 195 may also be compatible with an external memory card. The electronic device 100 interacts with the network through the SIM card to implement functions such as calls and data communication. In some embodiments, the electronic device 100 employs an eSIM, i.e., an embedded SIM card. The eSIM card can be embedded in the electronic device 100 and cannot be separated from the electronic device 100.
The software system of the electronic device 100 may employ a layered architecture, an event-driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture. In this embodiment, an Android system with a layered architecture is taken as an example to illustrate the software structure of the electronic device 100. The type of the operating system of the electronic device is not limited in the embodiments of the application; for example, it may be an Android system, a Linux system, a Windows system, an iOS system, a HarmonyOS system (harmony operating system, HarmonyOS), or the like.
Referring to fig. 2, fig. 2 is a software block diagram of an electronic device according to an embodiment of the present application. As shown in fig. 2, the layered architecture divides the software into several layers, each with a clear role and division of labour. The layers communicate with each other through software interfaces. In some embodiments, the Android system is divided into four layers, which are, from top to bottom, the application layer (APP), the application framework layer (framework), the Android runtime (Android runtime) and system libraries, and the kernel layer (kernel).
The application layer may include a series of application packages.
As shown in fig. 2, the application package may include Applications (APP) such as camera, gallery, calendar, call, map, navigation, WLAN, bluetooth, music, video, game, chat, shopping, travel, instant messaging (e.g., short message), smart home, device control, etc.
The intelligent home application can be used for controlling or managing home equipment with networking function. For example, home appliances may include electric lights, televisions, and air conditioners. For another example, the home appliances may also include a burglarproof door lock, a speaker, a floor sweeping robot, a socket, a body fat scale, a desk lamp, an air purifier, a refrigerator, a washing machine, a water heater, a microwave oven, an electric cooker, a curtain, a fan, a television, a set-top box, a door and window, and the like.
In addition, the application package may further include: the home screen (i.e. desktop), the negative screen, the control center, the notification center, etc. application programs.
The negative one screen, which may be referred to as the "-1 screen", is the user interface (UI) displayed when the user slides rightward on the home screen of the electronic device until reaching the leftmost screen. For example, the negative one screen may be used to place shortcut service functions and notification messages, such as global search, shortcut entries to pages of applications (payment codes, WeChat, etc.), instant messages and reminders (express information, expense information, commute road conditions, driving travel information, schedule information, etc.), and followed content (football section, basketball section, stock information, etc.). The control center is the slide-up message notification bar of the electronic device, that is, the user interface displayed when the user starts a slide-up operation from the bottom of the electronic device. The notification center is the pull-down message notification bar of the electronic device, that is, the user interface displayed when the user starts a pull-down operation from the top of the electronic device.
The application framework layer provides an application programming interface (application programming interface, API) and programming framework for application programs of the application layer. The application framework layer includes a number of predefined functions.
As shown in FIG. 2, the application framework layer may include a window manager, a content provider, a view system, a telephony manager, a resource manager, a notification manager, and the like.
A window manager (window manager) is used to manage window programs, such as managing window states, attributes, view additions, deletions and updates, window order, and message collection and processing. The window manager can obtain the size of the display screen, determine whether there is a status bar, lock the screen, capture the screen, and the like. The window manager is also the entry point through which the outside world accesses windows.
The content provider is used to store and retrieve data and make such data accessible to the application. The data may include video, images, audio, calls made and received, browsing history and bookmarks, phonebooks, etc.
The view system includes visual controls, such as controls to display text, controls to display pictures, and the like. The view system may be used to build applications. The display interface may be composed of one or more views. For example, a display interface including a text message notification icon may include a view displaying text and a view displaying a picture.
The telephony manager is used to provide the communication functions of the electronic device 100, such as the management of call status (including connected, hung up, etc.).
A resource manager (resource manager) provides various resources for an application program, such as localization strings, icons, pictures, layout files, video files, and the like.
The notification manager allows applications to display notification information in the status bar and can be used to convey notification-type messages, which may disappear automatically after a short stay without requiring user interaction. For example, the notification manager is used to notify that a download is complete, give message reminders, and the like. The notification manager may also present notifications in the system top status bar in the form of a chart or scroll-bar text, such as notifications of applications running in the background, or notifications that appear on the screen in the form of a dialog window. For example, text information is prompted in the status bar, a prompt tone is emitted, the electronic device vibrates, an indicator light blinks, and the like.
The Android runtime includes a core library and virtual machines. The Android runtime is responsible for scheduling and managing the Android system.
The core library consists of two parts: one part is the functions that the Java language needs to call, and the other part is the core library of the Android system.
The application layer and the application framework layer run in a virtual machine. The virtual machine executes java files of the application program layer and the application program framework layer as binary files. The virtual machine is used for executing the functions of object life cycle management, stack management, thread management, security and exception management, garbage collection and the like.
The system library may include a plurality of functional modules. For example: surface manager (surface manager), media library (media library), three-dimensional graphics processing library (e.g., openGLES), 2D graphics engine (e.g., SGL), etc.
The surface manager is used to manage the display subsystem and provides a fusion of 2D and 3D layers for multiple applications.
The media library supports playback and recording of a variety of commonly used audio and video formats, as well as still image files, and the like. The media library may support a variety of audio and video encoding formats, such as MPEG4, H.264, MP3, AAC, AMR, JPG, PNG, etc.
The three-dimensional graphic processing library is used for realizing three-dimensional graphic drawing, image rendering, synthesis, layer processing and the like.
The 2D graphics engine is a drawing engine for 2D drawing.
The kernel layer is a layer between hardware and software. The kernel layer at least comprises a display driver, a camera driver, an audio driver, and a sensor driver.
The workflow of the software and hardware of the electronic device 100 is illustrated below in connection with a scenario in which sound is played using a smart speaker.
When the touch sensor 180K receives a touch operation, a corresponding hardware interrupt is issued to the kernel layer. The kernel layer processes the touch operation into an original input event (including information such as the touch coordinates and the time stamp of the touch operation). The original input event is stored at the kernel layer. The application framework layer acquires the original input event from the kernel layer and identifies the control corresponding to the input event. Taking the touch operation being a touch click operation and the control corresponding to the click operation being the control of a smart speaker icon as an example, the smart speaker application calls an interface of the application framework layer to start the smart speaker application, then starts the audio driver by calling the kernel layer, and converts an audio electrical signal into a sound signal through the speaker 170A.
It is to be understood that the structure illustrated herein does not constitute a specific limitation on the electronic device 100. In other embodiments, electronic device 100 may include more or fewer components than shown, or certain components may be combined, or certain components may be split, or different arrangements of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The technical solutions involved in the following embodiments may be implemented in the electronic device 100 having the above-described hardware architecture and software architecture.
The embodiments of the present application provide a video generation method and an electronic device. The electronic device identifies the scene type of the material, matches a suitable video template, adjusts the arrangement order of the materials based on the scene types set in the video template, and automatically generates a video by combining the moving mirror, speed, and transition set in the video template. The generated video therefore has a coherent visual flow and high quality, the shot-like and cinematic feel of the video is enhanced, and the user experience is improved. In addition, the user may manually adjust parameters such as the duration, filter, and frames of the video, so that actual user demands are met and the variety of the video is enriched.
The electronic device may be a mobile phone, a tablet computer, a wearable device, a vehicle-mounted device, an augmented reality (augmented reality, AR)/Virtual Reality (VR) device, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a personal digital assistant (personal digital assistant, PDA), an intelligent television, a smart screen, a high-definition television, a 4K television, an intelligent sound box, an intelligent projector, or the like, and the specific type of the electronic device is not limited in the embodiments of the present application.
In the following, some terms related to the embodiments of the present application are explained for easy understanding by those skilled in the art.
1. Material may be understood as picture material or video material stored in an electronic device. It should be noted that the picture material and the photo material mentioned in the embodiment of the present application have the same meaning. The picture material may be obtained by shooting by the electronic device, may be obtained by downloading by the electronic device from a server, or may be received by the electronic device from other electronic devices, which is not limited in the embodiment of the present application.
2. The scene type may be understood as the difference in the range that the photographed subject occupies in the picture, caused by the difference in distance between the photographing device and the subject. The photographing device may be the electronic device itself, or a device communicatively connected to the electronic device, which is not limited in the embodiments of the present application.
In the embodiments of the present application, the division of scene types may be implemented in various ways. It should be noted that the "scene" mentioned in the embodiments of the present application refers to the scene type.
In some embodiments, the scene types may be divided into three, from near to far: close scene, medium scene, and distant scene. For example, a close scene shows the human body above the chest, a medium scene shows the human body above the thighs, and a distant scene covers the cases other than the close scene and the medium scene.
In other embodiments, the scene types may be divided into five: close-up, close scene, medium scene, panorama, and distant scene. For example, a close-up shows the human body above the shoulders, a close scene shows the human body above the chest, a medium scene shows the human body above the knees, a panorama shows the whole human body and its surroundings, and a distant scene shows the environment in which the photographed subject is located.
In the embodiments of the present application, the scene type corresponding to a video material may be regarded as the set of scene types of the multiple picture materials (frames) it contains. Typically, the electronic device may record, for each scene type, a start time and a duration, or a start time and a stop time, or a start time, a duration, and a stop time. The electronic device may use technologies such as face recognition, semantic recognition, salient feature recognition, and semantic segmentation, and classify their results to determine the scene type of the material, that is, the scene type of the picture material.
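For illustration only, the following is a minimal sketch (in Python, with names that are assumptions rather than taken from this application) of how such per-scene-type records might be kept for a video material:

```python
from dataclasses import dataclass

@dataclass
class SceneSegment:
    scene_type: str     # e.g. "face close-up", "person medium scene", "scenery distant view"
    start_time: float   # seconds from the beginning of the video material
    duration: float     # seconds

    @property
    def stop_time(self) -> float:
        # the stop time can always be derived from the start time and the duration
        return self.start_time + self.duration

# The scene type of a video material is then the set of scene types of its segments.
video_scene_types = [
    SceneSegment("person medium scene", start_time=0.0, duration=2.5),
    SceneSegment("face close-up", start_time=2.5, duration=1.8),
]
```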
Next, a specific implementation manner of the electronic device to determine the scene type of any one material is described with reference to the embodiment.
A. Face close-up and face close scene
The electronic device determines a face recognition frame of any one material based on a face recognition technology.
And when the area of the face recognition frame is larger than the threshold A1, the electronic equipment judges that the scene of the material is a face close-up.
When the area of the face recognition frame is larger than the threshold A2 and smaller than the threshold A1, the electronic device judges that the scene of the material is a face close scene.
The specific values of the threshold A1 and the threshold A2 may be set according to factors such as experience values and face recognition technology.
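As an illustration of the two threshold rules above, the following minimal sketch (with assumed function and parameter names; A1 and A2 are simply passed in as tuning values) shows how the face recognition frame area could be mapped to a scene type:

```python
from typing import Optional

def classify_face_scene(face_frame_area: float, a1: float, a2: float) -> Optional[str]:
    """Apply the two face threshold rules; A1 and A2 (A1 > A2) are tuning values."""
    if face_frame_area > a1:
        return "face close-up"
    if a2 < face_frame_area < a1:
        return "face close scene"
    return None  # this rule does not decide the scene type
```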
B. Person close-up and person close scene
The electronic device performs face recognition on any one material based on a face recognition technology.
When the recognition result indicates that no face exists and the semantic style of the person exists (such as the existence of a side face/back image of the person in the material), the electronic device can obtain the person recognition frame by using the threshold value of the area of the head.
When the area of the person recognition frame is larger than the threshold B1, the electronic device judges that the scene of the material is a person close-up.
When the area of the person identification frame is larger than the threshold B2 and smaller than the threshold B1, the electronic equipment judges that the scene of the material is a person close scene.
The specific values of the threshold B1 and the threshold B2 may be set according to factors such as empirical values.
C. Food close-up and food close scene
The electronic device determines the semantic recognition result and the salient feature recognition result of any one material based on the semantic segmentation recognition technology and the salient feature recognition technology.
When the semantic recognition result shows that the area of the food is larger than the threshold C1, the salient feature result shows that the area of the salient region is larger than the threshold C2, and the food region overlaps the salient region, the electronic device judges that the scene of the material is a food close-up.
When the semantic recognition result shows that the area of the food is larger than the threshold C1 and the salient feature result shows that the area of the salient region is smaller than the threshold C2, the electronic device judges that the scene of the material is a food close scene.
The specific values of the threshold C1 and the threshold C2 may be set according to factors such as empirical values.
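The following minimal sketch illustrates the food rules above, under the assumption that the semantic segmentation and salient feature steps each return a pixel region so that the overlap condition can be checked directly; C1, C2 and all names are illustrative only:

```python
from typing import Optional, Set, Tuple

Pixel = Tuple[int, int]

def classify_food_scene(food_region: Set[Pixel], salient_region: Set[Pixel],
                        c1: float, c2: float) -> Optional[str]:
    """Food close-up / food close scene rules; C1 and C2 are tuning values."""
    food_area = len(food_region)
    salient_area = len(salient_region)
    overlaps = bool(food_region & salient_region)  # the food region overlaps the salient region
    if food_area > c1 and salient_area > c2 and overlaps:
        return "food close-up"
    if food_area > c1 and salient_area < c2:
        return "food close scene"
    return None
```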
D. Non-person large aperture close-up
When it is detected that any one material is a photo taken in large aperture mode, or that the material has a heavily defocused (blurred) background, the electronic device judges that the scene of the material is a non-person large aperture close-up.
E. Salient flower close scene and salient pet close scene
The electronic device determines the semantic recognition result and the salient feature recognition result of any one material based on the semantic segmentation recognition technology and the salient feature recognition technology.
When the semantic recognition result shows that the area of the flower is larger than the threshold D1, the salient feature result shows that the area of the salient region is larger than the threshold D2, and the flower region overlaps the salient region, the electronic device judges that the scene of the material is a salient flower close scene.
The specific values of the threshold D1 and the threshold D2 may be set according to factors such as empirical values.
When the semantic recognition result shows that the area of the pet is larger than the threshold E1, the salient feature result shows that the area of the salient region is larger than the threshold E2, and the pet region overlaps the salient region, the electronic device judges that the scene of the material is a salient pet close scene.
The specific values of the threshold E1 and the threshold E2 may be set according to factors such as empirical values.
F. Person medium scene
The electronic device performs face recognition on any one material based on a face recognition technology.
When the recognition result shows that there is no face and no result meeting the person close-scene conditions, or shows that a complete person fully enters the frame (the torso is away from the frame edges and the face or head is smaller than a threshold), the electronic device judges that the scene of the material is a person medium scene.
G. Salient distant view
The electronic device determines a salient feature recognition result of any one material based on a salient feature recognition technology.
When a saliency result exists and the saliency result shows that the area of the salient region is smaller than the threshold F (for example, the material is a picture of a camel in the desert, where the camel is the salient region), the electronic device judges that the scene of the material is a salient distant view.
H. Scenery distant view
The electronic device determines a picture segmentation result of any one material based on a semantic segmentation technology.
When the picture segmentation result shows that a preset target occupies an area of the material larger than the threshold G, the electronic device judges that the scene of the material is a scenery distant view.
The threshold G may be set to be equal to or greater than 90%, and specific values of the threshold G are not limited in the embodiment of the present application. The preset target may be a landscape feature such as sea, sky, mountain, etc.
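A minimal sketch of the scenery rule above, assuming the semantic segmentation step reports, for each class label, the fraction of the picture it covers; the preset target set and the threshold value are illustrative:

```python
PRESET_TARGETS = {"sea", "sky", "mountain"}  # landscape classes named in the text

def is_scenery_distant(segmentation_ratios: dict, g: float = 0.9) -> bool:
    """segmentation_ratios maps a class label to the fraction of the picture it covers."""
    return any(label in PRESET_TARGETS and ratio > g
               for label, ratio in segmentation_ratios.items())

# e.g. a picture that is 93% sky counts as a scenery distant view
print(is_scenery_distant({"sky": 0.93, "person": 0.02}))  # True
```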
I. Others
When the electronic device cannot identify the scene of the material based on the above technologies, the electronic device judges that the scene of the material is a medium scene.
Among them, A to E belong to the close-up/close-scene range, and the order of determination is: person close-up = face close-up > food close-up > person close scene = face close scene > non-person large aperture close-up > food close scene = salient flower close scene = salient pet close scene. G and H are distant views, with determination order H > G. F and I are medium views.
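The following minimal sketch illustrates one possible way to combine rules A to I according to the priority order above. It assumes that the close-up/close-scene rules take precedence over the distant-view and medium-view rules, and rules listed as equal in the text are simply tried in a fixed order; the flag names are illustrative, not from this application:

```python
def classify_scene(results: dict) -> str:
    """results holds boolean flags produced by the individual rules A-H above."""
    ordered_rules = [
        # close-up / close-scene range (A-E), highest priority first
        "person close-up", "face close-up",
        "food close-up",
        "person close scene", "face close scene",
        "non-person large aperture close-up",
        "food close scene", "salient flower close scene", "salient pet close scene",
        # distant range: H before G
        "scenery distant view", "salient distant view",
        # medium range (F)
        "person medium scene",
    ]
    for scene_type in ordered_rules:
        if results.get(scene_type, False):
            return scene_type
    return "medium scene"  # rule I: the default when nothing else is recognized

print(classify_scene({"face close scene": True, "scenery distant view": True}))
# -> "face close scene": the close-scene rule outranks the distant-view rule in this sketch
```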
3. The moving mirror, also called a moving shot, mainly refers to the movement of the lens (camera movement). In the embodiments of the present application, the moving mirror is related to the type of the material, that is, the moving mirror corresponding to a picture material and the moving mirror corresponding to a video material may be the same or different.
4. A transition may be understood as the change or switch from one paragraph to another, or from one scene to another. Each paragraph (the smallest unit that makes up a video is a shot; a series of shots connected together forms a paragraph) has a single, relatively complete meaning, such as representing a course of action, a correlation, or an idea. A paragraph is a complete narrative level in the video, just like an act in a drama or a chapter in a novel, and the individual paragraphs linked together form the complete video. Thus, the paragraph is the most basic structural form of a video, and the structural hierarchy of the video's content is represented through paragraphs.
5. Scene type, moving mirror, speed, and transition set in a video template
A video template may be understood as the theme or style of a video. Types of video templates may include, but are not limited to: travel, parents, parties, sports, delicacies, scenery, retro, cities, night curtains, humanity and the like.
Parameters in any one video template may include, but are not limited to: scene type, moving mirror, speed, transition, and the like. Typically, for different video templates, at least one of the corresponding scene type, moving mirror, speed, and transition is different.
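For illustration, the following minimal sketch (with assumed field names and example values) shows a video template carrying the parameters listed above:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class VideoTemplate:
    theme: str                      # e.g. "parent-child", "travel", "food", "sports"
    scene_type_sequence: List[str]  # ordered scene types the output video should follow
    mirror: str = "diagonal pan"    # moving-mirror (camera movement) effect
    speed: float = 1.0              # playback speed applied to video material
    transition: str = "fade"        # transition between adjacent segments
    music: str = "song 1"           # music matched to the template

parent_child = VideoTemplate(
    theme="parent-child",
    scene_type_sequence=["person medium scene", "face close-up", "scenery distant view"],
)
```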
In the embodiments of the present application, the electronic device has the function of generating a video from the stored materials, so that a video is generated from one or more picture materials and/or video materials in the electronic device. In addition, the electronic device provides the user with multiple entry ways for generating a video, so that the user can generate a video quickly and in a timely manner, which improves convenience for the user.
The method for generating the video by the electronic device according to the embodiment of the present application will be described in detail with reference to the first, second and third modes by taking the gallery application of the electronic device as an example for generating the video. It should be noted that, the embodiments of the present application include, but are not limited to, gallery applications as an entry way of generating video, and include, but are not limited to, the three ways described above.
Mode one
Referring to fig. 3A-3F, fig. 3A-3F are schematic diagrams of a man-machine interface according to an embodiment of the present application. For convenience of explanation, in fig. 3A to 3F, an electronic device is taken as an example of a mobile phone to be schematically illustrated.
The handset may display a user interface 11 as exemplarily shown in fig. 3A. Wherein the user interface 11 may be a desktop Home screen, the user interface 11 may include, but is not limited to: status bars, navigation bars, calendar indicators, weather indicators, and a plurality of application icons, etc. The application icons may include: icon 301 of the gallery application, the application icons may also include: such as an icon for a video application, an icon for a music application, an icon for a cell phone manager application, an icon for a setup application, an icon for a mall application, an icon for a smart life application, an icon for a sports health application, an icon for a talk application, an icon for an instant messaging application, an icon for a browser application, an icon for a camera application, etc.
After detecting that the user performs an operation of opening the gallery application in the user interface 11 shown in fig. 3A (for example, clicking the icon 301 of the gallery application), the mobile phone may display the user interface 12 shown in fig. 3B, where the user interface 12 is used to display a page corresponding to the album category in the gallery application.
Among other things, the user interface 12 may include: the control 3021, the control 3021 is used for entering a display interface containing all picture materials and/or video materials in the mobile phone, and the control 3023, the control 3023 is used for entering a display interface corresponding to an album category in a gallery application.
In embodiments of the present application, the specific implementation of the user interface 12 may include a variety of types. For ease of illustration, in FIG. 3B, the user interface 12 is divided into two groupings.
The first grouping includes two parts. The title of the first grouping is illustrated in fig. 3B using the word "album" as an example.
The first section is used to provide a search box for a user to search for picture material and/or video material by keywords such as photos, characters, places, etc.
A control 3021 is included in the second portion, as well as controls for accessing a display interface containing only video material.
The second grouping displays pictures obtained by means of screenshots or certain applications, etc. The title of the second grouping is illustrated in fig. 3B by using the text "other album (3)" and a rounded rectangular frame as examples.
In addition, the user interface 12 further includes: control 3022, control 3024, and control 3025. The control 3022 is used for entering a display interface corresponding to the photo category in the gallery application. The control 3024 is used for entering a display interface corresponding to a time category in the gallery application. The control 3025 is used to enter a display interface corresponding to the discovery category in the gallery application.
Additionally, the user interface 12 may further include: controls for implementing functions in the user interface 12 such as deleting an existing group, changing the name of an existing group, and controls for adding a new group in the user interface 12.
After detecting that the user performs an operation such as clicking on the control 3021 in the user interface 12 shown in fig. 3B, the mobile phone may display the user interface 13 shown in fig. 3C, where the user interface 13 is a display interface of all the picture material and/or the video material in the mobile phone. Parameters such as the number of display of the picture materials, the display area of the picture materials, the display position of the picture materials, the display content of the video materials, the display number of the video materials, the display area of the video materials, the display position of the video materials, the sequence of the materials of each type and the like in the user interface 13 are not limited.
For convenience of illustration, in fig. 3C, the user interface 13 shows: video material 3031, picture material 3032, picture material 3033, video material 3034, picture material 3035, picture material 3036, picture material 3037, and video material 3038. For any video material, the electronic device may select an image displayed by any frame in the video material as a picture displayed by the electronic device to the user. Therefore, in fig. 3C, the frames displayed by the video material 3031, the video material 3034, and the video material 3038 are images displayed by any one frame in the respective video materials.
After detecting that the user performs an operation (such as a long press operation) for selecting a picture material and/or a video material in the user interface 13 shown in fig. 3C, the mobile phone may display the user interface 14 shown in fig. 3D, where the user interface 14 is used to display a display interface for selecting the picture material and/or the video material used for generating the video.
In embodiments of the present application, the specific implementation of the user interface 14 may include a variety of types. For ease of illustration, in FIG. 3D, user interface 14 includes user interface 13, and an editing interface overlaid on user interface 13.
For the picture material and/or video material not selected by the user (illustrated in fig. 3D by taking the picture materials/video materials other than video material 3031, picture material 3032, picture material 3033, video material 3034, picture material 3035, picture material 3036, picture material 3037, and video material 3038 as examples), a control for enlarging and displaying each picture material/video material may be displayed in the editing interface at the upper left corner of the picture material/video material (illustrated in fig. 3D by two diagonal, oppositely pointing arrows as an example), and a control for selecting the picture material/video material may be displayed at the lower right corner of the picture material/video material (illustrated in fig. 3D by a rounded rectangular box as an example).
For the picture material and/or video material selected by the user (illustrated in fig. 3D by taking video material 3031, picture material 3032, picture material 3033, video material 3034, picture material 3035, picture material 3036, picture material 3037, and video material 3038 as examples), a control for enlarging and displaying each picture material/video material may likewise be displayed in the editing interface at the upper left corner of the picture material/video material (illustrated in fig. 3D by two diagonal, oppositely pointing arrows as an example), and a control for selecting the picture material/video material is displayed at the lower right corner of the picture material/video material (illustrated in fig. 3D by a rounded rectangular frame as an example).
And, the editing interface may include: the control 304, the control 304 is used for authoring the picture material and/or the video material selected by the user. Additionally, the editing interface may further include: controls for performing operations such as sharing, selecting, deleting, and more on the picture material and/or the video material that have been selected by the user are not limited in this embodiment of the present application.
Upon detecting a user performing an operation such as clicking on control 304 in user interface 14 shown in fig. 3D, the handset may display window 305 (illustrated in fig. 3E using the text "movie", the text "puzzle", and a rounded rectangular box for example) as exemplarily shown in fig. 3E on user interface 14.
When the user selects the picture material, the video material, or the picture material and the video material, if the user performs an operation such as clicking on the text "movie" in the window 305, the mobile phone may display a user interface for editing the new video.
When the user selects the picture material, if the user performs an operation such as clicking on the text "jigsaw" input in the window 305, the mobile phone may display a user interface for editing the new picture.
When the user has selected video material, or picture material and video material, if the user performs an operation such as clicking the text "jigsaw" in the window 305, the mobile phone cannot display a user interface for editing a new picture, and may display the text "jigsaw does not support video" to prompt the user to deselect the video material.
After detecting that the user performs an operation such as clicking the text "movie" in the window 305 shown in fig. 3E, the mobile phone may determine that the type of the video template is a parent-child type based on the video material 3031, the picture material 3032, the picture material 3033, the video material 3034, the picture material 3035, the picture material 3036, and the video material 3038 selected by the user, so that the video is generated by the user based on the video template of the parent-child type, and the user interface 15 exemplarily shown in fig. 3F may be displayed, where the user interface 15 is used to display the video generated by the mobile phone.
Wherein the segments in the generated video correspond to segments in the parent-child type video templates. The video material 3031, the picture material 3032, the picture material 3033, the video material 3034, the picture material 3035, the picture material 3036, the picture material 3037 and the video material 3038 appear at least once in the generated video, and any two adjacent segments in the generated video do not place the same material.
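As an illustration of these two constraints (every selected material appears at least once, and no two adjacent segments hold the same material), the following minimal sketch uses a simple round-robin assignment; it assumes the template has at least as many segments as there are materials and ignores the scene-type matching that the actual method performs:

```python
from typing import List

def assign_materials(materials: List[str], num_segments: int) -> List[str]:
    """Cycle through the selected materials: when num_segments >= len(materials),
    every material appears at least once, and (for more than one material)
    adjacent segments never hold the same material."""
    return [materials[i % len(materials)] for i in range(num_segments)]

print(assign_materials(["video 3031", "picture 3032", "picture 3033"], 7))
```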
In summary, the electronic device may automatically generate a video based on picture elements and/or video material selected by the user in the gallery application. In addition, the user interface 15 is also used to display controls for editing the generated video.
Among other things, the user interface 15 may include: preview area 306, progress bar 307, control 3081, control 3082, control 3083, control 3084, control 3085, control 30811, control 30812, control 30813, control 30814, and control 309.
And a preview area 306 for displaying the generated video, so that the user can watch and adjust the video conveniently.
A progress bar 307 for indicating the duration of the video under any one of the video templates (fig. 3F uses "00:00" for example to indicate the start time of the video, "00:32" for example to indicate the end time of the video, and a slide bar for example to indicate the progress of the video).
Control 3081 is used to provide different types of video templates. Control 30811 is used to represent a parent-child type video template (fig. 3F illustrates the parent-child type video template with the text "parent" and a bolded rectangular box as an example), control 30812 is used to represent a travel type video template (fig. 3F illustrates the travel type video template with the text "travel" and a normal rectangular box as an example), control 30813 is used to represent a food type video template (fig. 3F illustrates the food type with the text "food" and a normal rectangular box as an example), and control 30814 is used to represent a sports type video template (fig. 3F illustrates the sports type with the text "sports" and a normal rectangular box as an example). Thus, when the material is identified as matching a video template of a certain type, the electronic device can also provide the user with video templates of other types besides that type, which helps to meet the various requirements of the user.
And the control 3082 is used for editing the picture of the video, changing the duration of the video, adding new pictures and/or videos into the video, deleting the pictures and/or videos in the video and the like. Therefore, the video with the corresponding length and/or the corresponding material is generated based on the user requirement, and the flexibility of video generation is considered.
Control 3083, for altering the music of the video template match.
Control 3084, a filter for changing the video.
Control 3085 is used to add text to the video, such as adding text at the beginning and end of a film.
And a control 309, configured to store the generated video, and facilitate use or viewing of the stored video.
Based on the above description, the electronic device can display the generated video to the user through the preview area 306.
In addition, because the type of the video template determined by the electronic device is the parent-child type, the electronic device displays the rounded rectangular frame of the parent-child template in the control 3081 in bold, so that the user can be informed conveniently and quickly.
And, based on other controls in the user interface 15, the user may perform operations such as selecting a type of video template, adjusting a frame of the video, adjusting a duration of the video, adding new picture material and/or video material in the video, selecting music for matching the video, selecting a filter for the video, adding text in the video, etc., so that the electronic device can determine the video template meeting the user's wish to generate a corresponding video.
For example, upon detecting that the user performs an operation such as clicking on control 3081 in user interface 15 shown in fig. 3F, the handset may display user interface 15 shown in fig. 3F as an example, so that the user may select one of controls 30811, 30812, and 30813.
As another example, after detecting that the user performs an operation such as clicking on the control 3082 in the user interface 15 shown in fig. 3F, the mobile phone may display the user interface 21 shown in fig. 3G, where the user interface 21 is used to display factors such as a frame, a duration, and materials included in playing the edited video.
The user interface 21 may include: a video play area 3171, a control 3172, a control 3173, a control 3174, a control 3175, a material play area 3176, and a control 3177. The video play area 3171 is used for displaying the playback effect of the video to be generated. The control 3172 is used to enter a user interface that alters the frame of the video, where the frame of the video may be 16:9, 1:1, 9:16, or the like. The control 3173 is used to enter a user interface that alters the duration of the video. The control 3174 is used to enter a user interface that adds new material to the video. The control 3175 is used to access the material already in the video. The material play area 3176 is used to show the playback effect of each material in the video. The control 3177 is used to exit the user interface 21.
As another example, after detecting that the user performs an operation such as clicking on the control 3083 in the user interface 15 shown in fig. 3F, the mobile phone may display the user interface 22 shown in fig. 3H, where the user interface 22 is used to display music corresponding to the edited video.
The user interface 22 may include: video play area 3181, progress bar 3182, control 3183, control 3184, control 3185. The video playing area 3181 is used for displaying the effect played by the video to be generated. The progress bar 3182 is used to display or change the play progress of the video to be generated. The control 3183 is used to present various types of video templates, such as the types displayed with the words "parent", "travel", "delicates", "sports", and the like. Control 3184 is used to present corresponding music under a certain type of video template, such as "Song 1", "Song 2", "Song 3" are displayed. Control 3185 is used to exit user interface 22.
As another example, after detecting that the user performs an operation such as clicking on the control 3084 in the user interface 15 shown in fig. 3F, the mobile phone may display the user interface 23 illustrated in the example of fig. 3I, where the user interface 23 is used to display the filters for editing the video. For example, in fig. 3H, the text "parent-child" is bolded and the display column corresponding to the text "song 1" has a check mark, which indicates that the mobile phone has currently selected a parent-child type video template, and the music corresponding to the video template is song 1. It should be noted that the rounded rectangular boxes respectively located before the words "song 1", "song 2", and "song 3" are used to display images of the corresponding songs. The embodiments of the present application do not limit the specific display content of these images. For ease of illustration, the embodiments of the present application are illustrated with filled white as an example.
The user interface 23 may include: a video play area 3191, a progress bar 3192, a control 3193, and a control 3194. The video play area 3191 is used for displaying the playback effect of the video to be generated. The progress bar 3192 is used to display or change the playback progress of the video to be generated. The control 3193 is used to show the individual filters, such as those displayed with the words "filter 1", "filter 2", "filter 3", "filter 4", "filter 5", and so on. With different filters, the video has different display effects, such as softening, black-and-white, whitening, color deepening, and the like. The control 3194 is used to exit the user interface 23. For example, in fig. 3I, the bolded display of the words "filter 1" may indicate that the filter currently selected for the video by the mobile phone is filter 1.
As another example, after detecting that the user performs an operation such as clicking on the control 3085 in the user interface 15 shown in fig. 3F, the mobile phone may display the user interface 24 shown in the example of fig. 3J, where the user interface 24 is used to display the titles (text) that can be added when editing the video.
The user interface 24 may include: a video play area 3201, a control 3202, a control 3203, and a control 3204. The video play area 3201 is used for displaying the playback effect of the video to be generated. The control 3202 is used to select whether the title is added at the head or at the tail of the film. The control 3203 is used to show the various titles, such as those displayed with the words "title 1", "title 2", "title 3", "title 4", "title 5", and the like. For any two different titles (e.g., title 1 and title 2), if the contents of title 1 and title 2 are the same, title 1 and title 2 may be displayed in any one picture of the video with different playback effects. A playback effect may be understood as the effect formed by changing parameters such as the font, weight, and color of the characters in the title. For example, title 1 may be the words "weekend hours" in regular script, while title 2 is the words "weekend hours" in Song typeface. If the contents of title 1 and title 2 are different, title 1 and title 2 may be displayed in any one picture of the video with the same or different playback effects. For example, title 1 may be the words "weekend hours" and title 2 may be the words "beautiful day". The control 3204 is used to exit the user interface 24. For example, in fig. 3J, the bolded display of the words "title" and "title 1" may indicate that the mobile phone currently chooses to add title 1 at the head of the video.
In sum, the electronic equipment can provide the function of manually editing the generated video for the user, is convenient for the user to configure parameters such as duration, frame, video template, contained materials, filter and the like of the video based on own will, and enriches the style of the video.
In addition, the handset may save the video upon detecting that the user has performed an operation such as clicking on control 309 in user interface 15 shown in fig. 3F.
Mode two
Referring to fig. 3A-3B, fig. 3K-3N, and fig. 3F, fig. 3K-3N are schematic diagrams of a man-machine interaction interface according to an embodiment of the present application.
In some embodiments, after detecting that the user performs an operation such as clicking on control 3025 in user interface 12 shown in fig. 3K, the handset may display user interface 16 shown in fig. 3L as an example, where user interface 16 is used to display a page corresponding to the found category in the gallery application. In fig. 3L, the control 3023 changes from the bolded display to the normal display, and the control 3025 changes from the normal display to the bolded display.
In other embodiments, after detecting the operation of opening the gallery application indicated by the user (such as clicking on the icon 301 of the gallery application), the mobile phone may display the user interface 16 shown in fig. 3L, where the user interface 16 is used to display a page corresponding to the found category in the gallery application. In fig. 3L, control 3025 is shown bolded.
Among other things, the user interface 16 may include: the control 312, the control 312 is used for entering a display page of the picture material and/or the video material stored in the mobile phone.
In embodiments of the present application, the specific implementation of the user interface 16 may include a variety of types. For ease of illustration, in FIG. 3L, the user interface 16 is divided into five sections.
The first section includes a search box for providing a user with a way to search for picture material and/or video material by keywords such as photos, characters, places, etc.
The second section includes a control for entering the creation of a new video using the template approach (illustrated in fig. 3L using the text "template creation" and an icon as an example), and the control 312 for entering the creation of a new video in a free-creation manner (illustrated in fig. 3L using corresponding text and an icon as an example).
The third section displays a picture divided by portrait. The title of the third section is illustrated in fig. 3L by taking the words "portrait" and the word "more" as examples.
The fourth part includes pictures and/or videos divided according to location, such as the pictures and/or videos of "Shenzhen City", the pictures and/or videos of "Guilin City", and the pictures and/or videos of another city in the location grouping shown in fig. 3L. The title of the fourth section is illustrated in fig. 3L by taking the word "place" and the word "more" as examples.
In the fifth section, a control 3022, a control 3023, a control 3024, and a control 3025 are displayed.
In addition, the title of the user interface 16 is illustrated in FIG. 3L using the word "found" as an example. Also included in the user interface 16 may be: controls for implementing editing user interface 16, such as adding new groupings in user interface 16 or deleting existing groupings (fig. 3L is illustrated with three black dots as an example).
Upon detecting that the user has performed an operation such as clicking on control 312 in user interface 16 shown in fig. 3L, the handset may display user interface 17 shown in the example of fig. 3M, with user interface 17 being configured to display picture material and/or video material that may be used to generate new video in a free authoring manner.
In the present embodiment, the specific implementation of the user interface 17 may include a plurality of types. For ease of illustration, in FIG. 3M, user interface 17 includes a display area 313, and a window 314 overlaid on display area 313.
The display area 313 includes picture material and/or video material, and a control for enlarging and displaying the picture material/video material is displayed at the upper left corner of each picture material/video material (illustrated by using two diagonal and opposite-pointing arrows in fig. 3M as an example), and a control for selecting the picture material/video material is displayed at the lower right corner of each picture material/video material (illustrated by using a rounded rectangular box in fig. 3M as an example).
Parameters such as the number of display of the picture materials in the display area 313, the display area of the picture materials, the display position of the picture materials, the display content of the video materials, the display number of the video materials, the display area of the video materials, the display position of the video materials, and the sequence of the materials of each type are not limited. For convenience of explanation, in fig. 3M, the display area 313 shows: the video material 3031, the picture material 3032, the picture material 3033, the video material 3034, the picture material 3035, the picture material 3036, the picture material 3037 and the video material 3038 can be referred to in the first embodiment, and the description is omitted here.
Window 314 may include: control 3141 (illustrated in fig. 3M using an icon "0/50" as an example, where "0" indicates that any one of the picture materials/video materials is not selected, and "50" indicates that there are 50 of the picture materials/video materials in the mobile phone), control 3141 is used to indicate the total number of the picture materials/video materials stored in the mobile phone and indicate the number of the picture materials/video materials currently selected by the user, and control 3142 is used to enter into a display interface where new video starts to be made, and preview area 3143, preview area 3143 is used to display the picture materials and/or video materials selected by the user.
After detecting that the user performs an operation of selecting a picture material/video material in the display area 313 shown in fig. 3M, the mobile phone may display a display change occurring based on the user operation in the user interface 17 exemplarily shown in fig. 3N.
For the pictures and/or videos not selected by the user (illustrated in fig. 3N by taking the picture materials/video materials other than video material 3031, picture material 3032, picture material 3033, video material 3034, picture material 3035, picture material 3036, picture material 3037, and video material 3038 as examples), the other picture materials/video materials in the display area of the user interface 17 keep the same display.
For the picture material and/or video material selected by the user (illustrated in fig. 3N by taking video material 3031, picture material 3032, picture material 3033, video material 3034, picture material 3035, picture material 3036, picture material 3037, and video material 3038 as examples), the control for selecting the picture material/video material, located at the lower right corner of each selected picture material/video material in the user interface 17, changes its display (illustrated in fig. 3N by a mark added inside the rounded rectangle as an example).
The control 3141 in the user interface 17 displays that the number of picture materials/video materials selected by the user has changed (fig. 3N illustrates by taking the icon "8/50" as an example, where "8" indicates that the user selects eight picture materials/video materials, and "50" indicates that there are 50 picture materials/video materials in the mobile phone that can generate new videos in a freely authored manner).
The preview area 3143 in the user interface 17 displays a change in the selected picture materials/video materials (illustrated in fig. 3N by displaying video material 3031, picture material 3032, picture material 3033, and video material 3034, with picture material 3035, picture material 3036, picture material 3037, and video material 3038 displayable by dragging the slide bar, as an example).
After detecting that the user performs an operation of generating a new video in the user interface 17 shown in fig. 3N (e.g., clicks a control 3142 in the user interface 17), the mobile phone may determine that the type of the video template is a parent-child type based on the video material 3031, the picture material 3032, the picture material 3033, the video material 3034, the picture material 3035, the picture material 3036, the picture material 3037, and the video material 3038 selected by the user, so that the video is generated by the video material 3031, the picture material 3032, the picture material 3033, the video material 3034, the picture material 3035, the picture material 3036, the picture material 3037, and the video material 3038 based on the video template of the parent-child type, and may display the user interface 15 exemplarily shown in fig. 3F. Wherein reference is made herein to the description of the generated video in mode 1 for a specific implementation of the generated video.
In summary, the electronic device may automatically generate a video based on picture elements and/or video material selected by the user in the gallery application.
The specific implementation of the user interface 15 may be referred to in the foregoing description, and will not be described herein. Thus, the electronic device can display the generated video to the user through the preview area 306.
In addition, the user interface 15 is also used to display controls for editing the generated video. Therefore, the electronic equipment can provide the function of manually editing the generated video for the user, is convenient for the user to configure parameters such as duration, frame, video template, contained materials, filter and the like of the video based on own will, and enriches the style of the video. In addition, the handset may save the video upon detecting that the user has performed an operation such as clicking on control 309 in user interface 15 shown in fig. 3F.
Mode three
Referring to fig. 3A-3B, fig. 3O-3Q, fig. 3T, fig. 3F, fig. 3O-3Q, fig. 3T are schematic diagrams of a man-machine interaction interface according to an embodiment of the present application.
In some embodiments, after detecting that the user performs an operation such as clicking on control 3024 in user interface 12 shown in fig. 3O, the handset may display user interface 18 shown in fig. 3P as an example, where user interface 18 is used to display a page corresponding to a time of day category in a gallery application. In fig. 3P, the control 3023 changes from the bolded display to the normal display, and the control 3024 changes from the normal display to the bolded display.
In other embodiments, after detecting the operation of opening the gallery application indicated by the user (such as clicking on the icon 301 of the gallery application), the mobile phone may display the user interface 18 shown in fig. 3P, where the user interface 18 is used to display a page corresponding to a time category in the gallery application. In fig. 3P, control 3024 is shown bolded.
Among other things, the user interface 18 may include: control 3151 is used to enter a display page where new video is authored in the manner provided by embodiments of the present application.
In embodiments of the present application, the specific implementation of the user interface 18 may include a variety of types. For ease of illustration, in FIG. 3P, the user interface 18 is divided into three sections.
The first section includes a search box for providing a user with a way to search for picture material and/or video material by keywords such as photos, characters, places, etc.
The second section includes a control 3152 (illustrated in fig. 3P by way of example with the text "weekend hours", the date "9 months in 2020", and a piece of picture material), the control 3152 being configured to display video 1 generated from picture material and/or video material of a period of time in the mobile phone; a control 3153 (illustrated in fig. 3P by way of example with the text "weekend hours", the date "5 months in 2020", and a piece of picture material) for displaying video 2 generated from picture material and/or video material of a period of time in the mobile phone; and a control for displaying video 3 generated from picture material and/or video material of a period of time in the mobile phone (illustrated in fig. 3P by way of example with the text "weekend hours", the date "4 months in 2020", and a piece of picture material). It should be noted that the picture materials/video materials in video 1, video 2, and video 3 may or may not overlap, which is not limited in the embodiments of the present application.
It should be noted that, video 1, video 2 and video 3 are all generated by the electronic device according to the scheme provided in the application.
In the third section, controls 3022, 3023, 3024, and 3025 are displayed.
The title of the user interface 18 is illustrated in fig. 3P by way of example with the word "time".
In some embodiments, upon detecting that the user has performed an operation such as clicking on the control 3151 in the user interface 18 shown in fig. 3P, the mobile phone may display a window 316 on the user interface 18, as exemplarily shown in fig. 3Q, where the window 316 is used to provide options for generating a movie or a jigsaw from picture material and/or video material.
After detecting that the user has performed an operation such as clicking on the text "compose movie" in window 316 shown in fig. 3Q, the handset may display user interface 17 shown in the example of fig. 3M. The specific implementation of the user interface 17 may be referred to in the foregoing description, and will not be described herein.
After detecting that the user performs an operation of selecting a picture material/video material in the display area 313 shown in fig. 3M, the mobile phone may display a display change occurring based on the user operation in the user interface 17 exemplarily shown in fig. 3N. The specific implementation of the display change of the user interface 17 may be referred to in the foregoing description, and will not be described herein.
After detecting that the user performs an operation of generating a new video in the user interface 17 shown in fig. 3N (e.g., clicks a control 3142 in the user interface 17), the mobile phone may determine that the type of the video template is a parent-child type based on the video material 3031, the picture material 3032, the picture material 3033, the video material 3034, the picture material 3035, the picture material 3036, the picture material 3037, and the video material 3038 selected by the user, so that the video is generated by the video material 3031, the picture material 3032, the picture material 3033, the video material 3034, the picture material 3035, the picture material 3036, the picture material 3037, and the video material 3038 based on the video template of the parent-child type, and may display the user interface 15 exemplarily shown in fig. 3F. Wherein reference is made herein to the description of the generated video in mode 1 for a specific implementation of the generated video.
In other embodiments, the handset may display the user interface 19 illustrated by example in FIG. 3T upon detecting that the user has performed an operation such as clicking on control 3152 in the user interface 18 illustrated in FIG. 3P.
The user interface 19 may include a control 317, where the control 317 is used to enter an interface where the video 1 can be played, and the video 1 is a video generated based on the scheme of the present application.
Upon detecting that the user has performed an operation such as clicking on the control 317 in the user interface 19 shown in fig. 3T, the mobile phone may display the user interface 15 shown in the example of fig. 3F.
In summary, the electronic device may automatically generate a video based on picture elements and/or video material selected by the user in the gallery application.
The specific implementation of the user interface 15 may be referred to in the foregoing description, and will not be described herein. Thus, the electronic device can display the generated video to the user through the preview area 306.
In addition, the user interface 15 is also used to display controls for editing the generated video. Therefore, the electronic equipment can provide the function of manually editing the generated video for the user, is convenient for the user to configure parameters such as duration, frame, video template, contained materials, filter and the like of the video based on own will, and enriches the style of the video.
In addition, the handset may save the video upon detecting that the user has performed an operation such as clicking on control 309 in user interface 15 shown in fig. 3F.
It should be noted that, parameters of the user interface, such as the control size, the control position, the display content, and the jump mode, include, but are not limited to, the foregoing descriptions.
Based on the description of the first mode, the second mode and the third mode, the mobile phone can store the generated video in a gallery application.
Referring to fig. 3A and fig. 3R-fig. 3S, fig. 3R-fig. 3S are schematic diagrams of a man-machine interaction interface according to an embodiment of the present application.
After detecting that the user performs an operation of opening the gallery application in the user interface 11 shown in fig. 3A (e.g., clicking on the icon 301 of the gallery application), the mobile phone may display the user interface 12 'shown in fig. 3R as an example, where the user interface 12' is used to display a page of an album in the gallery application.
The user interface 12' is substantially the same as the interface layout of the user interface 12 shown in fig. 3B, and for the specific implementation reference may be made to the description of the user interface 12 shown in fig. 3B in the first embodiment, which is not repeated here. Unlike the user interface 12 shown in fig. 3B, the number of videos stored in the mobile phone has increased by 1; therefore, the user interface 12' in fig. 3R shows that the number of all photos has increased from "182" to "183" and the number of videos has increased from "49" to "50".
After detecting that the user performs an operation such as clicking on the control 3021 in the user interface 12' shown in fig. 3R, the mobile phone may display the user interface 13' exemplarily shown in fig. 3S, the user interface 13' being a display interface for pictures and videos in the mobile phone.
The user interface 13' is substantially the same as the user interface 13 shown in fig. 3C, and for the specific implementation reference may be made to the description of the user interface 13 in fig. 3C in the first embodiment, which is not repeated here. Unlike the user interface 13 shown in fig. 3C, each picture/video displayed in the user interface 13' is shifted back by one position, so that, in time order from nearest to farthest from the current time, the first material displayed in the user interface 13' in fig. 3S is the newly generated video 3039.
The handset may play the video 3039 upon detecting that the user has performed an operation such as clicking on the video 3039 in the user interface 13' shown in fig. 3S.
In the embodiments of the present application, each video template may correspond to a piece of music. Typically, the music corresponding to different video templates is different. By default, the electronic device may keep the music corresponding to each video template unchanged, or it may change the music corresponding to a video template based on the user's selection; this can be set flexibly according to the actual situation. The music may be preset in the electronic device or manually added by the user, which is not limited in the embodiments of the present application.
In one aspect, a video template is also associated with camera movement, speed and transition. Generally, whether or not the music corresponding to the video templates is the same, different video templates differ in at least one of the corresponding camera movement, speed and transition.
For the music corresponding to any video template, each piece of music can be matched with the set camera movement, speed and transition. The camera movement and the transition may be related to the type of material: the camera movement applied to video material and the camera movement applied to picture material may be the same or different, and likewise the transition applied to video material and the transition applied to picture material may be the same or different. In addition, a playing effect corresponding to the speed can generally be set for video material.
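To make the relationship between a template and its per-segment settings concrete, the following Python sketch models a template as a list of segment specifications. The field names and example values (SegmentSpec, VideoTemplate, the effect strings) are assumptions chosen for illustration and do not reflect the actual data format used by the electronic device.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class SegmentSpec:
    scene_type: str           # "A" (near scene), "B" (middle scene), "C" (far scene)
    duration_beats: int       # segment length as an integer number of beats
    video_transition: str     # transition used when the slot holds video material
    picture_transition: str   # transition used when the slot holds picture material
    camera_movement: str      # e.g. "diagonal_pan", "zoom_in", "top_to_bottom"
    speed: Optional[float] = None  # playback-rate multiplier, meaningful for video material only

@dataclass
class VideoTemplate:
    name: str                 # e.g. "parent-child", "travel"
    music: str                # music track associated with the template
    segments: List[SegmentSpec]

# Hypothetical two-segment excerpt of a parent-child style template
parent_child = VideoTemplate(
    name="parent-child",
    music="template_music_01.mp3",
    segments=[
        SegmentSpec("C", 4, "white fade-in", "white fade-in", "diagonal_pan"),
        SegmentSpec("B", 2, "fast pan down", "blurred push", "zoom_in", speed=1.0),
    ],
)
```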
Referring to fig. 4A-4J, fig. 4A-4J show the effect of the picture material 3033 after camera movement is applied.
The mobile phone stores the picture material 3033 shown in fig. 4A, where the picture material 3033 may be referred to in the description of the embodiment of fig. 3C, which is not described herein.
When the mobile phone displays the picture material 3033 using a camera-movement effect of moving diagonally, the mobile phone may change from displaying the interface 11 exemplarily shown in fig. 4B to displaying the interface 12 exemplarily shown in fig. 4C, where the interface 11 shows the area a1 of the picture material 3033, the interface 12 shows the area a2 of the picture material 3033, and the area a1 and the area a2 are located at different positions of the picture material 3033.
In addition to the camera movement of moving diagonally, the electronic device may also use camera movements of moving upward, leftward, rightward, and the like, which is not limited in the embodiments of the present application.
When the mobile phone displays the picture material 3033 using a zoom-in camera-movement effect, the mobile phone may change from displaying the interface 11 exemplarily shown in fig. 4B to displaying the interface 13 exemplarily shown in fig. 4D, where the interface 11 shows the area a1 of the picture material 3033 and the interface 13 shows an enlarged view of the area a3 of the picture material 3033.
In addition to the zoom-in camera movement, the electronic device may also use a zoom-out camera movement, which is not limited in the embodiments of the present application.
In addition, when the picture material 20 is a vertical (portrait) picture as exemplarily shown in fig. 4E and the generated video uses a landscape (banner) frame, the electronic device may display the picture material 20 using a camera movement that moves from top to bottom. For example, the mobile phone may change from displaying the interface 21 exemplarily shown in fig. 4F to displaying the interface 22 exemplarily shown in fig. 4G, where the interface 21 shows the region b1 of the picture material 20, the interface 22 shows the region b2 of the picture material 20, and the region b1 and the region b2 are located at different positions of the picture material 20. Optionally, the shape of the region formed by the region b1 and the region b2 may be set to be square. If the picture material 20 contains a person, a face or the like, the electronic device may make the region formed by the region b1 and the region b2 cover as much of the person or face in the material as possible.
When the picture material 30 is a landscape (banner) picture as exemplarily shown in fig. 4H and the generated video uses a vertical (portrait) frame, the electronic device may display the picture material 30 using a camera movement that moves from left to right. For example, the mobile phone may change from displaying the interface 31 exemplarily shown in fig. 4I to displaying the interface 32 exemplarily shown in fig. 4J, where the interface 31 shows the region c1 of the picture material 30, the interface 32 shows the region c2 of the picture material 30, and the region c1 and the region c2 are located at different positions of the picture material 30. Optionally, the shape of the region formed by the region c1 and the region c2 may be set to be square. If the picture material 30 contains a person, a face or the like, the electronic device may make the region formed by the region c1 and the region c2 cover as much of the person or face in the material as possible.
This helps to maximize the displayed area of the materials in the video generated by the electronic device, enriches the content of the video, and preserves the sense of motion and composition that the video conveys.
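The top-to-bottom pan described above can be sketched as a sequence of crop windows swept over the picture. The following Python sketch is illustrative only: the function name, the 30-step default and the face-keeping heuristic are assumptions, not the device's actual cropping algorithm.

```python
def pan_crop_windows(img_w, img_h, frame_aspect, steps=30, face_box=None):
    """Crop windows (x, y, w, h) for a top-to-bottom pan over a vertical
    picture that is being placed into a landscape (banner) video frame."""
    crop_w = img_w
    crop_h = min(img_h, int(round(img_w / frame_aspect)))
    y_lo, y_hi = 0, img_h - crop_h              # full available pan range
    if face_box is not None:
        _, fy, _, fh = face_box
        # Keep the detected face inside the window for the whole pan
        # (an illustrative heuristic, applied only when the face fits).
        lo, hi = max(y_lo, fy + fh - crop_h), min(y_hi, fy)
        if lo <= hi:
            y_lo, y_hi = lo, hi
    ys = [y_lo + (y_hi - y_lo) * i / max(steps - 1, 1) for i in range(steps)]
    return [(0, int(round(y)), crop_w, crop_h) for y in ys]

# A 1080x1920 portrait picture shown in a 16:9 banner video, five key windows
print(pan_crop_windows(1080, 1920, 16 / 9, steps=5))
```

A left-to-right pan for a banner picture placed in a portrait frame would be the symmetric case, sweeping x instead of y.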
Referring to fig. 5, fig. 5 shows an effect of different speeds of the video material 3038. The video material 3038 may refer to the embodiment of fig. 3C, which is not described herein.
As shown in fig. 5, assume that, in the video generated based on the materials, the electronic device plays the video material 3038 both in the time period t0 to t1 and in the time period t2 to t3, and that the time period t2 to t3 is three times as long as the time period t0 to t1. The speed at which the electronic device plays the video material 3038 in the time period t0 to t1 is then three times the speed at which it plays the video material 3038 in the time period t2 to t3.
It should be noted that, in addition to triple speed, the speed may be any ratio, which is not limited in the embodiments of the present application.
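The relation between slot length and playback speed amounts to dividing the amount of source footage shown by the time available for it. A minimal Python sketch, assuming the same stretch of footage is shown in both slots (the function name and durations are illustrative):

```python
def playback_rate(footage_seconds, slot_seconds):
    # Amount of source footage shown divided by the time available in the slot.
    return footage_seconds / slot_seconds

# The same 6-second stretch of video material shown in a 2-second slot (t0..t1)
# and in a 6-second slot (t2..t3): 3.0x speed versus 1.0x speed, as in Fig. 5.
print(playback_rate(6, 2), playback_rate(6, 6))
```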
Referring to fig. 6, fig. 6 shows a schematic diagram of the effect of transition between the picture material 3033 and the picture material 3032. The picture material 3033 and the picture material 3032 may refer to the description of the embodiment of fig. 3C, which is not described herein.
As shown in fig. 6, assume that, in the video generated based on the materials, the electronic device plays the picture material 3033 in the period t4 to t5, plays the picture material 3032 in the period t6 to t7, and transitions to the picture material 3032 with a "superimposed blur" transition effect in the period t5 to t6. The electronic device then plays the picture material 3033 in the period t4 to t5, plays the gradually enlarging picture material 3032 superimposed on the blurred picture material 3033 in the period t5 to t6, and plays the picture material 3032 in the period t6 to t7.
It should be noted that, in addition to the "superimposed blur" effect, the transition may also include an effect of focus blur, which is not limited in the embodiment of the present application.
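The timing of such a transition can be thought of as a blend schedule over the overlap period t5 to t6. The following Python sketch is an assumption for illustration: the weight names and the linear ramp are not taken from the application, which does not specify how the compositing is parameterized.

```python
def superimposed_blur_weights(t, t5, t6):
    """Blend weights for a 'superimposed blur' transition between an outgoing
    and an incoming picture: before t5 only the outgoing picture is shown,
    after t6 only the incoming one; in between, the incoming picture scales up
    while being superimposed on an increasingly blurred outgoing picture."""
    if t <= t5:
        return {"outgoing": 1.0, "incoming": 0.0, "blur": 0.0, "scale": 0.0}
    if t >= t6:
        return {"outgoing": 0.0, "incoming": 1.0, "blur": 0.0, "scale": 1.0}
    p = (t - t5) / (t6 - t5)        # progress through the transition
    return {"outgoing": 1.0 - p, "incoming": p, "blur": p, "scale": p}

# Halfway through a transition running from 4.0 s to 5.0 s
print(superimposed_blur_weights(4.5, 4.0, 5.0))
```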
Therefore, the electronic device can realize scene scheduling and shot scheduling of the materials according to the set camera movement, speed and transition.
On the other hand, video templates are related to scene types. Generally, whether or not the corresponding music is the same, video templates of different types correspond to different scene types, while video templates of the same type correspond to the same scene types.
When the user keeps the default music corresponding to the video template, the electronic device does not need to adjust the duration of each segment of the video template, and beat synchronization can be achieved directly. When the user selects other music as the music corresponding to the video template, the electronic device needs to perform beat detection on the music selected by the user to obtain its tempo, then judge whether the duration of each segment of the video template is an integer multiple of the corresponding beat period, and adjust the duration of any segment that is not, so that the duration of every segment in the video template is an integer multiple of the beat period.
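The duration adjustment can be sketched as snapping each segment length to the nearest whole number of beats of the user-chosen music. A minimal Python sketch, assuming rounding to the nearest beat with a one-beat minimum (the rounding policy is an assumption, not specified by the application):

```python
def snap_to_beats(segment_durations, bpm):
    """Adjust each template segment so its length is an integer multiple of
    the beat period (in seconds) of the selected music."""
    beat = 60.0 / bpm                     # seconds per beat
    snapped = []
    for d in segment_durations:
        n = max(1, round(d / beat))       # nearest whole number of beats, at least one
        snapped.append(n * beat)
    return snapped

# A 0.5 s / 1.0 s / 1.3 s template snapped to 120 BPM music (beat = 0.5 s)
print(snap_to_beats([0.5, 1.0, 1.3], 120))   # [0.5, 1.0, 1.5]
```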
For any piece of music, the embodiments of the present application can detect the beat of the music using a BPM (beats per minute) detection method to obtain its tempo (BPM), where the electronic device analyzes the audio by digital signal processing (DSP) to obtain the beat points of the music. A typical algorithm divides the original audio into several segments, obtains the spectrum by fast Fourier transform, and finally performs filtering analysis based on the sound energy to obtain the beat points of the music.
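As a rough illustration of this kind of energy-based analysis, the sketch below frames a mono sample array, computes per-frame energy, and reads the tempo off the dominant periodicity of the energy envelope via autocorrelation. This is a simplified stand-in for the spectrum-plus-energy filtering described above, assuming `samples` is a one-dimensional NumPy array of mono PCM samples; real beat trackers are considerably more involved.

```python
import numpy as np

def estimate_bpm(samples, sample_rate, frame_size=1024):
    """Very rough tempo estimate from the periodicity of the frame-energy envelope."""
    n_frames = len(samples) // frame_size
    frames = samples[: n_frames * frame_size].reshape(n_frames, frame_size)
    energy = (frames.astype(np.float64) ** 2).sum(axis=1)
    energy -= energy.mean()
    # Autocorrelation of the energy envelope; a peak appears at the beat period.
    ac = np.correlate(energy, energy, mode="full")[n_frames - 1:]
    frame_rate = sample_rate / frame_size          # energy frames per second
    lo = int(frame_rate * 60 / 200)                # shortest lag considered: 200 BPM
    hi = int(frame_rate * 60 / 60)                 # longest lag considered: 60 BPM
    lag = lo + int(np.argmax(ac[lo:hi + 1]))
    return 60.0 * frame_rate / lag
```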
It should be noted that the scene type corresponding to each segment of each video template is set in advance based on practical experience (for example, users perceive strongly the scene type corresponding to a single segment, as well as the scene types corresponding to several consecutive segments at certain positions).
For the music corresponding to any video template, in the embodiments of the present application the whole piece of music is divided into a plurality of segments with the beat points of the music as dividing lines, and each segment is matched with a set scene type.
Each segment is an integer multiple of the beat of the music, so that each segment is synchronized to the musical beat. It can be understood that the musical meter refers to the rule by which strong beats and weak beats are combined, and specifically to the total length of the notes in each bar of the score; the notes may be, for example, half notes, quarter notes or eighth notes. Typically, a piece of music consists of a plurality of bars, and the meter of a piece of music is usually fixed.
It will be appreciated that the user's selection of materials is arbitrary, so in practice there is a high probability that the materials cannot fully satisfy the scene type set for every segment. When this happens, the electronic device may adjust the arrangement order of the materials in various ways.
In some embodiments, the electronic device may assign a priority to each segment. High-priority segments may include, but are not limited to: the beginning, the chorus, the ending, or accented beats of the music. The electronic device may then preferentially satisfy the scene types set for the high-priority segments, placing materials whose scene types match those segments into them, and afterwards place the remaining materials into the remaining segments according to the scene types set for those segments; the scene type of a remaining material may be the same as or different from the scene type set for the remaining segment it is placed in.
In other embodiments, the electronic device may preferentially satisfy the scene types set for the segments closer to the front, placing materials whose scene types match those segments into them first, and then place the remaining materials into the remaining segments according to the scene types set for those segments; the scene type of a remaining material may be the same as or different from the scene type set for the remaining segment it is placed in.
Among the remaining segments, the electronic device may likewise preferentially satisfy the scene types set for the segments closer to the front.
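The priority-first placement described above can be sketched as a greedy assignment. The following Python sketch is illustrative only; the priority labels, the material representation and the fallback of reusing any leftover material are assumptions rather than the device's actual strategy.

```python
def place_materials(segments, materials):
    """Greedy placement: high-priority segments (beginning, chorus, ending,
    accents) pick matching materials first; leftover materials then fill the
    remaining segments in order, even when the scene types no longer match."""
    pool = list(materials)                       # (name, scene_type) pairs
    placement = {}
    ordered = sorted(range(len(segments)),
                     key=lambda i: (0 if segments[i]["priority"] == "high" else 1, i))
    for i in ordered:
        wanted = segments[i]["scene_type"]
        match = next((m for m in pool if m[1] == wanted), None)
        chosen = match if match is not None else (pool[0] if pool else None)
        if chosen is not None:
            pool.remove(chosen)
            placement[i] = chosen
    return placement

segments = [{"scene_type": "C", "priority": "high"},
            {"scene_type": "B", "priority": "low"},
            {"scene_type": "A", "priority": "high"}]
materials = [("pic1", "B"), ("pic2", "A"), ("pic3", "B")]
print(place_materials(segments, materials))   # the high-priority A segment gets pic2
```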
For the music corresponding to any video template, the embodiments of the present application may divide the whole piece of music into a plurality of segments with the beat points of the music as dividing lines, match a plurality of consecutive segments with set scene types, and leave the scene types of the remaining segments unrestricted. This enhances the shot feeling and cinematic feeling of the generated video. The plurality of consecutive segments may be segments at the beginning, the ending, or the chorus of the music.
Taking as an example the division of scene types into the three types of near, middle and far scenes exemplarily shown in fig. 7, the scene types corresponding to a plurality of consecutive segments are introduced below. Here, A represents the scene type corresponding to the near scene, B represents the scene type corresponding to the middle scene, and C represents the scene type corresponding to the far scene.
For example, the scene types corresponding to 5 consecutive segments at the beginning and/or the ending of the music may be CCCBA, so that the generated video creates suspense at the beginning or a lingering, unresolved feeling at the ending.
As another example, the scene types corresponding to 4 consecutive segments at the beginning and/or the ending of the music may be ABBC, so that the generated video prepares to expand the narration at the beginning or at the ending.
As another example, the scene types corresponding to 5 consecutive segments immediately after the beginning and/or immediately before the ending of the music may be BBBBB, so that the generated video expands the narration in the corresponding segments.
For another example, the scene types corresponding to 5 consecutive segments at the chorus of the music may be CCCCA, so that the generated video pushes the narration to a climax at the chorus.
It should be noted that the embodiments of the present application include, but are not limited to, the above specific implementations of the scene types corresponding to a plurality of consecutive segments of the music.
Therefore, the electronic device can adjust the arrangement order of the materials according to the scene types set for the segments delimited by the beat points of the music.
In summary, the electronic device arranges the materials according to the scene-type sequence set in the video template, adds scene feeling and shot feeling to the materials according to the camera movement, speed and transition set in the video template, and generates a video with the playing effect corresponding to the video template. The generated video thus has expressiveness and tension in describing the plot, expressing the characters' thoughts and emotions, and handling the relationships between characters, which enhances its artistic appeal.
For ease of illustration, specific implementations of video templates are described with reference to tables 1 and 2, taking a parent-child type video template and a travel type video template as examples. In tables 1 and 2, the scene types are the three types of near, middle and far scenes exemplarily shown in fig. 7; for convenience of description, A represents the scene type corresponding to the near scene, B represents the scene type corresponding to the middle scene, and C represents the scene type corresponding to the far scene.
TABLE 1 parent-child type video template
[Table 1 is reproduced as an image in the original publication; its transition settings at each time point are described below.]
In table 1, at the beginning of the video, for video material the transition uses a "white fade-in" effect and an "opening fade-out" effect; for picture material, the transition uses a "white gradually brightening" effect.
At time 6x, for video material, the transition adopts a "fast downshifting" effect. For picture materials, the transition adopts the effect of 'upper and lower fuzzy oblique angle pushing'.
At time 14x, for video material, the transition takes the effect of "stretch-in". Aiming at the picture materials, the transition adopts the effect of 'left-right fuzzy pushing'.
At time 22x, for video material, the transition takes a "fast up" effect. For picture materials, the transition adopts the effect of 'push up and focus blurring/zooming behind the scenes'.
At 32x, for video material, the transition takes the effect of "left-hand-out. For picture materials, the transition adopts the effect of right axis rotation blurring.
At 34x, the transition takes the effect of "right rotation" for the video material. For picture materials, the transition adopts the effect of 'rotating and blurring to the left axis'.
At time 36x, for video material, the transition takes the effect of "fast left-slide". For picture materials, the transition adopts the effect of perspective blurring.
At time 38x, for video material, the transition employs a "blurring" effect. For picture materials, no effect is adopted for transition.
At time 40x, for video material, the transition takes the effect of "left-hand-out. For picture materials, no effect is adopted for transition.
At time 42x, for video material, the transition takes the effect of "white fade out" and the effect of "right rotate". For picture materials, the transition adopts the effect of 'whitening and fading out'.
At time 44x, the transition takes the effect of "fast left-slide" for the video material. For picture materials, no effect is adopted for transition.
At time 46x, for video material, the transition employs a "blurring" effect. For picture materials, no effect is adopted for transition.
At time 48x, for video material, the transition takes the effect of "white fade out". For picture materials, the transition adopts the effect of 'whitening and fading out'.
At time 49x, for video material, the transition takes the effect of "left rotation". For picture materials, the transition adopts the effect of perspective blurring.
At time 52x, the transition takes the effect of "left rotation" for the video material. For picture materials, no effect is adopted for transition.
At time 56x, the transition takes the effect of "fast left" for the video material. For picture materials, no effect is adopted for transition.
At time 60x, the transition takes the effect of "fast left" for the video material. For picture materials, no effect is adopted for transition.
At 62x, for video material, the transition takes the effect of "stretch-in". For picture materials, no effect is adopted for transition.
Table 2 video templates for travel types
[Table 2 is reproduced as an image in the original publication.]
The specific implementation manner of the transition in table 2 can be referred to the description manner of the transition in table 1, and will not be described herein.
It should be noted that the parameters of a video template include, but are not limited to, those relating to scene type, camera movement, speed and transition.
In addition, the video template can adaptively adjust the camera-movement mode based on the frame of the material, so as to achieve an optimal playing effect. For example, when generating a landscape (banner) video, the electronic device may move the shot from top to bottom to display the largest possible area of a vertical (portrait) material; when generating a vertical (portrait) video, the electronic device may move the shot from left to right to display the largest possible area of a landscape (banner) material. This helps to maximize the displayed area of the materials, enriches the content of the video, and preserves the sense of motion and composition that the video conveys.
In the embodiments of the present application, each scene type in the video template corresponds to a segment, and the durations of the segments may be the same or different. The electronic device may first place the materials selected by the user based on the duration of each segment in the video template; typically, video material is placed in the longer segments in preference to picture material. The electronic device then adjusts the arrangement order of the placed materials based on the scene types corresponding to the segments so that the scene types of the materials match the scene types of the segments, thereby ensuring that every material selected by the user appears at least once in the generated video and that the same material is not placed in adjacent segments.
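A duration-first placement of this kind can be sketched as follows. The segment and material dictionaries, the video-before-picture preference and the adjacency check are illustrative assumptions; the application does not specify this exact procedure.

```python
def place_by_duration(segments, materials):
    """Fill the longest segments first, preferring video material over
    picture material, then verify that no material occupies two adjacent
    segments."""
    longest_first = sorted(range(len(segments)), key=lambda i: -segments[i]["duration"])
    pool = sorted(materials, key=lambda m: 0 if m["kind"] == "video" else 1)
    slots = [None] * len(segments)
    for seg_idx, material in zip(longest_first, pool):
        slots[seg_idx] = material["name"]
    for a, b in zip(slots, slots[1:]):
        assert a is None or a != b, "same material placed in adjacent segments"
    return slots

segments = [{"duration": 4}, {"duration": 2}, {"duration": 2}, {"duration": 1}]
materials = [{"name": "video_1", "kind": "video"},
             {"name": "pic_1", "kind": "picture"},
             {"name": "pic_2", "kind": "picture"},
             {"name": "pic_3", "kind": "picture"}]
print(place_by_duration(segments, materials))   # ['video_1', 'pic_1', 'pic_2', 'pic_3']
```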
It should be noted that, the embodiments of the present application are not limited to the above implementation to adjust the arrangement order of the materials in the video.
In addition, when the number of materials selected by the user is large and the set duration of the video is short, the duration of the segment corresponding to each scene type can be set short, so that all materials can appear in the video once. When the number of materials selected by the user is small and the set duration of the video is long, the electronic device can select one or more clips from the video materials to appear repeatedly in the generated video N times, where N is a positive integer greater than 1. If the required video duration still cannot be reached, the electronic device can repeat the entire arranged sequence of materials in the generated video M times, where M is a positive integer greater than 1.
A minimum duration and a maximum duration can be set for the music corresponding to the video template, so that the materials selected by the user are guaranteed to appear at least once in the generated video.
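Repeating the arranged materials to fill a longer timeline, while never letting the same material occupy two adjacent slots, can be sketched as follows. The function name and the simple skip-ahead rule are assumptions; the sketch also assumes at least two distinct materials are available.

```python
def fill_timeline(slot_count, arranged_materials):
    """Cycle through the arranged materials until every slot is filled,
    skipping a candidate whenever it would repeat the previous slot."""
    timeline, i = [], 0
    while len(timeline) < slot_count:
        candidate = arranged_materials[i % len(arranged_materials)]
        if timeline and timeline[-1] == candidate:
            i += 1
            continue
        timeline.append(candidate)
        i += 1
    return timeline

# Three arranged materials repeated over seven slots
print(fill_timeline(7, ["clip_a", "pic_b", "pic_c"]))
```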
Based on the foregoing description, the playing effect of the video 3039 in fig. 3S is related to the video template; typically, different video templates make the video 3039 play differently. When the user selects the video material 3031, the picture material 3032, the picture material 3033, the video material 3034, the picture material 3035, the picture material 3036, the picture material 3037 and the video material 3038, the video 3039 may include all of these materials.
Again taking the division of scene types into the near, middle and far scenes exemplarily shown in fig. 7 as an example, where A represents the scene type corresponding to the near scene, B represents the scene type corresponding to the middle scene, and C represents the scene type corresponding to the far scene, the following describes how the video 3039 is generated.
In this embodiment, the electronic device may identify that the scene type corresponding to the video material 3031 is BCBBB, the scene type of the picture material 3032 is B, the scene type of the picture material 3033 is B, the scene type corresponding to the video material 3034 is CCCC, the scene type of the picture material 3035 is B, the scene type of the picture material 3036 is a, the scene type of the picture material 3037 is a, and the scene type corresponding to the video material 3038 is BCCCC.
Based on the video material 3031, the picture material 3032, the picture material 3033, the video material 3034, the picture material 3035, the picture material 3036, the picture material 3037 and the video material 3038, the electronic device may determine that a parent-child type video template is to be used for generating the video 3039.
In some embodiments, if the parent-child type video template shown in table 1 is adopted, the electronic device places each of the video material 3031, the picture material 3032, the picture material 3033, the video material 3034, the picture material 3035, the picture material 3036, the picture material 3037 and the video material 3038, based on its identified scene type, at the position in the music whose scene type given in table 1 matches it, so as to obtain the video 3039.
In other embodiments, the electronic device may enhance the playing effect of the generated video according to the scene types set in the video template for a plurality of consecutive segments of the music, which helps to enhance the shot feeling and cinematic feeling of the video.
It should be noted that, in addition to the above two modes, the electronic device may set the scene types in the video template according to the actual situation and empirical values; the way the scene types in the video template are set is not limited in the embodiments of the present application.
In the following, with reference to fig. 8A to 8E, a playback effect of a video generated by the electronic device based on a material selected by a user will be illustrated.
Referring to fig. 8A-8E, fig. 8A-8E are schematic diagrams illustrating a playing sequence of each material when the electronic device plays the generated video.
As shown in fig. 8A, when the user selects the picture material 11, the picture material 12, the picture material 13, the picture material 14 and the picture material 15, the electronic device determines that: the scene-type sequence in the video template is CCCBA, and the durations of the scene types C, C, C, B and A are 4x, 2x, 2x, x and 2x respectively, where x = 0.48 seconds; the scene type of the picture material 11 is B, the scene type of the picture material 12 is B, the scene type of the picture material 13 is C, the scene type of the picture material 14 is A, and the scene type of the picture material 15 is C.
Based on the scene types in the video template and the respective scene types of the picture material 11, the picture material 12, the picture material 13, the picture material 14 and the picture material 15, the electronic device can determine that the materials lack one material of scene type C for a segment of duration 2x, so the materials cannot exactly match the scene types in the video template. Since every material needs to appear at least once, the electronic device can change the scene-type sequence CCCBA in the video template to CBCBA.
Thus, the electronic device adjusts the arrangement order of the picture material 11, the picture material 12, the picture material 13, the picture material 14, and the picture material 15 based on the scene type CBCBA, and generates a video exemplarily shown in fig. 8A.
In fig. 8A, the playing order of the picture material 11, the picture material 12, the picture material 13, the picture material 14 and the picture material 15 in the generated video is:
Between 0 and 4x: picture material 13;
Between 4x and 6x: picture material 11;
Between 6x and 8x: picture material 15;
Between 8x and 9x: picture material 12;
Between 9x and 11x: picture material 14.
Further, the scene types corresponding to the video exemplarily shown in fig. 8A are C, B, C, B and A, respectively.
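The adjustment from CCCBA to CBCBA can be sketched as counting which scene types the template asks for but the materials cannot supply, and swapping those template positions for the spare scene types that are available. The Python sketch below is a simplified greedy stand-in, not the device's actual matching; in particular it does not model which of the surplus positions is swapped (fig. 8A swaps the second C, this sketch swaps the first).

```python
from collections import Counter

def adapt_template(template, material_types):
    """If the selected materials cannot cover the template's scene-type
    sequence exactly, replace unmet template positions with the material
    scene types that would otherwise go unused, so every material can
    still appear once."""
    need, have = Counter(template), Counter(material_types)
    surplus = list((need - have).elements())   # wanted by the template but unavailable
    unused = list((have - need).elements())    # available but not wanted
    adapted = list(template)
    for i, s in enumerate(adapted):
        if surplus and unused and s == surplus[0]:
            adapted[i] = unused.pop(0)
            surplus.pop(0)
    return "".join(adapted)

# Materials of types B, B, C, A, C against template CCCBA
print(adapt_template("CCCBA", "BBCAC"))   # -> BCCBA (CBCBA in fig. 8A)
```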
As shown in fig. 8B, when the user selects the picture material 21, the picture material 22, the picture material 23, the picture material 24 and the picture material 25, the electronic device determines that: the scene-type sequence in the video template is CCCBA, and the durations of the scene types C, C, C, B and A are 4x, 2x, 2x, x and 2x respectively, where x = 0.48 seconds; the scene type of the picture material 21 is C, the scene type of the picture material 22 is B, the scene type of the picture material 23 is C, the scene type of the picture material 24 is A, and the scene type of the picture material 25 is C.
The electronic device may learn that all the materials can exactly match the scene type in the video template based on the scene type in the video template and the respective scene types of the picture material 21, the picture material 22, the picture material 23, the picture material 24 and the picture material 25. Thus, the electronic device adjusts the arrangement order of the picture material 21, the picture material 22, the picture material 23, the picture material 24, and the picture material 25 based on the scene type CCCBA, generating a video exemplarily shown in fig. 8B.
In fig. 8B, the playing order of the picture material 21, the picture material 22, the picture material 23, the picture material 24 and the picture material 25 in the generated video is:
Between 0 and 4x: picture material 23;
Between 4x and 6x: picture material 21;
Between 6x and 8x: picture material 25;
Between 8x and 9x: picture material 22;
Between 9x and 11x: picture material 24.
Further, the scene types corresponding to the video exemplarily shown in fig. 8B are CCCBA, respectively.
As shown in fig. 8C, when the user selects the picture material 31, the video material 31, the video material 32, the picture material 32 and the picture material 33, the electronic device determines that: the scene-type sequence in the video template is CCCBA, and the durations of the scene types C, C, C, B and A are 4x, 2x, 2x, x and 2x respectively, where x = 0.48 seconds; the scene type of the picture material 31 is B, the scene type of the video material 31 is B and its duration is equal to x, the scene type of the video material 32 is C and its duration is greater than or equal to 4x, the scene type of the picture material 32 is A, and the scene type of the picture material 33 is C.
Based on the scene types in the video template and the respective scene types of the picture material 31, the video material 31, the video material 32, the picture material 32 and the picture material 33, the electronic device can determine that the materials lack one material of scene type C for a segment of duration 2x, so the materials cannot exactly match the scene types in the video template. Since every material needs to appear at least once, the electronic device can change the scene-type sequence CCCBA in the video template to CBCBA.
Thus, the electronic device adjusts the arrangement order of the picture material 31, the video material 31, the video material 32, the picture material 32 and the picture material 33 based on the scene-type sequence CBCBA, and generates the video exemplarily shown in fig. 8C.
In fig. 8C, the playing order of the picture material 31, the video material 31, the video material 32, the picture material 32 and the picture material 33 in the generated video is:
Between 0 and 4x: video material 32;
Between 4x and 6x: picture material 31;
Between 6x and 8x: picture material 33;
Between 8x and 9x: video material 31;
Between 9x and 11x: picture material 32.
Further, the scene types corresponding to the video exemplarily shown in fig. 8C are CBCBA, respectively.
As shown in fig. 8D, when the user selects the picture material 41, the video material 41, the video material 42, the picture material 42 and the picture material 43, the electronic device determines that: the scene-type sequence in the video template is CCCBA, and the durations of the scene types C, C, C, B and A are 4x, 2x, 2x, x and 2x respectively, where x = 0.48 seconds; the scene type of the picture material 41 is C, the scene type of the video material 41 is B and its duration is equal to x, the scene type of the video material 42 is C and its duration is greater than or equal to 4x, the scene type of the picture material 42 is A, and the scene type of the picture material 43 is C.
Based on the scene types in the video template and the respective scene types of the picture material 41, the video material 41, the video material 42, the picture material 42 and the picture material 43, the electronic device can determine that the materials exactly match the scene types in the video template. Thus, the electronic device adjusts the arrangement order of these materials based on the scene-type sequence CCCBA and generates the video exemplarily shown in fig. 8D.
In fig. 8D, the playing order of the picture material 41, the video material 41, the video material 42, the picture material 42 and the picture material 43 in the generated video is:
Between 0 and 4x: video material 42;
Between 4x and 6x: picture material 41;
Between 6x and 8x: picture material 43;
Between 8x and 9x: video material 41;
Between 9x and 11x: picture material 42.
Further, the scene types corresponding to the video exemplarily shown in fig. 8D are CCCBA, respectively.
As shown in fig. 8E, when the user selects the picture material 51, the video material 51, the video material 52 and the picture material 52, the electronic device determines that: the scene-type sequence in the video template is CCCBA, and the durations of the scene types C, C, C, B and A are 4x, 2x, 2x, x and 2x respectively, where x = 0.48 seconds; the scene type of the picture material 51 is C; the scene types of the video material 51 are B and C, the segment of scene type C in the video material 51 having a duration of 2x and the segment of scene type B having a duration of x; the scene type of the video material 52 is C and its duration is greater than or equal to 4x; and the scene type of the picture material 52 is A.
Based on the scene types in the video template and the respective scene types of the picture material 51, the video material 51, the video material 52 and the picture material 52, the electronic device can determine that the materials exactly match the scene types in the video template. Thus, the electronic device adjusts the arrangement order of these materials based on the scene-type sequence CCCBA and generates the video exemplarily shown in fig. 8E.
In fig. 8E, the playing order of the picture material 51, the video material 51, the video material 52 and the picture material 52 in the generated video is:
Between 0 and 4x: video material 52;
Between 4x and 6x: the segment corresponding to scene type C in the video material 51;
Between 6x and 8x: picture material 51;
Between 8x and 9x: the segment corresponding to scene type B in the video material 51;
Between 9x and 11x: picture material 52.
Further, the scene types corresponding to the video exemplarily shown in fig. 8E are CCCBA, respectively.
Based on the foregoing description, after determining that the scene-type sequence in the video template is CCCBA, the electronic device may match the scene types of the materials against the scene types in the video template to a preset degree, taking into account factors such as the playing effect of the video, the duration of the video, the scene types in the video, how the materials are used, the number of materials, the scene types of the materials and whether the materials may be reused, so as to generate the video. That is, the scene types of the video generated by the electronic device are identical, or partially identical, to the scene types in the video template. The preset degree may be 100% (an exact match) or 90% (a fuzzy match), and is usually greater than or equal to 50%. In the embodiments of the present application, the electronic device adjusts the arrangement order of the materials in the generated video based on the arrangement order of the scene types in the video template, and then combines the camera movement, speed, transition and other techniques in the video template to generate a video with visual continuity and a high-quality feel.
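The preset matching degree can be read as the fraction of segment positions whose scene type in the generated video equals the scene type set by the template. A minimal Python sketch, assuming position-by-position comparison of equal-length sequences (how the degree is actually computed is not specified by the application):

```python
def match_degree(video_scene_types, template_scene_types):
    """Fraction of positions whose scene type in the generated video equals
    the scene type set by the template; 1.0 means an exact match."""
    pairs = zip(video_scene_types, template_scene_types)
    same = sum(1 for v, t in pairs if v == t)
    return same / max(len(template_scene_types), 1)

# The fig. 8A video (CBCBA) against the template (CCCBA): 80% of positions agree
print(match_degree("CBCBA", "CCCBA"))   # 0.8
```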
In summary, the video generation method of the embodiments of the present application strengthens the shot feeling and cinematic feeling of the video, and helps improve the user experience.
Based on the foregoing description, embodiments of the present application may provide a video generating method.
Referring to fig. 9, fig. 9 is a schematic diagram illustrating a video generating method according to an embodiment of the present application. As shown in fig. 9, the video generating method according to the embodiment of the present application may include:
s101, the electronic equipment displays a first interface of a first application, wherein the first interface comprises a first control and a second control.
S102, after receiving a first operation acting on a first control, the electronic device determines that the arrangement sequence of a first material, a second material and a third material is a first sequence; and generating a first video from the first material, the second material and the third material in a first order.
S103, after receiving a second operation acting on the second control, the electronic device determines that the arrangement sequence of the first material, the second material and the third material is a second sequence, and the second sequence is different from the third sequence; and generating a second video from the first material, the second material and the third material in a second order.
The first material, the second material and the third material are different image materials stored in the electronic device; the third sequence is the chronological order in which the first material, the second material and the third material were stored in the electronic device; and the first sequence is different from the third sequence.
In this embodiment of the present application, specific implementation manners of the first material, the second material, and the third material may be referred to the foregoing description. Specific implementation of the first control may refer to any one of the controls 30811, 30812, 30813 and 30814 shown in fig. 3F, and specific implementation of the second control may refer to any one of the controls 30811, 30812, 30813 and 30814 shown in fig. 3F, where the first control is different from the second control. The first order and the second order may be the same or different, which is not limited in the embodiment of the present application. The playing effects of the first video and the second video are different, and the video 1, the video 2, the video 3 and the video generated based on the video material 3031, the picture material 3032, the picture material 3033, the video material 3034, the picture material 3035, the picture material 3036, the picture material 3037 and the video material 3038 selected by the user can be specifically referred to. In some embodiments, the first application is a gallery application of the electronic device.
According to the embodiments of the present application, the electronic device matches a suitable video template by identifying the scene types of the materials, adjusts the arrangement order of the materials based on the scene type set for each segment in the video template, and, combining the camera movement, speed and transition set for each segment in the video template, automatically generates a video with visual continuity and a high-quality feel without relying on manual editing by the user, thereby enhancing the shot feeling and cinematic feeling of the video and improving the user experience.
In some embodiments, the first video is divided into a plurality of segments with the beat of the music as a boundary; the first material, the second material and the third material appear at least once in the first video, and the materials appearing in any two adjacent segments of the first video are different; the first material, the second material, and the third material appear at least once in the second video, and the materials appearing in any two adjacent segments of the second video are different.
In some embodiments, the method further comprises: the electronic equipment displays a second interface of the first application; after receiving a third operation acting on the second interface, the electronic device generates a first video from the first material, the second material and the third material.
In the embodiments of the present application, for the specific implementation of the second interface, reference may be made to the description of the user interface 13 exemplarily shown in fig. 3E in the first mode, or to the description of the user interface 17 exemplarily shown in fig. 3N in the second mode, or to the description of the user interface 17 exemplarily shown in fig. 3N in the third mode. For the implementation of the third operation, reference may be made to the description of clicking the text "movie" in the window 305 of the user interface 13 exemplarily shown in fig. 3E in the first mode, or to the description of clicking the control 3142 in the user interface 17 exemplarily shown in fig. 3N in the second mode, or to the description of clicking the control 3142 in the user interface 17 exemplarily shown in fig. 3N in the third mode.
In some embodiments, the method further comprises: the electronic equipment determines to generate a first video from the first material, the second material, the third material and the fourth material; the fourth material is an image material which is stored in the electronic device and is different from the first material, the second material and the third material.
In the embodiments of the present application, for the specific implementation of the foregoing solution, reference may be made to the description of video 1, video 2 and video 3 in the user interface 18 shown in fig. 3P in the third mode.
In some embodiments, a third control is also included in the first interface; the method further comprises the steps of: after receiving a fourth operation acting on the third control, the electronic device displays a third interface, wherein the third interface comprises: options of configuration information, the configuration information includes: at least one parameter of duration, filter, frame, material, or title; after receiving a fifth operation on the options of the configuration information, the electronic device generates a third video based on the configuration information, the first material, the second material, and the third material in the first order.
In this embodiment of the present application, the specific implementation manner of the third control may be referred to the descriptions of the control 3082, the control 3083, the control 3084, and the control 3085 shown in the example of fig. 3F, which are not described herein. The third interface may be described with reference to the user interface 21 shown in fig. 3G, or the user interface 22 shown in fig. 3H, or the user interface 23 shown in fig. 3I, or the user interface 24 shown in fig. 3J, which will not be described here.
For example, the electronic device may adjust parameters such as duration of video 1, frame, whether new material is added, whether existing material is deleted, etc., through user interface 21, which is shown by way of example in fig. 3G. As another example, the electronic device may adjust the music of video 1 through user interface 22, which is shown by way of example in fig. 3H. As another example, the electronic device may adjust the filter of video 1 through user interface 23, shown schematically in fig. 3I. As another example, the electronic device may adjust whether a title is added to video 1 via user interface 24, which is shown by way of example in fig. 3J.
In some embodiments, the first interface further includes a fourth control therein; the method further comprises the steps of: after generating the first video, the electronic device saves the first video in response to a fourth operation on a fourth control. In this embodiment of the present application, the specific implementation manner of the fourth control may refer to the description of the control 309 shown in the example of fig. 3F, which is not described herein.
In some embodiments, the method specifically comprises: the electronic device determines the scene type corresponding to the first material, the scene type corresponding to the second material and the scene type corresponding to the third material; the electronic device determines, based on the scene type corresponding to the first material, the scene type corresponding to the second material, the scene type corresponding to the third material and the scene type set for each segment in the first video template, the material matching the scene type corresponding to a first segment, where the first segment is any segment in the first video template, and the arrangement order of the materials corresponding to all the segments in the first video template is the first sequence; the electronic device determines, based on the scene type corresponding to the first material, the scene type corresponding to the second material, the scene type corresponding to the third material and the scene type set for each segment in the second video template, the material matching the scene type corresponding to a second segment, where the second segment is any segment in the second video template, and the arrangement order of the materials corresponding to all the segments in the second video template is the second sequence; wherein the first video template is different from the second video template, each segment in the first video corresponds to a segment in the first video template, and each segment in the second video corresponds to a segment in the second video template.
In the embodiments of the present application, for the above solution, reference may be made to the descriptions of video 1, video 2, video 3 and the video generated based on the video material 3031, the picture material 3032, the picture material 3033, the video material 3034, the picture material 3035, the picture material 3036, the picture material 3037 and the video material 3038 selected by the user, which are not repeated here.
In some embodiments, the method further comprises: the electronic equipment generates a first video from the first material, the second material and the third material according to the first sequence and the mirror effect, the speed effect and the transition effect which are set by each segment in the first video template; and the electronic equipment generates a second video from the first material, the second material and the third material according to the second sequence and the mirror effect, the speed effect and the transition effect which are set by each segment in the second video template.
In the embodiments of the present application, for the foregoing solution, reference may be made to the foregoing description; the specific implementation of the camera-movement (mirror) effect may refer to the descriptions exemplarily shown in fig. 4A to 4J, the specific implementation of the speed effect may refer to the description exemplarily shown in fig. 5, and the specific implementation of the transition effect may refer to the description exemplarily shown in fig. 6, which are not repeated here.
In some embodiments, when the first material is a picture material, the method specifically includes: when the scene type corresponding to the first material is the same as the scene type corresponding to the first segment, or the scene type corresponding to the first material is adjacent to the sequence of the scene type corresponding to the first segment according to a preset rule, the electronic device determines the first material as a material matched with the scene type corresponding to the first segment; and when the scene type corresponding to the first material is the same as the scene type corresponding to the second segment, or the scene type corresponding to the first material is adjacent to the sequence of the scene type corresponding to the second segment according to a preset rule, the electronic equipment determines the first material as a material matched with the scene type corresponding to the second segment.
In the embodiments of the present application, for the specific implementation of the above solution, reference may be made to the descriptions exemplarily shown in fig. 8A to 8E, which are not repeated here. For a specific implementation of the first material, reference may be made to the picture materials exemplarily mentioned in fig. 8A to 8E.
In some embodiments, when the first material is a video material, the method specifically includes: when the scene type corresponding to the fourth material is the same as the scene type corresponding to the first segment or the scene type corresponding to the fourth material is adjacent to the sequence of the scene type corresponding to the first segment according to a preset rule and the duration of the fourth material is equal to the duration of the first segment, the electronic device intercepts the fourth material from the first material and determines the fourth material as a material matched with the scene type corresponding to the first segment; when the scene type corresponding to the fourth material is the same as the scene type corresponding to the second segment or the scene type corresponding to the fourth material is adjacent to the sequence of the scene type corresponding to the second segment according to a preset rule, and the duration of the fourth material is equal to that of the second segment, the electronic device intercepts the fourth material from the second material and determines the fourth material as a material matched with the scene type corresponding to the second segment; the fourth material is part or all of the first material.
In the embodiments of the present application, for the specific implementation of the above solution, reference may be made to the descriptions exemplarily shown in fig. 8A to 8E, which are not repeated here. For a specific implementation of the first material, reference may be made to the video materials exemplarily mentioned in fig. 8A to 8E, and for a specific implementation of the fourth material, reference may be made to the video material 51 or the video material 52.
In some embodiments, in the order of a preset rule, the scene types include: the near scene, the middle scene and the far scene; the scene type adjacent to the near scene is the far scene, the scene types adjacent to the middle scene are the near scene and the far scene, and the scene type adjacent to the far scene is the near scene. In the embodiments of the present application, the division of scene types is not limited to the above implementation; for details, reference may be made to the foregoing description, which is not repeated here.
Illustratively, the present application provides an electronic device comprising: a memory and a processor; the memory is used for storing program instructions; the processor is configured to invoke the program instructions in the memory to cause the electronic device to perform the video generation method of the previous embodiment.
Illustratively, the present application provides a chip system for use with an electronic device including a memory, a display screen, and a sensor; the chip system includes: a processor; the electronic device performs the video generation method of the previous embodiment when the processor executes the computer instructions stored in the memory.
Illustratively, the present application provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, causes an electronic device to implement the video generation method in the previous embodiments.
Illustratively, the present application provides a computer program product comprising: executing instructions, the executing instructions being stored in a readable storage medium, the executing instructions being readable from the readable storage medium by at least one processor of the electronic device, the executing instructions being executable by the at least one processor to cause the electronic device to implement the video generation method of the previous embodiments.
In the above-described embodiments, all or part of the functions may be implemented by software, hardware, or a combination of software and hardware. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When loaded and executed on a computer, produces a flow or function in accordance with embodiments of the present application, in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable apparatus. The computer instructions may be stored in a computer readable storage medium. The computer readable storage medium may be any available medium that can be accessed by a computer or a data storage device such as a server, data center, etc. that contains an integration of one or more available media. The usable medium may be a magnetic medium (e.g., a floppy disk, a hard disk, a magnetic tape), an optical medium (e.g., a DVD), or a semiconductor medium (e.g., a Solid State Disk (SSD)), or the like.
Those of ordinary skill in the art will appreciate that implementing all or part of the above-described method embodiments may be accomplished by a computer program to instruct related hardware, the program may be stored in a computer readable storage medium, and the program may include the above-described method embodiments when executed. And the aforementioned storage medium includes: ROM or random access memory RAM, magnetic or optical disk, etc.

Claims (15)

1. A video generation method, comprising:
the method comprises the steps that the electronic equipment displays a first interface of a first application, wherein the first interface comprises a first control and a second control;
after the electronic device receives a first operation acting on the first control, determining that the arrangement sequence of a first material, a second material and a third material is a first sequence, wherein the first material, the second material and the third material are different image materials stored in the electronic device, the first sequence is different from a third sequence, and the third sequence is the time sequence of storing the first material, the second material and the third material into the electronic device; generating a first video from the first material, the second material and the third material according to the first sequence;
After receiving a second operation acting on the second control, the electronic device determines that the arrangement sequence of the first material, the second material and the third material is a second sequence, and the second sequence is different from the third sequence; generating a second video from the first material, the second material and the third material according to the second sequence;
the first sequence is a sequence in which materials whose scene types respectively match the scene types of the segments in a first video template are arranged according to the order of the segments in the first video template;
the second sequence is a sequence in which materials whose scene types respectively match the scene types of the segments in a second video template are arranged according to the order of the segments in the second video template;
the first video template is different from the second video template, each segment in the first video corresponds to a segment in the first video template, and each segment in the second video corresponds to a segment in the second video template.
2. The method of claim 1, wherein the first video is divided into a plurality of segments with beats of music as boundaries;
the first material, the second material and the third material each appear at least once in the first video, and the materials appearing in any two adjacent segments of the first video are different;
the first material, the second material and the third material each appear at least once in the second video, and the materials appearing in any two adjacent segments of the second video are different.
3. The method according to claim 1 or 2, characterized in that the method further comprises:
the electronic device displays a second interface of the first application;
and after receiving a third operation acting on the second interface, the electronic device generates the first video from the first material, the second material and the third material.
4. The method according to claim 1 or 2, characterized in that the method further comprises:
the electronic device determines to generate the first video from the first material, the second material, the third material and a fourth material;
the fourth material is an image material which is stored in the electronic device and is different from the first material, the second material and the third material.
5. The method of claim 1 or 2, wherein the first interface further comprises a third control; the method further comprises the steps of:
after receiving a fourth operation acting on the third control, the electronic device displays a third interface, wherein the third interface comprises: options for configuration information, the configuration information comprising: at least one parameter of duration, filter, frame, material, or title;
and after receiving a fifth operation acting on the options of the configuration information, the electronic device generates a third video from the first material, the second material and the third material according to the first sequence based on the configuration information.
6. The method of claim 1 or 2, wherein the first interface further comprises a fourth control; the method further comprises the steps of:
after generating the first video, the electronic device responds to a fourth operation on the fourth control to save the first video.
7. The method according to claim 1 or 2, wherein prior to the determining of the first sequence and the second sequence, the method further comprises:
the electronic device determines the scene type corresponding to the first material, the scene type corresponding to the second material, and the scene type corresponding to the third material.
8. The method of claim 7, wherein the method further comprises:
the electronic device generates the first video from the first material, the second material and the third material according to the first sequence and the mirror effect, the speed effect and the transition effect set for each segment in the first video template;
and the electronic device generates the second video from the first material, the second material and the third material according to the second sequence and the mirror effect, the speed effect and the transition effect set for each segment in the second video template.
9. The method according to claim 1 or 2, wherein when the first material is a picture material, the method specifically comprises:
when the scene type corresponding to the first material is the same as the scene type corresponding to a first segment, or the scene type corresponding to the first material is adjacent, in an order specified by a preset rule, to the scene type corresponding to the first segment, the electronic device determines the first material as a material matching the scene type corresponding to the first segment, wherein the first segment is any segment in the first video template;
and when the scene type corresponding to the first material is the same as the scene type corresponding to a second segment, or the scene type corresponding to the first material is adjacent, in the order specified by the preset rule, to the scene type corresponding to the second segment, the electronic device determines the first material as a material matching the scene type corresponding to the second segment, wherein the second segment is any segment in the second video template.
10. The method according to claim 1 or 2, wherein when the first material is video material, the method specifically comprises:
when the scene type corresponding to a fourth material is the same as the scene type corresponding to a first segment, or the scene type corresponding to the fourth material is adjacent, in an order specified by a preset rule, to the scene type corresponding to the first segment, and the duration of the fourth material is equal to the duration of the first segment, the electronic device intercepts the fourth material from the first material and determines the fourth material as a material matching the scene type corresponding to the first segment, wherein the first segment is any segment in the first video template;
when the scene type corresponding to the fourth material is the same as the scene type corresponding to a second segment, or the scene type corresponding to the fourth material is adjacent, in the order specified by the preset rule, to the scene type corresponding to the second segment, and the duration of the fourth material is equal to the duration of the second segment, the electronic device intercepts the fourth material from the first material and determines the fourth material as a material matching the scene type corresponding to the second segment, wherein the second segment is any segment in the second video template;
the fourth material is part or all of the first material.
11. The method of claim 10, wherein, in the order of the preset rule, the scene type adjacent to the near scene is the far scene, the scene types adjacent to the middle scene are the near scene and the far scene, and the scene type adjacent to the far scene is the near scene.
12. The method of any of claims 1-2, 11, wherein the first application is a gallery application of the electronic device.
13. An electronic device, comprising: a memory and a processor;
the memory is used for storing program instructions;
the processor is configured to invoke program instructions in the memory to cause the electronic device to perform the video generation method of any of claims 1-12.
14. A chip system, wherein the chip system is applied to an electronic device comprising a memory, a display screen and a sensor; the chip system includes: a processor; the electronic device performs the video generation method of any of claims 1-12 when the processor executes the computer instructions stored in the memory.
15. A computer readable storage medium having stored thereon a computer program, characterized in that the computer program, when executed by a processor, causes the electronic device to implement the video generation method of any of claims 1-12.
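Illustratively, the division of a video into segments bounded by music beats and the interception of a clip of matching duration from a video material, as recited in the claims above, can be sketched as follows. The sketch is non-normative; the function names and the sample beat times are hypothetical and chosen only for illustration.

def segments_from_beats(beat_times, total_duration):
    # Split the template timeline into segments bounded by the beats of the music.
    boundaries = [0.0] + [t for t in beat_times if 0.0 < t < total_duration] + [total_duration]
    return [(start, end) for start, end in zip(boundaries, boundaries[1:]) if end > start]

def clip_for_segment(material_duration, segment):
    # Return (start, end) of a clip inside the video material whose duration equals
    # the segment duration, or None if the material is shorter than the segment.
    needed = segment[1] - segment[0]
    if material_duration < needed:
        return None
    return (0.0, needed)  # here the clip is simply taken from the start of the material

# Hypothetical example: beats every 0.75 s in a 3 s template, 5 s video material.
segments = segments_from_beats([0.75, 1.5, 2.25], 3.0)
print(segments)                            # [(0.0, 0.75), (0.75, 1.5), (1.5, 2.25), (2.25, 3.0)]
print(clip_for_segment(5.0, segments[0]))  # (0.0, 0.75)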
CN202011057180.9A 2020-09-29 2020-09-29 Video generation method and electronic equipment Active CN114363527B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202011057180.9A CN114363527B (en) 2020-09-29 2020-09-29 Video generation method and electronic equipment
PCT/CN2021/116047 WO2022068511A1 (en) 2020-09-29 2021-09-01 Video generation method and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011057180.9A CN114363527B (en) 2020-09-29 2020-09-29 Video generation method and electronic equipment

Publications (2)

Publication Number Publication Date
CN114363527A (en) 2022-04-15
CN114363527B (en) 2023-05-09

Family

ID=80949616

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011057180.9A Active CN114363527B (en) 2020-09-29 2020-09-29 Video generation method and electronic equipment

Country Status (2)

Country Link
CN (1) CN114363527B (en)
WO (1) WO2022068511A1 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115185429A (en) * 2022-05-13 2022-10-14 北京达佳互联信息技术有限公司 File processing method and device, electronic equipment and storage medium
CN116055799B (en) * 2022-05-30 2023-11-21 荣耀终端有限公司 Multi-track video editing method, graphical user interface and electronic equipment
CN116055715B (en) * 2022-05-30 2023-10-20 荣耀终端有限公司 Scheduling method of coder and decoder and electronic equipment
CN117216312B (en) * 2023-11-06 2024-01-26 长沙探月科技有限公司 Method and device for generating questioning material, electronic equipment and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104581380A (en) * 2014-12-30 2015-04-29 联想(北京)有限公司 Information processing method and mobile terminal
WO2018032921A1 (en) * 2016-08-19 2018-02-22 杭州海康威视数字技术股份有限公司 Video monitoring information generation method and device, and camera
CN111048016A (en) * 2018-10-15 2020-04-21 广东美的白色家电技术创新中心有限公司 Product display method, device and system
CN111083138A (en) * 2019-12-13 2020-04-28 北京秀眼科技有限公司 Short video production system, method, electronic device and readable storage medium

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009044423A (en) * 2007-08-08 2009-02-26 Univ Of Electro-Communications Scene detection system and scene detecting method
US20130047081A1 (en) * 2011-10-25 2013-02-21 Triparazzi, Inc. Methods and systems for creating video content on mobile devices using storyboard templates
CN107437076B (en) * 2017-08-02 2019-08-20 逄泽沐风 The method and system that scape based on video analysis does not divide
US10477177B2 (en) * 2017-12-15 2019-11-12 Intel Corporation Color parameter adjustment based on the state of scene content and global illumination changes
CN109618222B (en) * 2018-12-27 2019-11-22 北京字节跳动网络技术有限公司 A kind of splicing video generation method, device, terminal device and storage medium
CN110825912B (en) * 2019-10-30 2022-04-22 北京达佳互联信息技术有限公司 Video generation method and device, electronic equipment and storage medium
CN111541946A (en) * 2020-07-10 2020-08-14 成都品果科技有限公司 Automatic video generation method and system for resource matching based on materials

Also Published As

Publication number Publication date
CN114363527A (en) 2022-04-15
WO2022068511A1 (en) 2022-04-07

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant