CN115996274A - Video production method and electronic equipment

Info

Publication number
CN115996274A
Authority
CN
China
Prior art keywords
video
lens
camera movement
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111208990.4A
Other languages
Chinese (zh)
Inventor
徐千尧
张韵叠
苏达
徐迎庆
高佳思
周雪怡
刘雨佳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tsinghua University
Huawei Technologies Co Ltd
Original Assignee
Tsinghua University
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tsinghua University and Huawei Technologies Co Ltd
Priority to CN202111208990.4A (CN115996274A)
Priority to PCT/CN2022/115814 (WO2023065832A1)
Publication of CN115996274A
Legal status: Pending

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 25/00 Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N 25/40 Extracting pixel data from image sensors by controlling scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled
    • H04N 25/42 Extracting pixel data by switching between different modes of operation using different resolutions or aspect ratios, e.g. switching between interlaced and non-interlaced mode
    • H04N 5/00 Details of television systems
    • H04N 5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N 5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N 5/268 Signal distribution or switching

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

The application provides a video production method and an electronic device. The method is applied to the electronic device and includes: acquiring video material, the video material comprising at least a first video and a second video; determining a transition mode for switching from the first video to the second video according to a first camera movement in the first video and a second camera movement in the second video, where the first camera movement is the last camera movement in the first video and the second camera movement is the first camera movement in the second video; and synthesizing the first video and the second video into one composite video based on the transition mode. The scheme ensures the continuity and fluency of adjacent camera movements and improves the user experience; in addition, because the scheme does not need to be completed manually by the user, it simplifies operation, shortens production time, and reduces cost.

Description

Video production method and electronic equipment
Technical Field
The present disclosure relates to the field of video production, and in particular, to a method for producing video and an electronic device.
Background
With the advent of the 5G era, video content has become increasingly popular with consumers. At present, videos are mainly produced in two ways. One uses templates, but the result is relatively fixed and stuttering may occur, which degrades the user experience. The other is manual editing, but the operations are complex, time-consuming, and costly.
Disclosure of Invention
The video production method and electronic device provided by the present application can ensure the continuity and fluency of adjacent camera movements, improve the user experience, simplify operation, shorten production time, and reduce cost.
In a first aspect, a video production method is provided. The method is applied to an electronic device and includes: acquiring video material, the video material comprising at least a first video and a second video;
determining a transition mode for switching from the first video to the second video according to a first camera movement in the first video and a second camera movement in the second video, where the first camera movement is the last camera movement in the first video and the second camera movement is the first camera movement in the second video;
and synthesizing the first video and the second video into one composite video based on the transition mode, where the first video is adjacent to the second video in the composite video and the first video precedes the second video.
According to this scheme, the transition mode for switching from the first video to the second video is determined according to the types of the first camera movement in the first video and the second camera movement in the second video, where the first camera movement is the last camera movement in the first video and the second camera movement is the first camera movement in the second video. This ensures the continuity and fluency of the adjacent camera movements and improves the user experience. In addition, because the scheme does not need to be completed manually by the user, it simplifies operation, shortens production time, and reduces cost.
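To make the data flow concrete before the individual rules are discussed, the following minimal Python sketch illustrates the first aspect. It is an illustration only: the Clip type and every function name are assumptions, not the patented implementation.

    from dataclasses import dataclass, field

    @dataclass
    class Clip:
        frames: list                                    # decoded frames of one video
        movements: list = field(default_factory=list)   # ordered camera movements, e.g. ["push_in", "pan_left"]

    def synthesize(first: Clip, second: Clip) -> list:
        """Join two clips with a transition chosen from the boundary movements."""
        last_move = first.movements[-1]     # last camera movement of the first video
        first_move = second.movements[0]    # first camera movement of the second video
        transition = choose_transition(last_move, first_move)
        return first.frames + render_transition(transition) + second.frames

    def choose_transition(first_move: str, second_move: str):
        # Decision rules; a fuller sketch appears after the rules below.
        return ("follow_direction", first_move)

    def render_transition(transition) -> list:
        return []   # placeholder: produce the frames that realize the transition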
With reference to the first aspect, in some possible implementations, determining the transition mode for switching from the first video to the second video according to the first camera movement in the first video and the second camera movement in the second video includes:
if the types of the first and second camera movements are the same, determining that the transition for switching from the first video to the second video follows the same direction as the first or second camera movement.
According to this scheme, if the types of the first camera movement in the first video and the second camera movement in the second video are the same, the transition for switching from the first video to the second video is determined to follow the same direction as the first or second camera movement, which ensures the continuity and fluency of the adjacent camera movements and thus improves the user experience.
With reference to the first aspect, in some possible implementations, determining the transition mode for switching from the first video to the second video according to the first camera movement in the first video and the second camera movement in the second video includes:
if the types of the first and second camera movements are opposite, determining that the transition for switching from the first video to the second video plays the first or second camera movement in reverse; or alternatively,
if the types of the first and second camera movements are opposite, determining that the transition for switching from the first video to the second video uses a speed-ramped (slow-motion) connection.
According to this scheme, if the types of the first camera movement in the first video and the second camera movement in the second video are opposite, the transition for switching from the first video to the second video is determined to play the first or second camera movement in reverse, or to use a speed-ramped (slow-motion) connection, which ensures the continuity and fluency of the adjacent camera movements and thus improves the user experience.
With reference to the first aspect, in some possible implementations, determining the transition mode for switching from the first video to the second video according to the first camera movement in the first video and the second camera movement in the second video includes:
if the types of the first and second camera movements are in different, non-opposite directions, determining that the transition for switching from the first video to the second video follows the direction of a third camera movement, where the third camera movement is the second camera movement, or whichever of the first and second camera movements changes more intensely.
According to this scheme, if the first camera movement in the first video and the second camera movement in the second video are not in the same direction, the transition for switching from the first video to the second video is determined to follow the direction of the second camera movement, or the direction of whichever of the two camera movements changes more intensely, which ensures the continuity and fluency of the adjacent camera movements and thus improves the user experience.
With reference to the first aspect, in some possible implementations, determining the transition mode for switching from the first video to the second video according to the first camera movement in the first video and the second camera movement in the second video includes:
if the first or second camera movement is stationary, determining that the transition for switching from the first video to the second video follows the direction of a fourth camera movement, where the fourth camera movement is whichever of the first and second camera movements is not stationary; or alternatively,
if the first or second camera movement is stationary, determining that the transition for switching from the first video to the second video uses a speed-ramped (slow-motion) connection.
According to this scheme, if the first camera movement in the first video or the second camera movement in the second video is stationary, the transition for switching from the first video to the second video is determined to follow the direction of the non-stationary camera movement, or to use a speed-ramped (slow-motion) connection, which ensures the continuity and fluency of the adjacent camera movements and thus improves the user experience.
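Taken together, the four rules above form a single decision procedure. The Python sketch below is a hedged illustration of that procedure: the movement label set, direction vectors, intensity scores, and function names are assumptions for illustration, not the patent's implementation.

    # Direction of each movement type as a rough (x, y, z) vector; z is scene depth.
    DIRECTION = {
        "push_in": (0, 0, 1), "pull_out": (0, 0, -1),
        "pan_left": (-1, 0, 0), "pan_right": (1, 0, 0),
        "truck_left": (-1, 0, 0), "truck_right": (1, 0, 0),
        "crane_up": (0, 1, 0), "crane_down": (0, -1, 0),
        "static": (0, 0, 0),
    }

    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))

    def choose_transition(first_move, second_move, intensity=None):
        """Return a (kind, direction) pair for the cut between two movements."""
        if "static" in (first_move, second_move):
            moving = second_move if first_move == "static" else first_move
            if moving == "static":
                return ("speed_ramp", None)             # both static: fall back to a ramp
            return ("follow_direction", moving)         # follow the non-static movement
        if first_move == second_move:
            return ("follow_direction", first_move)     # same type: keep the direction
        if dot(DIRECTION[first_move], DIRECTION[second_move]) < 0:
            return ("reverse_playback", second_move)    # opposite types; a speed ramp also works
        # Different, non-opposite directions: follow the second movement, or the
        # movement that changes more intensely when intensity scores are available.
        if intensity and intensity.get(first_move, 0) > intensity.get(second_move, 0):
            return ("follow_direction", first_move)
        return ("follow_direction", second_move)

With these labels, a truck-left followed by a truck-left keeps the leftward direction across the cut, while a pull-out followed by a push-in is bridged by reverse playback or a speed ramp.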
With reference to the first aspect, in some possible implementations, the method further includes:
matching music of a corresponding style according to the camera movements in the composite video.
According to this scheme, the style corresponding to the composite video is determined according to the camera movements in the composite video, and music of that style is associated with it, which further improves the user experience.
With reference to the first aspect, in some possible implementations, matching music of a corresponding style according to the camera movements in the composite video includes:
dividing the camera movements included in the composite video into n groups in temporal order, where n is an integer greater than or equal to 2;
for the i-th style among m styles, calculating the proportion of the target movement type in each of the n groups, where the target movement type is the movement type corresponding to the i-th style and 1 ≤ i ≤ m;
calculating the sum of the proportions of the target movement type across the groups to obtain a value corresponding to the i-th style;
and matching the music corresponding to the style with the highest value among the m styles to the composite video.
According to this scheme, for the i-th style among m styles, the sum of the proportions of the target movement type across the n groups of camera movements is calculated, the music corresponding to the style with the highest value is matched to the composite video, and music of that style is associated with it, which further improves the user experience.
With reference to the first aspect, in some possible implementations, the method further includes:
assigning corresponding weights to the proportions, where the 1st group and the n-th group among the n groups of camera movements are weighted higher than the remaining groups;
calculating the sum of the proportions of the target movement type across the groups then includes:
calculating the sum of the products of each group's proportion of the target movement type and its corresponding weight.
According to this scheme, because the opening and closing of a video influence its mood more than the middle does, the 1st and n-th groups among the n groups of camera movements are given higher weights, the sum of the products of each group's proportion of the target movement type and its weight is calculated, the music corresponding to the style with the highest value is matched to the composite video, and music of that style is associated with it, which further improves the user experience.
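As a hedged sketch of this weighted matching, the code below assumes each style maps to a single target movement type and that the two endpoint groups get double weight; the labels, weights, and function name are illustrative assumptions, not the patent's implementation.

    def match_style(groups, styles, weights=None):
        """groups: n ordered lists of movement labels (n >= 2);
        styles: dict mapping a style name to its target movement type.
        Returns the style with the highest weighted proportion sum."""
        n = len(groups)
        if weights is None:
            # The opening and closing groups shape the mood most, so weight them higher.
            weights = [2.0 if k in (0, n - 1) else 1.0 for k in range(n)]
        best, best_score = None, float("-inf")
        for style, target in styles.items():
            score = sum(
                w * group.count(target) / len(group)    # proportion of the target type
                for w, group in zip(weights, groups)
            )
            if score > best_score:
                best, best_score = style, score
        return best

    # Example with two groups of movements and two candidate styles:
    groups = [["truck_left", "truck_left", "pan_left"], ["truck_left", "push_in"]]
    styles = {"lazy_afternoon": "truck_left", "serene_distance": "pull_out"}
    print(match_style(groups, styles))    # -> "lazy_afternoon"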
In a second aspect, an apparatus is provided, the apparatus being included in an electronic device, the apparatus having functionality to implement the above aspect and possible implementations of the above aspect. The functions may be realized by hardware, or may be realized by hardware executing corresponding software. The hardware or software includes one or more modules or units corresponding to the functions described above.
In a third aspect, an electronic device is provided, comprising: one or more processors; a memory; one or more applications; and one or more computer programs. Wherein one or more computer programs are stored in the memory, the one or more computer programs comprising instructions. The instructions, when executed by an electronic device, cause the electronic device to perform the method in any of the possible implementations of the first aspect described above.
In a fourth aspect, there is provided a system on a chip comprising at least one processor, wherein program instructions, when executed in the at least one processor, cause the functions of the method of any one of the possible implementations of the first aspect to be implemented on the electronic device.
In a fifth aspect, there is provided a computer storage medium comprising computer instructions which, when run on an electronic device, cause the electronic device to perform the method of any one of the possible implementations of the first aspect.
In a sixth aspect, a computer program product is provided which, when run on an electronic device, causes the electronic device to perform the method in any one of the possible designs of the first aspect.
Drawings
Fig. 1 is a schematic hardware structure of an electronic device according to an embodiment of the present application.
Fig. 2 is a schematic software structure of an electronic device according to an embodiment of the present application.
Fig. 3A is a schematic diagram of a push-in shot according to an embodiment of the present application.
Fig. 3B is a schematic diagram of a pull-out shot according to an embodiment of the present application.
Fig. 4A is a schematic diagram of a panning shot according to an embodiment of the present application.
Fig. 4B is a schematic diagram of a panning shot according to an embodiment of the present application.
Fig. 5A is a schematic diagram of a truck-left shot according to an embodiment of the present application.
Fig. 5B is a schematic diagram of a truck-right shot according to an embodiment of the present application.
Fig. 6A is a schematic diagram of a descending (crane-down) shot according to an embodiment of the present application.
Fig. 6B is a schematic diagram of an ascending (crane-up) shot according to an embodiment of the present application.
Fig. 7 is a schematic diagram of a follow shot according to an embodiment of the present application.
Fig. 8A to 8J are schematic diagrams of a set of GUIs provided in an embodiment of the present application.
Fig. 9A to 9J are schematic diagrams of another set of GUIs provided in an embodiment of the present application.
Fig. 10 is a schematic diagram of a method for producing video according to an embodiment of the present application.
Fig. 11 is a schematic block diagram of an electronic device provided in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the drawings. In the description of the embodiments, unless otherwise indicated, "/" means "or"; for example, A/B may represent A or B. "And/or" herein merely describes an association between objects, indicating that three relationships may exist; for example, "A and/or B" may mean: A exists alone, both A and B exist, or B exists alone. In addition, in the description of the embodiments, "plurality" means two or more.
The terms "first" and "second" are used below for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more such feature.
The video production method provided by the embodiments of the present application can be applied to electronic devices such as mobile phones, tablet computers, cameras, wearable devices, vehicle-mounted devices, augmented reality (AR)/virtual reality (VR) devices, notebook computers, ultra-mobile personal computers (UMPC), netbooks, and personal digital assistants (PDA); the embodiments of the present application do not limit the specific type of the electronic device.
By way of example, fig. 1 shows a schematic diagram of an electronic device 100. The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (universal serial bus, USB) interface 130, a charge management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, keys 190, a motor 191, an indicator 192, a camera 193, a display 194, and a subscriber identity module (subscriber identification module, SIM) card interface 195, etc. The sensor module 180 may include a pressure sensor 180A, a gyro sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It is to be understood that the structure illustrated in the embodiments of the present application does not constitute a specific limitation on the electronic device 100. In other embodiments of the present application, electronic device 100 may include more or fewer components than shown, or certain components may be combined, or certain components may be split, or different arrangements of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units, such as: the processor 110 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a memory, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural network processor (neural-network processing unit, NPU), etc. Wherein the different processing units may be separate devices or may be integrated in one or more processors. The determining of the transition mode and matching of the music corresponding to the synthetic video in the embodiment of the application can be implemented by the processor 110.
The controller may be a neural hub and a command center of the electronic device 100, among others. The controller can generate operation control signals according to the instruction operation codes and the time sequence signals to finish the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that the processor 110 has just used or recycled. If the processor 110 needs to reuse the instruction or data, it can be called directly from the memory. Repeated accesses are avoided and the latency of the processor 110 is reduced, thereby improving the efficiency of the system.
In some embodiments, the processor 110 may include one or more interfaces. The interfaces may include an integrated circuit (inter-integrated circuit, I2C) interface, an integrated circuit built-in audio (inter-integrated circuit sound, I2S) interface, a pulse code modulation (pulse code modulation, PCM) interface, a universal asynchronous receiver transmitter (universal asynchronous receiver/transmitter, UART) interface, a mobile industry processor interface (mobile industry processor interface, MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (subscriber identity module, SIM) interface, and/or a universal serial bus (universal serial bus, USB) interface, among others.
It should be understood that the interfacing relationship between the modules illustrated in the embodiments of the present application is only illustrative, and does not limit the structure of the electronic device 100. In other embodiments of the present application, the electronic device 100 may also use different interfacing manners, or a combination of multiple interfacing manners in the foregoing embodiments.
The charge management module 140 is configured to receive a charge input from a charger. The charger can be a wireless charger or a wired charger. In some wired charging embodiments, the charge management module 140 may receive a charging input of a wired charger through the USB interface 130. In some wireless charging embodiments, the charge management module 140 may receive wireless charging input through a wireless charging coil of the electronic device 100. The charging management module 140 may also supply power to the electronic device through the power management module 141 while charging the battery 142.
The power management module 141 is used for connecting the battery 142, and the charge management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140 and provides power to the processor 110, the internal memory 121, the external memory, the display 194, the camera 193, the wireless communication module 160, and the like. The power management module 141 may also be configured to monitor battery capacity, battery cycle number, battery health (leakage, impedance) and other parameters. In other embodiments, the power management module 141 may also be provided in the processor 110. In other embodiments, the power management module 141 and the charge management module 140 may be disposed in the same device.
The wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The mobile communication module 150 may provide a solution for wireless communication including 2G/3G/4G/5G, etc., applied to the electronic device 100. The mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (low noise amplifier, LNA), etc. The mobile communication module 150 may receive electromagnetic waves from the antenna 1, perform processes such as filtering, amplifying, and the like on the received electromagnetic waves, and transmit the processed electromagnetic waves to the modem processor for demodulation. The mobile communication module 150 can amplify the signal modulated by the modem processor, and convert the signal into electromagnetic waves through the antenna 1 to radiate. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the processor 110. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be provided in the same device as at least some of the modules of the processor 110.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating the low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then transmits the demodulated low frequency baseband signal to the baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs sound signals through an audio device (not limited to the speaker 170A, the receiver 170B, etc.), or displays images or video through the display screen 194. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be provided in the same device as the mobile communication module 150 or other functional module, independent of the processor 110.
The wireless communication module 160 may provide solutions for wireless communication including wireless local area network (wireless local area networks, WLAN) (e.g., wireless fidelity (wireless fidelity, wi-Fi) network), bluetooth (BT), global navigation satellite system (global navigation satellite system, GNSS), frequency modulation (frequency modulation, FM), near field wireless communication technology (near field communication, NFC), infrared technology (IR), etc., as applied to the electronic device 100. The wireless communication module 160 may be one or more devices that integrate at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, modulates the electromagnetic wave signals, filters the electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, frequency modulate it, amplify it, and convert it to electromagnetic waves for radiation via the antenna 2.
In some embodiments, antenna 1 and mobile communication module 150 of electronic device 100 are coupled, and antenna 2 and wireless communication module 160 are coupled, such that electronic device 100 may communicate with a network and other devices through wireless communication techniques. The wireless communication techniques may include the Global System for Mobile communications (global system for mobile communications, GSM), general packet radio service (general packet radio service, GPRS), code division multiple access (code division multiple access, CDMA), wideband code division multiple access (wideband code division multiple access, WCDMA), time division code division multiple access (time-division code division multiple access, TD-SCDMA), long term evolution (long term evolution, LTE), BT, GNSS, WLAN, NFC, FM, and/or IR techniques, among others. The GNSS may include a global satellite positioning system (global positioning system, GPS), a global navigation satellite system (global navigation satellite system, GLONASS), a beidou satellite navigation system (beidou navigation satellite system, BDS), a quasi zenith satellite system (quasi-zenith satellite system, QZSS) and/or a satellite based augmentation system (satellite based augmentation systems, SBAS).
The electronic device 100 implements display functions through a GPU, a display screen 194, an application processor, and the like. The GPU is a microprocessor for image processing, and is connected to the display 194 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 194 is used to display images, videos, and the like. The display 194 includes a display panel. The display panel may employ a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like. In some embodiments, the electronic device 100 may include 1 or N display screens 194, N being a positive integer greater than 1.
The electronic device 100 may implement photographing functions through an ISP, a camera 193, a video codec, a GPU, a display screen 194, an application processor, and the like.
The ISP is used to process data fed back by the camera 193. For example, when photographing, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electric signal, and the camera photosensitive element transmits the electric signal to the ISP for processing and is converted into an image visible to naked eyes. ISP can also optimize the noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in the camera 193.
The camera 193 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image onto the photosensitive element. The photosensitive element may be a charge coupled device (charge coupled device, CCD) or a Complementary Metal Oxide Semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, which is then transferred to the ISP to be converted into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard RGB, YUV, or the like format. In some embodiments, electronic device 100 may include 1 or N cameras 193, N being a positive integer greater than 1.
The digital signal processor is used for processing digital signals, and can process other digital signals besides digital image signals. For example, when the electronic device 100 selects a frequency bin, the digital signal processor is used to fourier transform the frequency bin energy, or the like.
Video codecs are used to compress or decompress digital video. The electronic device 100 may support one or more video codecs. In this way, the electronic device 100 may play or record video in a variety of encoding formats, such as moving picture experts group (MPEG)-1, MPEG-2, MPEG-3, MPEG-4, etc.
The NPU is a neural-network (NN) computing processor, and can rapidly process input information by referencing a biological neural network structure, for example, referencing a transmission mode between human brain neurons, and can also continuously perform self-learning. Applications such as intelligent awareness of the electronic device 100 may be implemented through the NPU, for example: image recognition, face recognition, speech recognition, text understanding, etc.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to enable expansion of the memory capabilities of the electronic device 100. The external memory card communicates with the processor 110 through an external memory interface 120 to implement data storage functions. For example, files such as music, video, etc. are stored in an external memory card.
The internal memory 121 may be used to store computer executable program code including instructions. The processor 110 executes various functional applications of the electronic device 100 and data processing by executing instructions stored in the internal memory 121. The internal memory 121 may include a storage program area and a storage data area. The storage program area may store an application program (such as a sound playing function, an image playing function, etc.) required for at least one function of the operating system, etc. The storage data area may store data created during use of the electronic device 100 (e.g., audio data, phonebook, etc.), and so on. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (universal flash storage, UFS), and the like.
The electronic device 100 may implement audio functions through an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, an application processor, and the like. Such as music playing, recording, etc.
The audio module 170 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. The audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be disposed in the processor 110, or a portion of the functional modules of the audio module 170 may be disposed in the processor 110.
The speaker 170A, also referred to as a "horn," is used to convert audio electrical signals into sound signals. The electronic device 100 may listen to music, or to hands-free conversations, through the speaker 170A.
A receiver 170B, also referred to as a "earpiece", is used to convert the audio electrical signal into a sound signal. When electronic device 100 is answering a telephone call or voice message, voice may be received by placing receiver 170B in close proximity to the human ear.
Microphone 170C, also referred to as a "mic" or "mike", is used to convert sound signals into electrical signals. When making a call or sending voice information, the user can speak close to the microphone 170C to input a sound signal. The electronic device 100 may be provided with at least one microphone 170C. In other embodiments, the electronic device 100 may be provided with two microphones 170C, which can implement a noise reduction function in addition to collecting sound signals. In still other embodiments, the electronic device 100 may be provided with three, four, or more microphones 170C to implement sound signal collection, noise reduction, sound source identification, directional recording, and the like.
The earphone interface 170D is used to connect a wired earphone. The earphone interface 170D may be the USB interface 130, a 3.5 mm open mobile terminal platform (OMTP) standard interface, or a Cellular Telecommunications Industry Association of the USA (CTIA) standard interface.
The motor 191 may generate a vibration cue. The motor 191 may be used for incoming call vibration alerting as well as for touch vibration feedback. For example, touch operations acting on different applications (e.g., photographing, audio playing, etc.) may correspond to different vibration feedback effects. The motor 191 may also correspond to different vibration feedback effects by touching different areas of the display screen 194. Different application scenarios (such as time reminding, receiving information, alarm clock, game, etc.) can also correspond to different vibration feedback effects. The touch vibration feedback effect may also support customization.
The indicator 192 may be an indicator light, may be used to indicate a state of charge, a change in charge, a message indicating a missed call, a notification, etc.
The software system of the electronic device 100 may employ a layered architecture, an event driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture. In this embodiment, taking an Android system with a layered architecture as an example, a software structure of the electronic device 100 is illustrated.
Fig. 2 is a software structure block diagram of the electronic device 100 according to the embodiment of the present application. The layered architecture divides the software into several layers, each with a clear role and division of labor. The layers communicate with each other through software interfaces. In some embodiments, the Android system is divided into four layers, from top to bottom: the application layer, the application framework layer, the Android runtime and system libraries, and the kernel layer. The application layer may include a series of application packages.
As shown in fig. 2, the application package may include applications for cameras, gallery, calendar, phone calls, map, navigation, WLAN, bluetooth, music, video, short messages, first application, etc.
The application framework layer provides an application programming interface (application programming interface, API) and programming framework for application programs of the application layer. The application framework layer includes a number of predefined functions.
As shown in FIG. 2, the application framework layer may include a window manager, a content provider, a view system, a telephony manager, a resource manager, a notification manager, and the like.
The window manager is used for managing window programs. The window manager can obtain the size of the display screen, determine whether there is a status bar, lock the screen, take screenshots, and the like.
The content provider is used to store and retrieve data and make such data accessible to applications. The data may include video, images, audio, calls made and received, browsing history and bookmarks, phonebooks, etc.
The view system includes visual controls, such as controls to display text, controls to display pictures, and the like. The view system may be used to build applications. The display interface may be composed of one or more views. For example, a display interface including a text message notification icon may include a view displaying text and a view displaying a picture.
The telephony manager is used to provide the communication functions of the electronic device 100. Such as the management of call status (including on, hung-up, etc.).
The resource manager provides various resources for the application program, such as localization strings, icons, pictures, layout files, video files, and the like.
The notification manager allows an application to display notification information in the status bar. It can be used to convey notification-type messages that disappear automatically after a short stay without user interaction, for example to notify that a download is complete or to deliver a message alert. The notification manager may also present notifications in the form of a chart or scroll-bar text in the status bar at the top of the system screen, such as notifications from applications running in the background, or notifications in the form of a dialog window on the screen. For example, a text message may be prompted in the status bar, a prompt tone may sound, the electronic device may vibrate, or an indicator light may blink.
The application layer and the application framework layer run in a virtual machine. The virtual machine executes java files of the application program layer and the application program framework layer as binary files. The virtual machine is used for executing the functions of object life cycle management, stack management, thread management, security and exception management, garbage collection and the like.
The system library may include a plurality of functional modules. For example: surface manager (surface manager), media library (media library), three-dimensional graphics processing library (e.g., openGL ES), 2D graphics engine (e.g., SGL), etc.
The surface manager is used to manage the display subsystem and provides a fusion of 2D and 3D layers for multiple applications.
Media libraries support a variety of commonly used audio, video format playback and recording, still image files, and the like. The media library may support a variety of audio and video encoding formats, such as MPEG4, h.264, MP3, AAC, AMR, JPG, PNG, etc.
The three-dimensional graphic processing library is used for realizing three-dimensional graphic drawing, image rendering, synthesis, layer processing and the like.
The 2D graphics engine is a drawing engine for 2D drawing.
The method provided by the embodiments of the present application works on the basis of identical or different camera movement types. The camera movement types mainly include push-in, pull-out, pan, truck, crane up, crane down, and follow.
For ease of understanding, the camera movement types involved in the embodiments of the present application are described below. Fig. 3A is a schematic view of a push-in shot, and fig. 3B is a schematic view of a pull-out shot; fig. 4A and fig. 4B are schematic views of panning shots in opposite directions; fig. 5A is a schematic view of a truck-left shot, and fig. 5B is a schematic view of a truck-right shot; fig. 6A is a schematic view of a descending (crane-down) shot, and fig. 6B is a schematic view of an ascending (crane-up) shot; fig. 7 is a schematic view of a follow shot.
Referring to fig. 3A, the picture shows a cow, which is the subject. From (a) to (b) in fig. 3A, the area the cow occupies in the picture gradually grows, i.e., the framing moves from a long shot toward a medium shot, so this group of frames can be regarded as a push-in. Referring to fig. 3B, from (a) to (b) in fig. 3B, the area the cow occupies in the picture gradually shrinks, so this group of frames can be regarded as a pull-out.
Referring to fig. 4A, the picture shows a cow, which is the subject, with grass and a tree nearby. From (a) to (b) in fig. 4A, the cow moves rightward relative to the picture, and the relative positions and relative sizes of the objects in the picture (the cow, grass, and tree) change, so this group of frames can be regarded as a pan. Referring to fig. 4B, from (a) to (b) in fig. 4B, the cow moves leftward relative to the picture while the relative positions and sizes of the objects change, so this group of frames can likewise be regarded as a pan.
Referring to fig. 5A, the picture shows a cow gazing into the distance, which is the subject, with grass and trees to the side. From (a) to (b) in fig. 5A, the cow moves rightward relative to the picture while the relative positions and relative sizes of the objects in the picture (the cow, grass, and trees) do not change, so this group of frames can be regarded as a truck-left. Referring to fig. 5B, from (a) to (b) in fig. 5B, the cow moves leftward relative to the picture while the relative positions and sizes of the objects do not change, so this group of frames can be regarded as a truck-right.
Referring to fig. 6A, the picture shows a basketball stand, which is the subject. From (a) to (b) in fig. 6A, the basketball stand moves upward relative to the picture, so this group of frames can be regarded as a crane-down. Referring to fig. 6B, from (a) to (b) in fig. 6B, the basketball stand moves downward relative to the picture, so this group of frames can be regarded as a crane-up.
Referring to fig. 7, the picture shows the back of a running girl, who is the subject, and trees. From (a) to (b) in fig. 7, the girl runs toward the trees while her angle and size in the picture remain substantially unchanged, and the area the trees occupy in the picture gradually grows; that is, the camera follows the girl's movement, so this group of frames can be regarded as a follow shot.
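The figure descriptions above amount to a rough recipe for labelling a group of frames by how the tracked subject behaves. The Python sketch below merely illustrates that recipe; the thresholds, sign conventions, and function name are assumptions, not the patent's actual detector.

    def classify_movement(dx, dy, dscale, layout_fixed=True):
        """dx: subject's rightward displacement across the clip (fraction of frame width);
        dy: subject's upward displacement (fraction of frame height);
        dscale: change in the subject's share of the frame area;
        layout_fixed: whether relative positions/sizes of scene objects hold."""
        if abs(dscale) > 0.1 and abs(dx) < 0.05 and abs(dy) < 0.05:
            return "push_in" if dscale > 0 else "pull_out"      # Figs. 3A / 3B
        if abs(dx) > 0.1:
            if layout_fixed:                                    # Figs. 5A / 5B
                return "truck_left" if dx > 0 else "truck_right"
            return "pan"                                        # Figs. 4A / 4B
        if abs(dy) > 0.1:
            return "crane_down" if dy > 0 else "crane_up"       # Figs. 6A / 6B
        return "follow"    # Fig. 7: subject steady while the background changes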
The following embodiments of the present application will take an electronic device having a structure shown in fig. 1 and fig. 2 as an example, and specifically describe a method for producing video provided in the embodiments of the present application in conjunction with the accompanying drawings and application scenarios.
Fig. 8A to 8J illustrate a set of graphical user interfaces (graphical user interface, GUI) of the mobile phone, showing how a first application in the mobile phone implements the video production method.
Referring to the GUI shown in fig. 8A, the GUI is a desktop of the mobile phone. When the mobile phone detects that the user clicks on the icon 801 of the first application on the desktop, the first application may be started, and a GUI, which may be referred to as a video production preparation interface, is displayed as shown in fig. 8B.
Referring to the GUI shown in fig. 8B, the interface displays a video production module and popular materials of various styles. The user can select corresponding videos from the popular materials and then click the video production icon, or click the video production icon directly. When the mobile phone detects that the user clicks the icon 802 for video production, a GUI as shown in fig. 8C may be displayed.
Referring to the GUI shown in fig. 8C, the interface displays videos and pictures stored locally on the mobile phone. The numbers in brackets after "video" or "picture" at the bottom of the interface represent the number of videos or pictures selected by the user. When the mobile phone detects that the user clicks the icon 803 of video 1 (assume video 1 includes a truck-left movement) and the icon 804 of video 2 (assume video 2 also includes a truck-left movement), a GUI as shown in fig. 8D may be displayed.
Referring to the GUI shown in fig. 8D, the number in brackets after "video" at the bottom of the interface is now 2, indicating that the user has selected 2 videos. When the mobile phone detects that the user clicks the icon 805 to start editing, a GUI as shown in fig. 8E may be displayed.
Referring to the GUI shown in fig. 8E, the interface displays the processed video, with the corresponding time progress bar displayed below it. When the mobile phone detects that the user clicks the play icon 806, a GUI as shown in fig. 8F may be displayed.
Referring to the GUI shown in fig. 8F, when the time progress bar shows that the current playing time is the 1st second, the picture shows a cow gazing into the distance and a tree beside it. The video continues to play, and when the time progress bar shows that the current playing time is the 4th second, the GUI shown in fig. 8G may be displayed.
Referring to the GUI shown in fig. 8G, the current playing time shown is the 4th second. The picture shows the cow gazing into the distance, and the cow's position in the picture has shifted rightward compared with its position in the GUI shown in fig. 8F, which indicates that the camera movement in video 1 is a truck-left. The video continues to play, and when the time progress bar shows that the current playing time is the 5th second, a GUI as shown in fig. 8H may be displayed.
Referring to the GUI shown in fig. 8H, the time progress bar shows that the current playing time is the 5th second. The picture now shows both the cow gazing into the distance and a basketball stand (the basketball stand belongs to the footage of video 2). This can be understood as a transition picture, that is, footage from video 1 and video 2 is displayed at the same time. During the transition, as the cow in video 1 shifts rightward relative to the picture, the basketball stand in video 2 enters from left to right, forming an overall consistency of motion that improves the user's sensory experience. The video continues to play, and when the time progress bar shows that the current playing time is the 8th second, a GUI as shown in fig. 8I may be displayed.
Referring to the GUI shown in fig. 8I, the time progress bar shows that the current playing time is the 8th second, and the picture shows the basketball stand, which is footage from video 2. Since video 1 and video 2 both contain truck-left movements, as the video plays, the cow gradually fades out of the picture and the basketball stand gradually shifts rightward. The video continues to play, and when the time progress bar shows that the current playing time is the 11th second, a GUI as shown in fig. 8J may be displayed.
Referring to the GUI shown in fig. 8J, the time progress bar shows that the current playing time is the 11th second. The picture still shows the basketball stand, and its position in the picture has shifted further rightward compared with its position in the GUI shown in fig. 8I.
In summary, the camera movements in video 1 and video 2 are both truck-lefts. The emotional intensity of such movements is mild and easily evokes calm, so they suit music in a lazy-afternoon style. Therefore, music of a lazy-afternoon style can be adaptively added while processing the video.
Fig. 9A to 9J illustrate another set of GUIs of the mobile phone, showing how the first application in the mobile phone implements the video production method.
Referring to the GUI shown in fig. 9A, the GUI is a desktop of the mobile phone. When the mobile phone detects that the user clicks on the icon 901 of the first application on the desktop, the first application may be started, and a GUI as shown in fig. 9B, which may be referred to as a video production preparation interface, is displayed.
Referring to the GUI shown in fig. 9B, the interface displays a video production module and popular materials of various styles. The user can select corresponding videos from the popular materials and then click the video production icon, or click the video production icon directly. When the mobile phone detects that the user clicks the icon 902 for video production, a GUI as shown in fig. 9C may be displayed.
Referring to the GUI shown in fig. 9C, the interface displays videos and pictures stored locally on the mobile phone. The numbers in brackets after "video" or "picture" at the bottom of the interface represent the number of videos or pictures selected by the user. When the mobile phone detects that the user clicks the icon 903 of video 3 (assume video 3 includes a pull-out movement) and the icon 904 of video 4 (assume video 4 also includes a pull-out movement), a GUI as shown in fig. 9D may be displayed.
Referring to the GUI shown in fig. 9D, the number in brackets after "video" at the bottom of the interface is now 2, indicating that the user has selected 2 videos. When the mobile phone detects that the user clicks the icon 905 to start editing, a GUI as shown in fig. 9E may be displayed.
Referring to the GUI shown in fig. 9E, the interface displays the processed video, with the corresponding time progress bar displayed below it. When the mobile phone detects that the user clicks the play icon 906, a GUI as shown in fig. 9F may be displayed.
Referring to the GUI shown in fig. 9F, when the time progress bar shows that the current playing time is the 1st second, the picture shows a cow gazing into the distance. The video continues to play, and when the time progress bar shows that the current playing time is the 3rd second, the GUI shown in fig. 9G may be displayed.
Referring to the GUI shown in fig. 9G, the time progress bar shows that the current playing time is the 3rd second. The picture still shows the cow gazing into the distance, but the proportion of the picture the cow occupies has decreased compared with the GUI shown in fig. 9F, which indicates that the camera movement in video 3 is a pull-out. The video continues to play, and when the time progress bar shows that the current playing time is the 4th second, a GUI as shown in fig. 9H may be displayed.
Referring to the GUI shown in fig. 9H, the time progress bar shows that the current playing time is the 4th second. The picture now shows both the cow gazing into the distance and a tree (the tree belongs to the footage of video 4). This can be understood as a transition picture, that is, footage from video 3 and video 4 is displayed at the same time. During the transition, as the cow in video 3 shrinks relative to the picture, the tree in video 4 enters from far to near, forming an overall consistency of motion that improves the user's sensory experience. The video continues to play, and when the time progress bar shows that the current playing time is the 7th second, a GUI as shown in fig. 9I may be displayed.
Referring to the GUI shown in fig. 9I, the time progress bar shows that the current playing time is the 7th second, and the picture shows a tree, which is footage from video 4. Since video 3 and video 4 both contain pull-out movements, as the video plays, the cow and the tree gradually shrink, and by this moment the cow has completely faded out of the picture. The video continues to play, and when the time progress bar shows that the current playing time is the 9th second, a GUI as shown in fig. 9J may be displayed.
Referring to the GUI shown in fig. 9J, the time progress bar shows that the current playing time is the 9th second. The picture still shows the tree, and the proportion of the picture the tree occupies has decreased further compared with the GUI shown in fig. 9I.
In summary, the camera movements in video 3 and video 4 are both pull-out movements. A pull-out movement produces strong emotional fluctuation and, when the speed does not change rapidly, carries a calm attribute; continuous pull-out movements create a feeling of quiet remoteness, and are therefore suited to music of a distant style. Accordingly, music of a distant style can be adaptively added when processing the video.
Based on this, the foregoing figs. 8A to 8J and figs. 9A to 9J describe the video production method of the embodiments of the present application by taking pan (left-shift) movements and pull-out movements as examples, respectively. It is understood that the camera movements usable in producing a video are not limited to those illustrated, and may also include push-in movements, pan movements, follow movements, and the like, which are not described in detail herein. It should also be appreciated that, in some embodiments, videos may also be produced based on other camera-movement types, without limitation.
The following describes, with reference to fig. 10, the internal implementation process and the judgment logic for realizing video production in the embodiments of the present application. Fig. 10 is a schematic diagram of a video production method 1000 according to an embodiment of the present application.
S1010, obtaining video materials.
The video material in the embodiments of the present application may be video stored locally on the mobile phone, or video downloaded from a network, which is not limited.
The video material in the embodiments of the present application may include at least 2 videos, and each of the at least 2 videos may include 1 or more camera movements.
Illustratively, if the video material includes 2 videos, each of the 2 videos may include 1 camera movement, and the 2 movements may be of the same or different types. For example, the 2 videos may each include the same type of movement; alternatively, one video may include one type of movement and the other video a different type.

Illustratively, if the video material includes 2 videos, each of the 2 videos may include 2 camera movements, and the movements included may be the same or different. For example, the 2 videos may include the same pair of movements; alternatively, one of the 2 videos may include a push-in movement and a pull-out movement, while the other video includes 2 other movements.
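To make the data flow concrete, the video material above can be modeled with a small data structure. The following Python sketch is illustrative only; the type names (CameraMove, Video) and the movement vocabulary are assumptions introduced here, not identifiers from the embodiment.

from dataclasses import dataclass
from enum import Enum
from typing import List

class CameraMove(Enum):
    # Hypothetical vocabulary covering the movement types discussed above.
    PUSH = "push"            # push-in
    PULL = "pull"            # pull-out
    PAN_LEFT = "pan_left"    # left shift
    PAN_RIGHT = "pan_right"  # right shift
    RISE = "rise"
    DESCEND = "descend"
    FOLLOW = "follow"
    STATIC = "static"

@dataclass
class Video:
    name: str
    moves: List[CameraMove]  # camera movements in playback order

# Example material mirroring the video 3 / video 4 walkthrough above:
# two clips, each containing one pull-out movement.
material = [Video("video 3", [CameraMove.PULL]),
            Video("video 4", [CameraMove.PULL])]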
S1020, determining a transition mode for switching between videos according to the camera-movement types of the videos selected from the video material.
If 2 videos are selected from the video material in the embodiments of the present application, the 2 videos may be video 1 and video 2 in fig. 8C described above; alternatively, the 2 videos may be video 3 and video 4 in fig. 9C described above.
After videos are selected from the video material, the transition mode for switching between them can be determined based on the types of camera movement they include. Specifically, the transition may be determined according to the following rules.
It is understood that a selected video may include 1 or more camera movements. If each selected video includes 1 movement, the transition mode is determined according to the single movement of the previous video and the single movement of the next video; if a selected video includes multiple movements, the transition mode is determined according to the last movement of the previous video and the first movement of the next video.
Thus, the two movements "before and after" described in the rules below are, respectively, the last movement of the previous video and the first movement of the next video.
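In code, reading off these two boundary movements is a one-line helper; the sketch below reuses the hypothetical Video type from the sketch above.

def boundary_moves(prev: "Video", nxt: "Video"):
    # The transition is decided by the last movement of the preceding
    # video and the first movement of the following video.
    return prev.moves[-1], nxt.moves[0]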
Rule one:
if the movements before and after are of the same type, the transition keeps the same direction of motion, so that an overall consistency of movement is formed.
For example, as shown in figs. 8A to 8J, the movements before and after are both left-shift movements, so objects in the picture move rightward; the direction in which the transition enters is therefore opposite to the direction in which the lens moves, i.e., it enters from left to right.
For another example, as shown in figs. 9A to 9J, the movements before and after are both pull-out movements, so the user's visual impression is that objects in the picture shrink; the direction in which the transition enters is opposite to the direction in which the lens moves, i.e., it enters from far to near.
Rule II:
if the movements before and after are opposite, two transition methods may be used: one is to play one of the two movements in reverse; the other is to use a directionless transition (such as a blur or a fade) with the playback speed varying near the cut, forming a speed-ramp connection that either produces a brief pause in the middle or softens the momentum on both sides.
For example, assume the previous movement is a left shift and the next is a right shift. One method is to play the right-shift clip in reverse during the transition, so that the final playback effect is one complete, smooth left shift; alternatively, the left-shift clip may be played in reverse, so that the final effect is one complete, smooth right shift.
Another method is to select a directionless transition: for example, the playback speed can be slowed toward the end of the preceding left-shift clip, the following right-shift clip then starts playing slowly, and the speed is gradually increased until playback has fully settled into the following clip.
Rule III:
if the movements before and after are not along the same axis, the transition may follow the direction of either movement, preferentially that of the later movement or of whichever movement is faster and more intense.
For example, assume the previous movement is a pull-out and the next is a left shift. The transition direction may preferentially be opposite to the direction of the left shift, i.e., entering from left to right; if the previous pull-out is faster and more intense, the transition direction may instead be opposite to the direction of the pull-out, i.e., entering from far to near.
In some embodiments, if the two movements are of comparable speed, the transition direction may be opposite to the direction of either movement, without limitation.
Rule IV:
if one of the two movements is static, the transition direction may follow the direction of the moving one, or a directionless transition may be used with the speed varying near the cut to form a speed-ramp connection (also referred to as a speed-ramp cut or speed-ramp join).
For example, assume the previous movement is static and the next is a left shift. The transition direction may preferentially be opposite to the direction of the following left shift, i.e., entering from left to right; alternatively, a directionless transition may be selected, for example playing slowly upon entering the following left-shift clip and gradually accelerating until playback has fully settled into it.
For another example, assume the previous movement is a left shift and the next is static. The transition direction may preferentially be opposite to the direction of the preceding left shift, i.e., entering from left to right; alternatively, a directionless transition may be selected, for example slowing the playback speed toward the end of the preceding left shift until the speed reaches 0.
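The speed-ramp connection in rules two and four can be viewed as a playback-rate curve that dips toward the cut and recovers afterwards. The following is a minimal sketch assuming a linear ramp; the window length and minimum rate are illustrative parameters, not values given in the embodiment.

def ramp_rate(t: float, cut_time: float,
              window: float = 1.0, min_rate: float = 0.25) -> float:
    """Playback-rate multiplier around a cut at cut_time.

    Outside `window` seconds of the cut the clip plays at normal speed
    (1.0); inside the window the rate ramps linearly down to min_rate
    at the cut itself, producing the pause-like effect described above.
    """
    distance = abs(t - cut_time)
    if distance >= window:
        return 1.0
    return min_rate + (1.0 - min_rate) * (distance / window)

Setting min_rate to 0 reproduces the rule-four variant above in which the preceding pan slows until the speed reaches 0.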
The four rules described above can be summarized in tabular form, as shown in Table 1.
TABLE 1
(Table 1 appears as an image in the original publication; its contents are described in the paragraphs below.)
Referring to Table 1 above, the row heading indicates the previous movement, the column heading indicates the next movement, and each cell gives the corresponding transition mode.
Specifically, if the previous and next movements are both push-ins, the transition mode between the two may be a zoom-in, i.e., the new material may enter from near to far.
If the previous and next movements are both pull-outs, the transition mode may be a zoom-out, i.e., the new material may enter from far to near.
If the previous movement is a pull-out and the next is a push-in, the transition mode between the two may be a speed-ramp connection: the playback speed is slowed toward the end of the previous clip, the next clip starts playing slowly, and the speed is gradually increased until playback has fully settled into the next clip.
If the previous movement is a push-in and the next is a pan, rise/descend, or follow movement, the transition mode is reverse entry, i.e., the transition direction is opposite to the direction in which that pan, rise/descend, or follow lens moves.
If the previous and next movements are both pan or rise/descend movements, two cases arise. If they move in the same direction, the following material enters moving opposite to the direction of the next movement: for example, if both are left shifts, the transition enters from left to right. If they move in opposite directions, one clip is reversed so that the directions match: for example, if the previous movement is a left shift and the next a right shift, the right-shift clip may be played in reverse during the transition, so the final playback effect is one complete, smooth left shift; alternatively, the left-shift clip may be played in reverse, yielding one complete, smooth right shift.
If the previous movement is a pan, rise/descend, or follow movement and the next is static, the transition mode is that the preceding material exits in reverse: for example, if the previous movement is a left shift and the next is a static shot, the transition direction may preferentially be opposite to the direction of the preceding left shift, i.e., entering from left to right.
If the previous and next movements are both static, the transition mode between the two is a cut, i.e., a direct switch.
For the transition modes of other movement pairs, reference may be made to the above examples, which are not repeated here.
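The rules restated from Table 1 can be encoded as a lookup on the (previous movement, next movement) pair. The sketch below covers only the cells spelled out in the paragraphs above and falls back to a directionless transition for other pairs; both the encoding and the fallback are assumptions for illustration, not the complete table of the original publication.

PANS = {"pan_left", "pan_right"}

def pick_transition(prev_move: str, next_move: str) -> str:
    """Partial, illustrative encoding of the Table 1 rules above."""
    if prev_move == "push" and next_move == "push":
        return "zoom-in entry (near to far)"
    if prev_move == "pull" and next_move == "pull":
        return "zoom-out entry (far to near)"
    if prev_move == "pull" and next_move == "push":
        return "speed-ramp connection"
    if prev_move == "push" and next_move in PANS:
        return "enter opposite to the pan direction"
    if prev_move in PANS and next_move in PANS:
        if prev_move == next_move:        # same direction
            return "rear material enters moving in reverse"
        return "reverse one clip so both move in the same direction"
    if prev_move in PANS and next_move == "static":
        return "front material exits in reverse"
    if prev_move == "static" and next_move == "static":
        return "cut (direct switch)"
    # Pairs not described above: assume a directionless transition.
    return "directionless transition (fade or blur)"

A production implementation would also account for movement speed and intensity (rule three), which this sketch omits.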
S1021, synthesizing the selected videos into one composite video based on the transition mode.
In the embodiments of the present application, in the case where 2 videos are selected, the 2 videos are synthesized into one composite video based on the transition mode determined in step S1020.
Alternatively, the user may select two or more videos to be composed into one composite video in a given order. During synthesis, for any two adjacent videos, the method provided in the embodiments of the present application may be used to determine the transition mode between them, as shown in the sketch below.
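For an ordered list of clips this amounts to one decision per adjacent pair, as in the following sketch (reusing the Video type, boundary_moves, and pick_transition helpers from the earlier sketches):

def plan_composition(videos):
    # Collect one transition decision per adjacent pair of clips; an
    # actual implementation would render the clips and effects, which
    # is outside the scope of this illustrative sketch.
    plan = []
    for prev, nxt in zip(videos, videos[1:]):
        last_move, first_move = boundary_moves(prev, nxt)
        plan.append((prev.name, nxt.name,
                     pick_transition(last_move.value, first_move.value)))
    return plan

# For the material above (two pull-out clips), this yields a zoom-out
# entry between video 3 and video 4, as in the fig. 9 walkthrough.
print(plan_composition(material))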
According to the solution provided in the embodiments of the present application, the transition mode for switching between videos is determined according to the camera-movement types of the videos, and the videos are synthesized into one composite video based on that transition mode. This ensures the consistency and fluency of the movements before and after each cut and improves the user experience. In addition, the solution does not need to be completed manually by the user, which simplifies operation, shortens production time, and reduces cost.
Further optionally, in some embodiments, the method 1000 may further comprise the steps of:
S1030, dividing the shots of the composite video into four groups in order, and, when they cannot be divided equally, preferentially assigning the extra shots to the middle two groups.
The composite video in the embodiments of the present application may include multiple shots (which may be understood as the camera movements referred to above), such as 2, 3, 10, or 12. If the shots can be divided equally, for example if the composite video includes 12 shots, they may be sequentially divided into 4 groups of 3 shots each.
If the shots cannot be divided equally, for example if the composite video includes 14 shots, the 1st to 3rd shots and the 12th to 14th shots may form group 1 and group 4, and the middle 8 shots may be divided into groups 2 and 3: for example, the 4th to 7th shots into group 2 and the 8th to 11th shots into group 3.
If the composite video includes 9 shots, the 1st to 3rd shots and the 6th to 9th shots may form group 1 and group 4, with the 4th to 6th shots copied and used as group 2 and group 3 respectively. Alternatively, the 1st to 2nd shots and the 8th to 9th shots may form group 1 and group 4, and the 3rd to 7th shots may be divided into groups 2 and 3: for example, the 3rd to 5th shots into group 2 and the 6th to 7th shots into group 3, or the 3rd to 4th shots into group 2 and the 5th to 7th shots into group 3. One possible grouping policy is sketched below.
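One grouping policy consistent with the 14-shot example above (and with the second 9-shot alternative) is to give each group the same base size and hand any remainder to the middle groups first; the policy choice for remainders is one of several consistent with the examples and is an assumption here.

def split_into_groups(shots, n_groups=4):
    # Equal base size per group; remaining shots go to the groups
    # closest to the middle first, matching the preference above.
    base, rem = divmod(len(shots), n_groups)
    sizes = [base] * n_groups
    middle_first = sorted(range(n_groups),
                          key=lambda i: abs(i - (n_groups - 1) / 2))
    for i in middle_first[:rem]:
        sizes[i] += 1
    groups, start = [], 0
    for size in sizes:
        groups.append(shots[start:start + size])
        start += size
    return groups

# 14 shots -> group sizes [3, 4, 4, 3]; 9 shots -> [2, 3, 2, 2].
assert [len(g) for g in split_into_groups(list(range(14)))] == [3, 4, 4, 3]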
S1040, calculating, for each style, the proportion of the corresponding target movement type within each of the 4 groups of shots.
The styles in the embodiments of the present application may include a variety of styles, such as jump, distant, dream, and fun. To facilitate understanding of what follows, the moods and styles corresponding to different camera movements are first briefly described.
Push-in: the emotional fluctuation produced is strong, and the emotional attribute is active; continuous or even all-push-in movement is the most active and gives a jumping feeling.
Pull-out: the emotional fluctuation produced is strong and, when the speed does not change rapidly, the emotional attribute is calm. Continuous or even all-pull-out movement can create a feeling of quiet remoteness. A fast pull-out may create a slightly dynamic feeling.
Rise (tilt/crane up): the emotional attribute of the fluctuation produced is floating or swaying, which may evoke a sense of hope and incompleteness, and easily calls sweet moods to mind.
Descend (tilt/crane down): the emotional fluctuation produced gives a heavy feeling; this movement is often placed at the end, lending a thick, weighty sense of history.
Follow, pan, and truck movements: the emotion produced is weak in intensity, giving variously a floating, an active, or a calm feeling depending on the movement. A truck-truck combination, for example, has weak emotional intensity and is mostly used for lazy afternoon footage; a pan-truck-pan-truck combination is also weak in intensity but more active, and can be used for some food scenes.
In addition, the emotional intensity or type expressed by the whole movement combination of a video is the combination of all its movements, and the head and tail movements influence the emotion more than the middle ones. For example, in a rise-truck-push combination, the rise contributes a floating feeling and the push a strongly active feeling, so the combination may ultimately create a neutral, sweet feeling.
In addition, the emotional fluctuation produced by combining push and pull movements is strong; push-pull is the most emotionally intense combination overall, different numbers of pushes and pulls yield different intensities, and an even balance of the two can produce a strong sense of space. For example, a push-pull combination can be matched with very active music to represent a city; a follow-pan-pull-push combination may produce a sense of rhythm; and a follow-rise-pull-push combination may have a sporty feel.
Table 2 shows the correspondence between the different styles and movement combinations.
TABLE 2
(Table 2 appears as an image in the original publication; for each style under the different subjects it lists the target movement type assigned to each of the 4 shot groups.)
Table 2 above shows, under different subjects, the 4 groups of shots corresponding to each style. The following takes as examples the case where the composite video is divided equally into 4 groups of shots and the case where it is not.
1) Assume a composite video is divided equally into 4 groups of shots, the 4 groups being push-pull (i.e., push, pull, and push), push-pull, pull-follow, and rise-pan, respectively.

For the jump style:

group 1 is push-pull; matched against the push entry for shot group 1 in the table, the proportion of this group is 0.667;

group 2 is push-pull; matched against the push entry for shot group 2 in the table, the proportion of this group is 0.333;

group 3 is pull-follow; matched against the push entry for shot group 3 in the table, the proportion of this group is 0;

group 4 is rise-pan; matched against the push entry for shot group 4 in the table, the proportion of this group is 0.

For the distant style:

group 1 is push-pull; matched against the pull entry for shot group 1 in the table, the proportion of this group is 0.333;

group 2 is push-pull; matched against the pull entry for shot group 2 in the table, the proportion of this group is 0.333;

group 3 is pull-follow; matched against the pull entry for shot group 3 in the table, the proportion of this group is 0.667;

group 4 is rise-pan; matched against the entry for shot group 4 in the table, the proportion of this group is 0.

For the dream style:

group 1 is push-pull; matched against the follow entry for shot group 1 in the table, the proportion of this group is 0;

group 2 is push-pull; matched against the pan entry for shot group 2 in the table, the proportion of this group is 0;

group 3 is pull-follow; matched against the pan entry for shot group 3 in the table, the proportion of this group is 0;

group 4 is rise-pan; matched against the entry for shot group 4 in the table, the proportion of this group is 0.667.
Similarly, for the other styles in the table, the proportion of each group of shots can be calculated according to the method described above, which is not detailed here.
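The proportion computed in step S1040 is simply the fraction of shots in a group whose movement type equals the style's target type for that group position. A minimal sketch follows, assuming Table 2 is available as a mapping from style to four target types; the mapping below covers only the three styles worked through above, and the group-4 targets are assumptions chosen to be consistent with the stated proportions.

# Hypothetical stand-in for Table 2 (style -> target movement type of
# each of the four shot groups); an assumption for illustration only.
STYLE_TARGETS = {
    "jump":    ["push", "push", "push", "push"],
    "distant": ["pull", "pull", "pull", "pull"],
    "dream":   ["follow", "pan", "pan", "rise"],
}

def group_proportions(groups, style):
    # Fraction of shots in each group matching that group's target
    # movement type under the given style (step S1040).
    targets = STYLE_TARGETS[style]
    return [sum(1 for move in group if move == target) / len(group)
            for group, target in zip(groups, targets)]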
2) Assume a composite video is divided into 4 unequal groups of shots, the 4 groups being push-pull, push-pull, pull-follow-descend, and rise-pan, respectively.

For the jump style:

group 1 is push-pull; matched against the push entry for shot group 1 in the table, the proportion of this group is 0.667;

group 2 is push-pull; matched against the push entry for shot group 2 in the table, the proportion of this group is 0.500;

group 3 is pull-follow-descend; matched against the push entry for shot group 3 in the table, the proportion of this group is 0;

group 4 is rise-pan; matched against the push entry for shot group 4 in the table, the proportion of this group is 0.

For the distant style:

group 1 is push-pull; matched against the pull entry for shot group 1 in the table, the proportion of this group is 0.333;

group 2 is push-pull; matched against the pull entry for shot group 2 in the table, the proportion of this group is 0.250;

group 3 is pull-follow-descend; matched against the pull entry for shot group 3 in the table, the proportion of this group is 0.667;

group 4 is rise-pan; matched against the entry for shot group 4 in the table, the proportion of this group is 0.

For the dream style:

group 1 is push-pull; matched against the follow entry for shot group 1 in the table, the proportion of this group is 0;

group 2 is push-pull; matched against the pan entry for shot group 2 in the table, the proportion of this group is 0;

group 3 is pull-follow-descend; matched against the entry for shot group 3 in the table, the proportion of this group is 0;

group 4 is rise-pan; matched against the entry for shot group 4 in the table, the proportion of this group is 0.667.
Similarly, for the other styles in the table, the proportion of each group of shots can be calculated according to the method described above, which is not detailed here.
3) Assume a composite video is divided into 3 groups of shots, the 3 groups being push-pull, push-pull, and rise-pan. The middle push-pull group may be copied and used as both group 2 and group 3.

For the jump style:

group 1 is push-pull; matched against the push entry for shot group 1 in the table, the proportion of this group is 0.667;

group 2 is push-pull; matched against the push entry for shot group 2 in the table, the proportion of this group is 0.333;

group 3 is push-pull; matched against the push entry for shot group 3 in the table, the proportion of this group is 0.333;

group 4 is rise-pan; matched against the push entry for shot group 4 in the table, the proportion of this group is 0.

For the distant style:

group 1 is push-pull; matched against the pull entry for shot group 1 in the table, the proportion of this group is 0.333;

group 2 is push-pull; matched against the pull entry for shot group 2 in the table, the proportion of this group is 0.333;

group 3 is push-pull; matched against the pull entry for shot group 3 in the table, the proportion of this group is 0.333;

group 4 is rise-pan; matched against the entry for shot group 4 in the table, the proportion of this group is 0.

For the dream style:

group 1 is push-pull; matched against the follow entry for shot group 1 in the table, the proportion of this group is 0;

group 2 is push-pull; matched against the pan entry for shot group 2 in the table, the proportion of this group is 0;

group 3 is push-pull; matched against the pan entry for shot group 3 in the table, the proportion of this group is 0;

group 4 is rise-pan; matched against the entry for shot group 4 in the table, the proportion of this group is 0.667.
Similarly, for the other styles in the table, the proportion of each group of shots can be calculated according to the method described above, which is not detailed here.
4) Assume a composite video is divided into 2 groups of shots, the 2 groups being push-pull and rise-pan. These 2 groups may be treated as group 1 and group 4, respectively.

For the jump style:

group 1 is push-pull; matched against the push entry for shot group 1 in the table, the proportion of this group is 0.667;

group 4 is rise-pan; matched against the push entry for shot group 4 in the table, the proportion of this group is 0.

For the distant style:

group 1 is push-pull; matched against the pull entry for shot group 1 in the table, the proportion of this group is 0.333;

group 4 is rise-pan; matched against the entry for shot group 4 in the table, the proportion of this group is 0.

For the dream style:

group 1 is push-pull; matched against the follow entry for shot group 1 in the table, the proportion of this group is 0;

group 4 is rise-pan; matched against the entry for shot group 4 in the table, the proportion of this group is 0.667.
Similarly, for the other styles in the table, the proportion of each group of shots can be calculated according to the method described above, which is not detailed here.
S1050, for each style, summing the proportions of the 4 groups of shots with certain weights to obtain the total score of the style.
S1060, matching the music corresponding to the style with the highest total score.
In the embodiments of the present application, the weights of the proportions for groups 1 and 4 are higher than those for groups 2 and 3. For example, the weights for groups 1 and 4 may both be 0.3 and the weights for groups 2 and 3 may both be 0.2; alternatively, the weights for groups 1 and 4 may be 0.3 and 0.4 respectively, and the weights for groups 2 and 3 may be 0.2 and 0.1 respectively; this is not limited.
In the following, the weights of the proportions for groups 1 and 4 are taken as 0.3 and the weights for groups 2 and 3 as 0.2, and the total score of each style is calculated and matched accordingly.
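Steps S1050 and S1060 then reduce to a weighted sum per style followed by an argmax over styles. A sketch with the 0.3/0.2/0.2/0.3 weights of this example, reusing STYLE_TARGETS and group_proportions from the sketch above; the concrete shot lists below are assumptions constructed to reproduce the proportions of example 1) that follows.

WEIGHTS = [0.3, 0.2, 0.2, 0.3]  # groups 1 and 4 weigh more (this example)

def style_scores(groups):
    # Weighted sum of the per-group proportions for every style.
    return {style: sum(w * p for w, p in
                       zip(WEIGHTS, group_proportions(groups, style)))
            for style in STYLE_TARGETS}

def best_style(groups):
    scores = style_scores(groups)
    return max(scores, key=scores.get), scores  # style whose music to use

# Hypothetical shot groups reproducing the proportions of example 1):
groups = [
    ["push", "push", "pull"],    # group 1: push 0.667, pull 0.333
    ["push", "pull", "follow"],  # group 2: push 0.333, pull 0.333
    ["pull", "pull", "follow"],  # group 3: pull 0.667
    ["rise", "rise", "pan"],     # group 4: rise 0.667
]
print(best_style(groups))  # -> 'distant' scores highest, as in example 1)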
1) Assume a composite video is divided equally into 4 groups of shots, the 4 groups being push-pull, push-pull, pull-follow, and rise-pan, respectively.

For the jump style, based on the proportions calculated in step S1040 above, the total score under this style is: 0.3×0.667 + 0.2×0.333 + 0.2×0 + 0.3×0 = 0.2667;

for the distant style, based on the proportions calculated in step S1040 above, the total score under this style is: 0.3×0.333 + 0.2×0.333 + 0.2×0.667 + 0.3×0 = 0.2999;

for the dream style, based on the proportions calculated in step S1040 above, the total score under this style is: 0.3×0 + 0.2×0 + 0.2×0 + 0.3×0.667 = 0.2001;

for the other styles, the total score of each style can be calculated in the same way, and the music corresponding to the style with the highest total score is matched to the video. Assuming the distant style has the highest total score, the music corresponding to the distant style can be matched to the video.
2) Assume a composite video is divided into 4 unequal groups of shots, the 4 groups being push-pull, push-pull, pull-follow-descend, and rise-pan, respectively.

For the jump style, based on the proportions calculated in step S1040 above, the total score under this style is: 0.3×0.667 + 0.2×0.500 + 0.2×0 + 0.3×0 = 0.3001;

for the distant style, based on the proportions calculated in step S1040 above, the total score under this style is: 0.3×0.333 + 0.2×0.250 + 0.2×0.667 + 0.3×0 = 0.2833;

for the dream style, based on the proportions calculated in step S1040 above, the total score under this style is: 0.3×0 + 0.2×0 + 0.2×0 + 0.3×0.667 = 0.2001;

for the other styles, the total score of each style can be calculated in the same way, and the music corresponding to the style with the highest total score is matched to the video. Assuming the jump style has the highest total score, the music corresponding to the jump style can be matched to the video.
3) Assume a composite video is divided into 3 groups of shots, the 3 groups being push-pull, push-pull, and rise-pan.

For the jump style, based on the proportions calculated in step S1040 above, the total score under this style is: 0.3×0.667 + 0.2×0.333 + 0.2×0.333 + 0.3×0 = 0.3333;

for the distant style, based on the proportions calculated in step S1040 above, the total score under this style is: 0.3×0.333 + 0.2×0.333 + 0.2×0.333 + 0.3×0 = 0.2331;

for the dream style, based on the proportions calculated in step S1040 above, the total score under this style is: 0.3×0 + 0.2×0 + 0.2×0 + 0.3×0.667 = 0.2001;

for the other styles, the total score of each style can be calculated in the same way, and the music corresponding to the style with the highest total score is matched to the video. Assuming the time style has the highest total score, the music corresponding to the time style can be matched to the video.
4) Assume a composite video is divided into 2 groups of shots, the 2 groups being push-pull and rise-pan.

For the jump style, based on the proportions calculated in step S1040 above, the total score under this style is: 0.3×0.667 + 0.2×0 + 0.2×0 + 0.3×0 = 0.2001;

for the distant style, based on the proportions calculated in step S1040 above, the total score under this style is: 0.3×0.333 + 0.2×0 + 0.2×0 + 0.3×0 = 0.0999;

for the dream style, based on the proportions calculated in step S1040 above, the total score under this style is: 0.3×0 + 0.2×0 + 0.2×0 + 0.3×0.667 = 0.2001;

for the other styles, the total score of each style can be calculated in the same way, and the music corresponding to the style with the highest total score is matched to the video. Assuming the city style has the highest total score, the music corresponding to the city style can be matched to the video.
It should be understood that the values shown in the above embodiments are merely illustrative; other values are also possible, and the embodiments of the present application are not limited in this respect.
According to the solution provided in the embodiments of the present application, the style of the composite video is determined according to its combination of camera movements, and music corresponding to that style is associated with it, which can further improve the user experience.
It will be appreciated that, to achieve the above functions, the electronic device includes corresponding hardware and/or software modules for performing each function. The steps of the algorithms of the examples described in connection with the embodiments disclosed herein can be implemented in hardware or in a combination of hardware and computer software. Whether a function is performed by hardware or by computer-software-driven hardware depends on the particular application and the design constraints of the technical solution. Those skilled in the art may use different methods to implement the described functions for each particular application in combination with the embodiments, but such implementations should not be considered beyond the scope of the embodiments of the present application.
In this embodiment, the electronic device may be divided into functional modules according to the above method examples. For example, each functional module may be divided to correspond to one function, or two or more functions may be integrated into one processing module. The integrated module may be implemented in hardware. It should be noted that the division of modules in this embodiment is schematic and is merely a logical function division; other division manners may be used in actual implementation.
In the case where each functional module is divided according to its corresponding function, the electronic device involved in the above embodiments may include an acquisition module, a determination module, and a synthesis module.
Wherein the acquisition module may be used to support the electronic device to perform step S1010, etc. described above, and/or other processes for the techniques described herein.
The determination module may be used to support the electronic device to perform step S1020, etc., described above, and/or other processes for the techniques described herein.
The composition module may be used to support the electronic device to perform step S1021, etc., described above, and/or other processes for the techniques described herein.
It should be noted that, for all relevant details of the steps in the above method embodiments, reference may be made to the functional descriptions of the corresponding functional modules, which are not repeated here.
The electronic device provided in this embodiment is configured to perform the above video production method, and can therefore achieve the same effects as the above implementations.
In the case where an integrated unit is employed, the electronic device may include a processing module, a storage module, and a communication module. The processing module may be used to control and manage the actions of the electronic device, for example to support the electronic device in performing the steps performed by the foregoing units. The storage module may be used to support the electronic device in storing program code, data, and the like. The communication module may be used to support communication between the electronic device and other devices.
The processing module may be a processor or a controller, which may implement or execute the various exemplary logical blocks, modules, and circuits described in connection with the disclosure of the embodiments of the present application. The processor may also be a combination that performs computing functions, for example a combination of one or more microprocessors, or a combination of a digital signal processor (DSP) and a microprocessor. The storage module may be a memory. The communication module may be a device that interacts with other electronic devices, such as a radio-frequency circuit, a Bluetooth chip, or a Wi-Fi chip.
In one embodiment, when the processing module is a processor and the storage module is a memory, the electronic device according to this embodiment may be a device having the structure shown in fig. 1.
Fig. 11 shows another possible composition diagram of an electronic device 800 according to the above embodiment, as shown in fig. 11, the electronic device 800 may include a communication unit 810, an input unit 820, a processing unit 830, an output unit (or may also be referred to as a display unit) 840, a peripheral interface 850, a storage unit 860, a power supply 870, a video decoder 880, and an audio decoder 890.
The communication unit 810 is configured to establish a communication channel through which the electronic device 800 connects to a remote server and downloads media data. The communication unit 810 may include communication modules such as a WLAN module, a Bluetooth module, an NFC module, and a baseband module, together with the radio frequency (RF) circuits corresponding to those modules, for wireless local area network communication, Bluetooth communication, NFC communication, infrared communication, and/or cellular communication such as wideband code division multiple access (W-CDMA) and/or high-speed downlink packet access (HSDPA). The communication unit 810 is used to control the communication of the components in the electronic device and may support direct memory access.
The input unit 820 may be used to enable user interaction with the electronic device and/or information input into it. In a specific embodiment of the present invention, the input unit may be a touch panel, or another human-machine interface such as physical input keys or a microphone, or another external information capture device such as a camera.
The processing unit 830 is the control center of the electronic device. It may connect the various parts of the entire electronic device using various interfaces and lines, and performs the various functions of the electronic device and/or processes data by running or executing the software programs and/or modules stored in the storage unit and invoking the data stored in the storage unit.
The output unit 840 includes, but is not limited to, an image output unit and a sound output unit. The image output unit is used to output text, pictures, and/or video. In an embodiment of the present invention, the touch panel used by the input unit 820 may also serve as the display panel of the output unit 840. For example, when the touch panel detects a touch or proximity gesture on it, the gesture is transmitted to the processing unit to determine the type of touch event, after which the processing unit provides the corresponding visual output on the display panel according to that type. Although in fig. 11 the input unit 820 and the output unit 840 implement the input and output functions of the electronic device as two independent components, in some embodiments the touch panel may be integrated with the display panel to implement both functions. For example, the image output unit may display various graphical user interfaces as virtual control components, including but not limited to windows, scroll bars, icons, and clipboards, for the user to operate by touch.
The storage unit 860 may be used to store software programs and modules. The processing unit executes the software programs and modules stored in the storage unit, thereby performing the various functional applications of the electronic device and realizing data processing.
The present embodiment also provides a computer storage medium having stored therein computer instructions which, when executed on an electronic device, cause the electronic device to perform the above-described related method steps to implement the method in the above-described embodiments.
The present embodiment also provides a computer program product which, when run on a computer, causes the computer to perform the above-mentioned related steps to implement the method in the above-mentioned embodiments.
In addition, embodiments of the present application also provide an apparatus, which may be specifically a chip, a component, or a module, and may include a processor and a memory connected to each other; the memory is configured to store computer-executable instructions, and when the device is operated, the processor may execute the computer-executable instructions stored in the memory, so that the chip performs the methods in the above method embodiments.
The electronic device, the computer storage medium, the computer program product, or the chip provided in this embodiment are used to execute the corresponding methods provided above, so that the beneficial effects thereof can be referred to the beneficial effects in the corresponding methods provided above, and will not be described herein.
It will be appreciated by those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional modules is illustrated, and in practical application, the above-described functional allocation may be performed by different functional modules according to needs, i.e. the internal structure of the apparatus is divided into different functional modules to perform all or part of the functions described above.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of modules or units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another apparatus, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and the parts shown as units may be one physical unit or a plurality of physical units, may be located in one place, or may be distributed in a plurality of different places. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a readable storage medium. Based on such an understanding, the technical solution of the embodiments of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to perform all or part of the steps of the methods of the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The foregoing is merely specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily think about changes or substitutions within the technical scope of the present application, and the changes and substitutions are intended to be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (12)

1. A video production method, wherein the method is applied to an electronic device and comprises the following steps:
acquiring video material, wherein the video material comprises at least a first video and a second video;
determining a transition mode for switching from the first video to the second video according to a first camera movement in the first video and a second camera movement in the second video, wherein the first camera movement is the last camera movement in the first video, and the second camera movement is the first camera movement in the second video;
and synthesizing the first video and the second video into one composite video based on the transition mode, wherein the first video is adjacent to the second video in the composite video, and the first video precedes the second video.
2. The method of claim 1, wherein the determining a transition mode for switching from the first video to the second video according to a first camera movement in the first video and a second camera movement in the second video comprises:
if the types of the first camera movement and the second camera movement are the same, determining that the transition for switching from the first video to the second video keeps the same direction of movement as the first camera movement or the second camera movement.
3. The method of claim 1, wherein the determining a transition mode for switching from the first video to the second video according to a first camera movement in the first video and a second camera movement in the second video comprises:
if the types of the first camera movement and the second camera movement are opposite, determining that the transition mode for switching from the first video to the second video is to play the first camera movement or the second camera movement in reverse; or,

if the types of the first camera movement and the second camera movement are opposite, determining that the transition mode for switching from the first video to the second video is a mode using a speed-ramp connection.
4. The method of claim 1, wherein the determining a transition mode for switching from the first video to the second video according to a first camera movement in the first video and a second camera movement in the second video comprises:
if the types of the first camera movement and the second camera movement are not along the same direction, determining that the transition mode for switching from the first video to the second video follows the direction of a third camera movement, wherein the third camera movement is the second camera movement, or whichever of the first camera movement and the second camera movement changes more intensely.
5. The method of claim 1, wherein the determining a transition mode for switching from the first video to the second video according to a first camera movement in the first video and a second camera movement in the second video comprises:
if the first camera movement or the second camera movement is static, determining that the transition mode for switching from the first video to the second video follows the direction of a fourth camera movement, wherein the fourth camera movement is the non-static one of the first camera movement and the second camera movement; or,

if the first camera movement or the second camera movement is static, determining that the transition mode for switching from the first video to the second video is a mode using a speed-ramp connection.
6. The method according to any one of claims 1 to 5, further comprising:
matching music of a corresponding style according to the camera movements in the composite video.
7. The method of claim 6, wherein the matching music of a corresponding style according to the camera movements in the composite video comprises:
dividing the camera movements included in the composite video into n groups in order, wherein n is an integer greater than or equal to 2;
for the i-th style among m styles, calculating the proportion of the target movement type within each of the n groups, wherein the target movement type is the movement type corresponding to the i-th style, and 1 ≤ i ≤ m;
calculating the sum of the proportions of the target movement type of the groups to obtain a value corresponding to the i-th style;
and matching, to the composite video, the music corresponding to the style with the highest value among the m styles.
8. The method of claim 7, wherein the method further comprises:
assigning corresponding weights to the proportions, wherein the weights of the 1st group and the n-th group among the n groups of camera movements are higher than the weights of the remaining groups;
wherein the calculating the sum of the proportions of the target movement type of the groups comprises:
calculating the sum of the products of the proportions of the target movement type of each group and the corresponding weights.
9. An electronic device, comprising:
one or more processors;
one or more memories;
the one or more memories store one or more computer programs comprising instructions that, when executed by the one or more processors, cause the electronic device to perform the method of any of claims 1-8.
10. A chip system comprising at least one processor, wherein, when program instructions are executed in the at least one processor, the functions of the method of any one of claims 1 to 8 are implemented on the electronic device.
11. A computer storage medium comprising computer instructions which, when run on an electronic device, cause the electronic device to perform the method of any one of claims 1 to 8.
12. A computer program product, characterized in that the computer program product, when run on a computer, causes the computer to perform the method according to any of claims 1 to 8.