CN113810764B - Video editing method and video editing device


Info

Publication number
CN113810764B
CN113810764B
Authority
CN
China
Prior art keywords
sub
lut
videos
video
sampling period
Prior art date
Legal status
Active
Application number
CN202110926932.9A
Other languages
Chinese (zh)
Other versions
CN113810764A (en)
Inventor
赵冠楠
Current Assignee
Honor Device Co Ltd
Original Assignee
Honor Device Co Ltd
Priority date
Filing date
Publication date
Application filed by Honor Device Co Ltd filed Critical Honor Device Co Ltd
Priority to CN202110926932.9A priority Critical patent/CN113810764B/en
Publication of CN113810764A publication Critical patent/CN113810764A/en
Priority to PCT/CN2022/092061 priority patent/WO2023015997A1/en
Application granted granted Critical
Publication of CN113810764B publication Critical patent/CN113810764B/en

Classifications

    • H04N 21/44008: Processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • H04N 21/47205: End-user interface for manipulating displayed content, e.g. interacting with MPEG-4 objects, editing locally
    • H04N 9/64: Circuits for processing colour signals (details of colour television systems)

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • Image Processing (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

The application provides a video editing method and a video editing apparatus, which help to improve the intelligent optimization effect of LUTs. The method includes the following steps: acquiring a Log file of a target video; segmenting the target video based on the Log file to obtain a plurality of sub-videos; determining picture attributes of the plurality of sub-videos, the picture attributes including scene attributes and color attributes; determining the LUTs of the plurality of sub-videos according to the picture attributes of the plurality of sub-videos and a preset mapping relationship, where the mapping relationship represents the correspondence between a plurality of picture attributes and a plurality of LUT elements; and rendering the target video based on the LUTs of the plurality of sub-videos.

Description

Video editing method and video editing device
Technical Field
The present application relates to the field of multimedia technologies, and in particular, to a video editing method and a video editing apparatus.
Background
As electronic devices become increasingly intelligent, their video and photo shooting functions make it convenient for people to record scenery and people at any time and can meet users' daily needs. To optimize the captured picture, users often want to add a look-up table (LUT) to it, and a picture processed by an LUT can bring a better visual effect.
Currently, taking a mobile phone as an example of the electronic device, after a user shoots a video with the phone, a video file in the Recommendation (Rec.) 709 standard format is obtained, and the user can manually select an LUT element for the video from a list below the gallery editing interface.
However, if the same LUT element is used for an entire video, the style of some segments may not match that of the LUT element, so the intelligent optimization effect cannot be achieved and the visual experience is affected.
Disclosure of Invention
The application provides a video editing method and a video editing apparatus, which help to improve the intelligent optimization effect of LUTs.
In a first aspect, a video editing method is provided, including: acquiring a Log file of a target video; segmenting the target video based on the Log file to obtain a plurality of sub-videos; determining picture attributes of the plurality of sub-videos, the picture attributes including scene attributes and/or color attributes; determining the LUTs of the plurality of sub-videos according to the picture attributes of the plurality of sub-videos and a preset mapping relationship, where the mapping relationship represents the correspondence between a plurality of picture attributes and a plurality of LUT elements; and rendering the target video based on the LUTs of the plurality of sub-videos.
In this embodiment of the application, the video editing apparatus can divide the target video into a plurality of sub-videos and recommend an appropriate LUT for each sub-video, which helps to improve the video optimization effect of the LUT.
With reference to the first aspect, in some implementations of the first aspect, segmenting the target video based on the Log file to obtain a plurality of sub-videos includes: sampling the Log file according to at least one sampling period, and determining a plurality of sampling frames for each sampling period of the at least one sampling period, where the plurality of sampling frames are image frames obtained by sampling the target video according to that sampling period; marking candidate segmentation symbols for the target video at each sampling period based on the plurality of sampling frames for that sampling period; and determining the segmentation symbols of the target video based on the candidate segmentation symbols of the target video at each sampling period, where the segmentation symbols divide the target video into the plurality of sub-videos.
In the embodiment of the application, the video recommendation device can sample the target video in sampling periods with different accuracies, so that the finally divided sub-videos can be better adapted to different LUTs.
With reference to the first aspect, in some implementations of the first aspect, sampling the Log file according to at least one sampling period and determining a plurality of sampling frames for each sampling period of the at least one sampling period includes: sampling the Log file according to a first sampling period of the at least one sampling period to determine a plurality of first sampling frames; calculating the degree of difference between every two adjacent first sampling frames of the plurality of first sampling frames; and marking a candidate segmentation symbol between two adjacent first sampling frames whose degree of difference is greater than or equal to a first threshold.
With reference to the first aspect, in some implementations of the first aspect, sampling the Log file according to at least one sampling period and determining a plurality of sampling frames for each sampling period of the at least one sampling period includes: sampling the Log file according to a first sampling period of the at least one sampling period to determine a plurality of head frames and a plurality of tail frames; calculating the degree of difference between adjacent head frames and tail frames of the plurality of head frames and tail frames; and marking a candidate segmentation symbol in a time period in which the degree of difference between adjacent head frames and tail frames is greater than or equal to a first threshold.
With reference to the first aspect, in certain implementations of the first aspect, determining the segmentation symbols of the target video based on the candidate segmentation symbols of the target video at each sampling period includes: determining a plurality of candidate segmentation symbols for each sampling period of the at least one sampling period, and determining coincident candidate segmentation symbols among the candidate segmentation symbols as the segmentation symbols of the target video.
With reference to the first aspect, in certain implementations of the first aspect, the at least one sampling period includes three sampling periods: a first sampling period of 1 s, a second sampling period of 100 ms, and a third sampling period of 10 ms.
With reference to the first aspect, in some implementations of the first aspect, the light sensation levels of the plurality of sub-videos are determined according to the color attributes of the plurality of sub-videos. Determining the LUTs of the plurality of sub-videos according to the picture attributes of the plurality of sub-videos and a preset mapping relationship includes: selecting, from the mapping relationship, the LUTs corresponding to the scene attributes and/or the light sensation levels of the plurality of sub-videos according to the scene attributes and/or the light sensation levels of the plurality of sub-videos.
In this embodiment of the application, the electronic device can recommend an LUT with a suitable style for each sub-video according to the preset mapping relationship between scene attributes, light sensation levels, and recommended LUTs, which is more intelligent and brings the optimization effect of the LUT into full play.
With reference to the first aspect, in certain implementations of the first aspect, after the LUTs of the plurality of sub-videos are determined, a color difference result of the LUTs of two adjacent sub-videos is determined according to the LUTs of the plurality of sub-videos. If there is a color difference result greater than or equal to a second threshold, the LUT of at least one of the two adjacent sub-videos is adjusted. Rendering the target video based on the LUTs of the plurality of sub-videos includes: rendering the target video based on the adjusted LUT, where the adjusted LUT and the LUT before adjustment belong to the same candidate LUT set.
With reference to the first aspect, in some implementations of the first aspect, determining the color difference result of the LUTs of two adjacent sub-videos according to the LUTs of the plurality of sub-videos includes: determining the color difference result of the LUTs of the two adjacent sub-videos according to the brightness, exposure, light source intensity, black-to-highlight ratio, and contrast of the LUTs of the two adjacent sub-videos.
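As a rough illustration of this consistency pass, the following Python sketch assumes a pre-computed color-difference table between LUT elements (in the spirit of the preset LUT color difference information stored in the internal memory described later) and candidate LUT sets ordered by priority. The function names and the simple "pick the next candidate that fits" strategy are assumptions made only for this sketch, not the prescribed implementation.

    from typing import Dict, List, Tuple

    def smooth_adjacent_luts(chosen: List[str],
                             candidates: List[List[str]],
                             color_diff: Dict[Tuple[str, str], float],
                             second_threshold: float) -> List[str]:
        # If the LUTs of two adjacent sub-videos differ too much in color,
        # replace the later one with another member of its own candidate LUT set.
        adjusted = list(chosen)
        for i in range(1, len(adjusted)):
            prev = adjusted[i - 1]
            if color_diff.get((prev, adjusted[i]), 0.0) >= second_threshold:
                for alt in candidates[i]:                  # same candidate set as before adjustment
                    if color_diff.get((prev, alt), 0.0) < second_threshold:
                        adjusted[i] = alt
                        break
        return adjusted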
In a second aspect, a video editing apparatus is provided, comprising units for performing the method in any one of the possible implementations of the first aspect. In particular, the apparatus comprises modules or units configured to perform the method in any one of the possible implementations of the first aspect.
In a third aspect, there is provided another video editing apparatus, including a processor, coupled to a memory, and configured to execute instructions in the memory to implement the method in any one of the possible implementations of any one of the above aspects. Optionally, the apparatus further comprises a memory. Optionally, the apparatus further comprises a communication interface, the processor being coupled to the communication interface.
In one implementation, the video editing apparatus is an electronic device. When the video editing apparatus is an electronic device, the communication interface may be a transceiver, or an input/output interface.
In another implementation, the video editing apparatus is a chip configured in an electronic device. When the video editing apparatus is a chip configured in an electronic device, the communication interface may be an input/output interface.
In a fourth aspect, a processor is provided, comprising an input circuit, an output circuit, and a processing circuit. The processing circuit is configured to receive a signal via the input circuit and transmit a signal via the output circuit, so that the processor performs the method in any one of the possible implementations of the first aspect.
In a specific implementation process, the processor may be a chip, the input circuit may be an input pin, the output circuit may be an output pin, and the processing circuit may be a transistor, a gate circuit, a flip-flop, various logic circuits, or the like. The input signal received by the input circuit may be received and input by, for example but not limited to, a receiver; the signal output by the output circuit may be output to and transmitted by, for example but not limited to, a transmitter; and the input circuit and the output circuit may be the same circuit, which functions as the input circuit and the output circuit at different times. The embodiment of the present application does not limit the specific implementation of the processor and the various circuits.
In a fifth aspect, a processing apparatus is provided that includes a processor and a memory. The processor is configured to read instructions stored in the memory, and may receive a signal via the receiver and transmit a signal via the transmitter to perform the method of any one of the possible implementations of the first aspect.
Optionally, there are one or more processors and one or more memories.
Alternatively, the memory may be integrated with the processor, or provided separately from the processor.
In a specific implementation process, the memory may be a non-transitory (non-transitory) memory, such as a Read Only Memory (ROM), which may be integrated on the same chip as the processor, or may be separately disposed on different chips, and the embodiment of the present application does not limit the type of the memory and the arrangement manner of the memory and the processor.
It will be appreciated that a related data interaction process, for example, sending indication information, may be a process of outputting the indication information from the processor, and receiving capability information may be a process of the processor receiving the input capability information. Specifically, data output by the processor may be output to a transmitter, and input data received by the processor may come from a receiver. The transmitter and the receiver may be collectively referred to as a transceiver.
The processing means in the above fifth aspect may be a chip, the processor may be implemented by hardware or may be implemented by software, and when implemented by hardware, the processor may be a logic circuit, an integrated circuit, or the like; when implemented in software, the processor may be a general-purpose processor implemented by reading software code stored in a memory, which may be integrated with the processor, located external to the processor, or stand-alone.
In a sixth aspect, there is provided a computer program product comprising: computer program (also called code, or instructions), which when executed, causes a computer to perform the method of any of the possible implementations of the first aspect described above.
In a seventh aspect, a computer-readable storage medium is provided, which stores a computer program (which may also be referred to as code or instructions) that, when executed on a computer, causes the computer to perform the method in any of the possible implementations of the first aspect.
Drawings
FIG. 1 is a schematic diagram of the styles of different LUTs;
fig. 2 is a schematic structural diagram of an electronic device to which the embodiment of the present application is applicable;
fig. 3 is a schematic flow chart of a video editing method provided by an embodiment of the present application;
fig. 4 is a schematic diagram of a partitioned video provided by an embodiment of the present application;
fig. 5 is a schematic diagram of another divided video provided by an embodiment of the present application;
fig. 6 is a schematic interface diagram of video editing according to an embodiment of the present application;
fig. 7 is a schematic flow chart of another video editing method provided by the embodiment of the present application;
fig. 8 is a schematic interface diagram of a video recording interface according to an embodiment of the present application;
fig. 9 is a software structure diagram of an electronic device according to an embodiment of the present application;
fig. 10 is a schematic flow chart of a video editing method provided by an embodiment of the present application;
fig. 11 is a schematic block diagram of a video editing apparatus provided in an embodiment of the present application;
fig. 12 is a schematic block diagram of another video editing apparatus provided in an embodiment of the present application;
fig. 13 is a schematic block diagram of still another video editing apparatus provided in an embodiment of the present application.
Detailed Description
The technical solution in the present application will be described below with reference to the accompanying drawings.
Most filters are color patterns generated from basic parameters such as exposure, contrast, and color temperature in combination with a filter algorithm, and an LUT is a color conversion template similar to a filter, for example a red green blue (RGB) mapping table. An LUT can transform the actually sampled pixel gray value into another corresponding gray value through a certain transformation (such as thresholding, inversion, contrast adjustment, or linear transformation), which serves to highlight useful information in the image and enhance its optical contrast.
An image includes a plurality of pixels, each represented by an RGB value. The display screen of the electronic device may display the image according to the RGB values of each pixel point in the image. That is, these RGB values will dictate how the display is lit to blend out various colors for presentation to the user.
The LUT is an RGB mapping table used to represent the correspondence between RGB values before and after adjustment. For example, please refer to table one, which shows an example of a LUT.
Table 1
Original RGB value        Output RGB value
(14, 22, 24)              (6, 9, 4)
(61, 34, 67)              (66, 17, 47)
(94, 14, 171)             (117, 82, 187)
(241, 216, 222)           (255, 247, 243)
When the original RGB value is (14, 22, 24), the output RGB value is (6, 9, 4) through the mapping of the LUT shown in table one. When the original RGB value is (61, 34, 67), the output RGB value is (66, 17, 47) through the mapping of the LUT shown in table one. When the original RGB value is (94, 14, 171), the output RGB value is (117, 82, 187) through the mapping of the LUT shown in table one. When the original RGB value is (241, 216, 222), the output RGB value is (255, 247, 243) through the mapping of the LUT shown in table one.
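As an informal illustration of how such a mapping acts on pixels, the following Python sketch applies the four sample entries of Table 1 to individual RGB values. A real LUT covers (or interpolates over) the full RGB cube; here, values not listed in the table are simply passed through unchanged, and the names are chosen only for this example.

    # Sketch: applying the sample LUT of Table 1 to individual RGB pixels.
    TABLE_1_LUT = {
        (14, 22, 24): (6, 9, 4),
        (61, 34, 67): (66, 17, 47),
        (94, 14, 171): (117, 82, 187),
        (241, 216, 222): (255, 247, 243),
    }

    def map_pixel(rgb):
        # Return the mapped RGB value, or the original value if it is not in the table.
        return TABLE_1_LUT.get(rgb, rgb)

    print(map_pixel((94, 14, 171)))   # -> (117, 82, 187)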
It should be noted that when different LUTs are used to process the same image, image effects of different styles can be obtained. For example, fig. 1 is a schematic diagram of the styles of different LUTs and shows the processing results of the original image 100 under three different color lookup tables: LUT 1, LUT 2, and LUT 3. As shown in fig. 1, processing the original image 100 captured by the camera using LUT 1 yields the image 101 shown in fig. 1, processing it using LUT 2 yields the image 102 shown in fig. 1, and processing it using LUT 3 yields the image 103 shown in fig. 1. Comparing the images 101, 102, and 103 shown in fig. 1, it is clear that they differ in image effect or style.
At present, after a video shot with electronic devices (such as mobile phones) of different models and lenses is converted into the Rec.709 standard format, part of the information is lost, whereas the original Log video file that has not been converted into the Rec.709 standard format has a very wide color gamut and preserves the light and colors at the time of shooting to the greatest extent.
After shooting the video and entering the gallery editing page, the user can manually select an LUT element for the video file. For a non-professional user, this selection process may be confusing, and an LUT element that is not suitable for the video content may be selected. In addition, a video may contain multiple angles and scenes; if the same LUT element is used for the entire video, some segments may not match the style of the LUT, which is unfavorable to the optimization of the video by the LUT.
In view of this, the present application provides a video editing method and a video editing apparatus. Based on the original Log video file and the actual shooting scene, the whole video can be segmented into a plurality of sub-videos, and an LUT element is recommended for each sub-video, thereby improving the intelligent optimization effect of the LUT on the video.
Before describing the video editing method provided by the embodiment of the present application, the following description is made.
First, in the embodiments shown below, terms and English abbreviations such as picture attributes, scene attributes, color attributes, and sampling frames are illustrative examples given for convenience of description and should not be construed as limiting the present application in any way. This application does not exclude the possibility that other terms may be defined in existing or future protocols to perform the same or similar functions.
Second, the first, second and various numerical numbers in the embodiments shown below are merely for convenience of description and are not intended to limit the scope of the embodiments of the present application. E.g., to distinguish between different sampled frames, to distinguish between different thresholds, etc.
Third, "at least one" means one or more, "a plurality" means two or more. "and/or" describes the association relationship of the associated object, indicating that there may be three relationships, for example, a and/or B, which may indicate: a exists alone, A and B exist simultaneously, and B exists alone, wherein A and B can be singular or plural. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship. "at least one of the following" or similar expressions refer to any combination of these items, including any combination of the singular or plural items. For example, at least one (one) of a, b, and c, may represent: a, or b, or c, or a and b, or a and c, or b and c, or a, b and c, wherein a, b and c can be single or multiple.
Fig. 2 is a schematic structural diagram of an electronic device to which the embodiment of the present application is applicable. As shown in fig. 2, the electronic device 200 may include: a processor 110, an external memory interface 120, an internal memory 121, a Universal Serial Bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor 180, a button 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a Subscriber Identity Module (SIM) card interface 195, and the like. It is to be understood that the structure illustrated in this embodiment does not constitute a specific limitation on the electronic device 200. In other embodiments of the present application, the electronic device 200 may include more or fewer components than shown, combine some components, split some components, or arrange the components differently. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processor 110 may include one or more processing units, such as: the processor 110 may include an Application Processor (AP), a modem processor, a Graphics Processing Unit (GPU), an Image Signal Processor (ISP), a controller, a video codec, a Digital Signal Processor (DSP), a baseband processor, a Display Processing Unit (DPU), and/or a neural-Network Processing Unit (NPU), etc. The different processing units may be separate devices or may be integrated into one or more processors. In some embodiments, the electronic device 200 may also include one or more processors 110. The processor may be, among other things, a neural center and a command center of the electronic device 200. The processor can generate an operation control signal according to the instruction operation code and the timing signal to complete the control of instruction fetching and instruction execution. A memory may also be provided in processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that have been used or recycled by the processor 110. If the processor 110 needs to reuse the instruction or data, it can be called directly from the memory. This avoids repeated accesses, reduces the latency of the processor 110, and thus increases the efficiency of the electronic device 200.
In some embodiments, processor 110 may include one or more interfaces. The interface may include an integrated circuit (I2C) interface, an integrated circuit built-in audio (I2S) interface, a Pulse Code Modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a Mobile Industry Processor Interface (MIPI), a general-purpose input/output (GPIO) interface, a Subscriber Identity Module (SIM) interface, and/or a USB interface, etc. The USB interface 130 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type C interface, or the like. The USB interface 130 may be used to connect a charger to charge the electronic device 200, and may also be used to transmit data between the electronic device 200 and a peripheral device. And the earphone can also be used for connecting an earphone and playing audio through the earphone.
It should be understood that the interface connection relationship between the modules illustrated in the embodiment of the present application is illustrative and is not limited to the structure of the electronic device 200. In other embodiments of the present application, the electronic device 200 may also adopt different interface connection manners or a combination of multiple interface connection manners in the above embodiments.
The charging management module 140 is configured to receive charging input from a charger. The charger may be a wireless charger or a wired charger. In some wired charging embodiments, the charging management module 140 may receive charging input from a wired charger via the USB interface 130. In some wireless charging embodiments, the charging management module 140 may receive a wireless charging input through a wireless charging coil of the electronic device 200. The charging management module 140 may also supply power to the electronic device 200 through the power management module 141 while charging the battery 142.
The power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140, and supplies power to the processor 110, the internal memory 121, the display 194, the camera 193, the wireless communication module 160, and the like. The power management module 141 may also be used to monitor parameters such as battery capacity, battery cycle count, battery state of health (leakage, impedance), etc. In other embodiments, the power management module 141 may be disposed in the processor 110. In other embodiments, the power management module 141 and the charging management module 140 may be disposed in the same device.
The wireless communication function of the electronic device 200 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like. The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 200 may be used to cover a single or multiple communication bands. Different antennas can also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 150 may provide a solution including 2G/3G/4G/5G wireless communication applied on the electronic device 200. The mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier, etc. The mobile communication module 150 may receive the electromagnetic wave from the antenna 1, filter, amplify, etc. the received electromagnetic wave, and transmit the electromagnetic wave to the modem processor for demodulation. The mobile communication module 150 may also amplify the signal modulated by the modem processor, and convert the signal into electromagnetic wave through the antenna 1 to radiate the electromagnetic wave. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the processor 110. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the same device as at least some of the modules of the processor 110.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating a low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then passes the demodulated low frequency baseband signal to a baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs a sound signal through an audio device (not limited to the speaker 170A, the receiver 170B, etc.) or displays an image or video through the display screen 194. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be provided in the same device as the mobile communication module 150 or other functional modules, independent of the processor 110.
The wireless communication module 160 may provide a solution for wireless communication applied to the electronic device 200, including Wireless Local Area Networks (WLANs), bluetooth, global Navigation Satellite System (GNSS), frequency Modulation (FM), NFC, infrared (IR), and the like. The wireless communication module 160 may be one or more devices integrating at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, performs frequency modulation and filtering processing on electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, perform frequency modulation and amplification on the signal, and convert the signal into electromagnetic waves via the antenna 2 to radiate the electromagnetic waves.
In some embodiments, antenna 1 of the electronic device 200 is coupled to the mobile communication module 150, and antenna 2 is coupled to the wireless communication module 160, so that the electronic device 200 can communicate with networks and other devices through wireless communication technologies. The wireless communication technologies may include GSM, GPRS, CDMA, WCDMA, TD-SCDMA, LTE, GNSS, WLAN, NFC, FM, and/or IR technologies, among others. The GNSS may include a Global Positioning System (GPS), a global navigation satellite system (GLONASS), a BeiDou navigation satellite system (BDS), a quasi-zenith satellite system (QZSS), and/or a Satellite Based Augmentation System (SBAS).
The electronic device 200 may implement display functions through the GPU, the display screen 194, the application processor, and the like. The application processor may include an NPU and/or a DPU. The GPU is a microprocessor for image processing, and is connected to the display screen 194 and an application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute instructions to generate or change display information. The NPU is a neural-network (NN) computing processor that processes input information quickly by using a biological neural network structure, for example, by using a transfer mode between neurons of a human brain, and can also learn by itself continuously. Applications such as intelligent cognition of the electronic device 200 can be realized through the NPU, for example: image recognition, face recognition, speech recognition, text understanding, and the like. The DPU, also referred to as a display sub-system (DSS), is used to adjust the color of the display screen 194, which may be adjusted via a three-dimensional look-up table of colors (3D look-up table,3D LUT). The DPU may also perform scaling, noise reduction, contrast enhancement, backlight brightness management, hdr processing, display parameter Gamma adjustment, and the like on the picture.
The display screen 194 is used to display images, videos, and the like. The display screen 194 includes a display panel. The display panel may be a Liquid Crystal Display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini LED, a Micro LED, a Micro-OLED, or a quantum dot light-emitting diode (QLED). In some embodiments, the electronic device 200 may include 1 or N display screens 194, where N is a positive integer greater than 1.
The electronic device 200 may implement the camera function via an ISP, one or more cameras 193, a video codec, a GPU, one or more display screens 194, and an application processor, among others.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to extend the memory capability of the electronic device 200. The external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function. For example, data files such as music, photos, videos, and the like are saved in the external memory card.
Internal memory 121 may be used to store one or more computer programs, including instructions. The processor 110 may execute the above-mentioned instructions stored in the internal memory 121, thereby causing the electronic device 200 to execute various functional applications, data processing, and the like. The internal memory 121 may include a program storage area and a data storage area. Wherein, the storage program area can store an operating system; the storage area may also store one or more application programs (e.g., gallery, contacts, etc.), etc. The storage data area may store data (e.g., photos, contacts, etc.) created during use of the electronic device 200, and the like. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (UFS), and the like. In some embodiments, the processor 110 may cause the electronic device 200 to execute various functional applications and data processing by executing instructions stored in the internal memory 121 and/or instructions stored in a memory provided in the processor 110.
The internal memory 121 is used to store the LUT set, the preset mapping relationship, and the preset LUT color difference information in the embodiment of the present application. The LUT set includes all LUT elements that can be supported by the electronic device 200; an LUT element may also be referred to as an LUT template. The preset mapping relationship is used to indicate the correspondence between a plurality of picture attributes and a plurality of LUT elements and may be shown in Table 3 below. The preset LUT color difference information includes the color difference between every two LUT elements and may, for example, take the form of a color difference table.
The electronic device 200 may implement audio functions via the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the headphone interface 170D, and the application processor. Such as music playing, recording, etc. The audio module 170 is configured to convert digital audio information into an analog audio signal for output, and also configured to convert an analog audio input into a digital audio signal. The audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be disposed in the processor 110, or some functional modules of the audio module 170 may be disposed in the processor 110. The speaker 170A, also called a "horn", is used to convert the audio electrical signal into an acoustic signal. The electronic apparatus 200 can listen to music through the speaker 170A or listen to a handsfree call. The receiver 170B, also called "earpiece", is used to convert the electrical audio signal into an acoustic signal. When the electronic apparatus 200 receives a call or voice information, it is possible to receive a voice by placing the receiver 170B close to the human ear. The microphone 170C, also referred to as a "microphone" or "microphone", is used to convert sound signals into electrical signals. When making a call or transmitting voice information, the user can input a voice signal to the microphone 170C by speaking near the microphone 170C through the mouth. The electronic device 200 may be provided with at least one microphone 170C. In other embodiments, the electronic device 200 may be provided with two microphones 170C to achieve a noise reduction function in addition to collecting sound signals. In other embodiments, the electronic device 200 may further include three, four or more microphones 170C to collect sound signals, reduce noise, identify sound sources, perform directional recording, and the like. The earphone interface 170D is used to connect a wired earphone. The headset interface 170D may be the USB interface 130, may also be an open mobile equipment platform (OMTP) standard interface of 3.5mm, and may also be a CTIA (cellular telecommunications industry association) standard interface of the USA.
The sensors 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
The pressure sensor 180A is used for sensing a pressure signal, and converting the pressure signal into an electrical signal. In some embodiments, the pressure sensor 180A may be disposed on the display screen 194. The pressure sensor 180A can be of a wide variety, such as a resistive pressure sensor, an inductive pressure sensor, a capacitive pressure sensor, and the like. The capacitive pressure sensor may be a sensor comprising at least two parallel plates having an electrically conductive material. When a force acts on the pressure sensor 180A, the capacitance between the electrodes changes. The electronic device 200 determines the intensity of the pressure from the change in capacitance. When a touch operation is applied to the display screen 194, the electronic apparatus 200 detects the intensity of the touch operation according to the pressure sensor 180A. The electronic apparatus 200 may also calculate the touched position from the detection signal of the pressure sensor 180A. In some embodiments, the touch operations that are applied to the same touch position but different touch operation intensities may correspond to different operation instructions. For example: and when the touch operation with the touch operation intensity smaller than the first pressure threshold value acts on the short message application icon, executing an instruction for viewing the short message. And when the touch operation with the touch operation intensity larger than or equal to the first pressure threshold value acts on the short message application icon, executing an instruction of newly building the short message.
The gyro sensor 180B may be used to determine the motion attitude of the electronic device 200. In some embodiments, the angular velocity of the electronic device 200 about three axes (i.e., x, y, and z axes) may be determined by the gyroscope sensor 180B. The gyro sensor 180B may be used for photographing anti-shake. Illustratively, when the shutter is pressed, the gyro sensor 180B detects a shake angle of the electronic device 200, calculates a distance to be compensated for by the lens module according to the shake angle, and allows the lens to counteract the shake of the electronic device 200 through a reverse movement, thereby achieving anti-shake. The gyro sensor 180B may also be used for navigation, body sensing game scenes, and the like.
The acceleration sensor 180E may detect the magnitude of acceleration of the electronic device 200 in various directions (typically three axes). The magnitude and direction of gravity can be detected when the electronic device 200 is stationary. The acceleration sensor can also be used to recognize the posture of the electronic device and is applied to landscape/portrait switching, pedometers, and other applications.
A distance sensor 180F is used to measure distance. The electronic device 200 may measure distance by infrared or laser. In some embodiments, in a shooting scenario, the electronic device 200 may use the distance sensor 180F to measure distance to achieve fast focusing.
The keys 190 include a power-on key, a volume key, and the like. The keys 190 may be mechanical keys or touch keys. The electronic apparatus 200 may receive a key input, and generate a key signal input related to user setting and function control of the electronic apparatus 200.
The motor 191 may be a rotor motor and/or a linear motor, such as an X-axis linear motor or a Z-axis linear motor. At least one motor 191 may be included in the electronic device.
The SIM card interface 195 is used to connect a SIM card. The SIM card can be attached to or detached from the electronic device 200 by being inserted into or pulled out of the SIM card interface 195. The electronic device 200 may support 1 or N SIM card interfaces, where N is a positive integer greater than 1. The SIM card interface 195 may support a Nano SIM card, a Micro SIM card, a SIM card, and the like. Multiple cards can be inserted into the same SIM card interface 195 at the same time. The types of the multiple cards may be the same or different. The SIM card interface 195 may also be compatible with different types of SIM cards. The SIM card interface 195 may also be compatible with external memory cards. The electronic device 200 interacts with the network through the SIM card to implement functions such as calls and data communication. In some embodiments, the electronic device 200 employs an eSIM, that is, an embedded SIM card; the eSIM card may be embedded in the electronic device 200.
Fig. 3 is a schematic flow chart of a video editing method 300 according to an embodiment of the present application. The steps of the video editing method 300 may be performed by the electronic device 200, and the electronic device 200 has the functions of intelligently segmenting the video and recommending appropriate LUTs for each segment of sub-video. The method 300 includes the steps of:
s301, a Log file of the target video is obtained.
S302, segmenting the target video based on the Log file to obtain a plurality of sub-videos.
S303, determining picture attributes of the multiple sub-videos, where the picture attributes include scene attributes and/or color attributes.
S304, determining LUTs of the multiple sub-videos according to the picture attributes of the multiple sub-videos and a preset mapping relation, wherein the mapping relation is used for representing the corresponding relation between the multiple picture attributes and the multiple LUT elements.
S305, rendering the target video based on the LUTs of the plurality of sub videos.
The electronic device can edit the original Log file of the target video. The original Log file appears as a gray image but has a very wide color gamut, and different filters can be added on the basis of the Log file to achieve different effects, so using the original Log file as the source video for editing provides a larger optimization space.
According to the embodiment of the application, the target video can be intelligently segmented, a plurality of sub videos can be determined, and LUT elements with proper styles can be recommended for each segment of sub video, so that the optimization effect of the LUT filter on the video is better.
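To make the relationship between S301 to S305 concrete, the following Python sketch strings the steps together. It is only an orchestration outline: the per-step operations are passed in as callables because they stand for the segmentation, attribute analysis, and rendering described later in this embodiment, and all names and signatures here are assumptions made for illustration.

    from typing import Callable, Dict, List, Sequence, Tuple

    PictureAttr = Tuple[str, str]   # (scene attribute, light sensation level)

    def edit_video(
        log_frames: Sequence,                            # S301: decoded frames of the Log file
        segment: Callable[[Sequence], List[Sequence]],   # S302: Log file -> sub-videos
        attributes: Callable[[Sequence], PictureAttr],   # S303: sub-video -> picture attributes
        preset_mapping: Dict[PictureAttr, List[str]],    # S304: picture attributes -> candidate LUTs
        render: Callable[[Sequence, str], Sequence],     # S305: apply an LUT to a sub-video
    ) -> List[Sequence]:
        rendered = []
        for sub in segment(log_frames):
            scene, light_level = attributes(sub)
            lut = preset_mapping[(scene, light_level)][0]  # highest-priority candidate LUT
            rendered.append(render(sub, lut))
        return rendered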
In the embodiment of the present application, the electronic device 200 has a preset LUT set, and for each picture attribute, there is a corresponding candidate LUT set.
As an alternative embodiment, S302 includes: sampling the Log file according to at least one sampling period, and determining a plurality of sampling frames for each sampling period of the at least one sampling period, where the plurality of sampling frames represent a plurality of paragraphs of the target video; marking candidate segmentation symbols for the target video at each sampling period based on the plurality of sampling frames for that sampling period; and determining the segmentation symbols of the target video based on the candidate segmentation symbols of the target video at each sampling period, where the segmentation symbols divide the target video into the plurality of sub-videos.
The embodiment of the application can determine a plurality of sub-videos in a sampling mode. Fig. 4 is a schematic diagram of a partitioned video according to an embodiment of the present application.
Illustratively, the electronic device first samples a target video according to a first sampling period, and obtains a plurality of first sampling frames. The electronic device may then calculate a degree of disparity between each two adjacent sampled frames of the plurality of first sampled frames. If the calculated difference degree is greater than or equal to the first threshold, the electronic device considers two adjacent sampling frames as different paragraphs and marks a candidate segmentation symbol between the two adjacent sampling frames (as shown by candidate segmentation symbol 1 in fig. 4). If the calculated difference degree is smaller than the first threshold value, the electronic equipment considers that the two adjacent sampling frames are the same paragraph, and the candidate segmentation symbol is not marked between the two adjacent sampling frames.
For example, the first sampling period may be 1 s or 100 frames, and the first threshold may be 20%.
Illustratively, the electronic device may continue to mark candidate segmentation symbols for the target video at the second sampling period and the third sampling period (as shown by candidate segmentation symbol 2 and candidate segmentation symbol 3 in fig. 4). As can be seen from fig. 4, coincident candidate segmentation symbols may exist across sampling periods of different accuracies. According to the length of the target video, the electronic device may take the candidate segmentation symbols that coincide across the different accuracies as the segmentation symbols of the target video; the two sides of each segmentation symbol belong to different sub-paragraphs, which completes the division into a plurality of sub-paragraphs.
For example, the second sampling period may be 100 ms or 20 frames, and the third sampling period may be 10 ms or 10 frames.
It should be understood that, in the embodiment of the present application, different candidate segmentation symbols occurring within 1 ms are considered coincident candidate segmentation symbols; that is, if candidate segmentation symbols of different sampling periods occur within a time range of 1 ms, those candidate segmentation symbols may be regarded as coincident. As shown in fig. 4, four sub-videos are finally obtained through the division, namely sub-video 1, sub-video 2, sub-video 3, and sub-video 4 in the figure.
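A minimal sketch of this multi-period sampling and coincidence check is given below, assuming the Log file has been decoded into a list of greyscale numpy frames at a known frame rate. The mean-absolute-difference metric, the 30 fps default, and the choice to place each marker at the later sampled frame are assumptions made only for this illustration; the embodiment does not fix the difference metric.

    import numpy as np

    def degree_of_difference(frame_a: np.ndarray, frame_b: np.ndarray) -> float:
        # Mean absolute pixel difference, normalised to [0, 1] (illustrative metric).
        return float(np.mean(np.abs(frame_a.astype(float) - frame_b.astype(float)))) / 255.0

    def candidate_markers(frames, fps, period_s, threshold=0.2):
        # Timestamps (in seconds) between adjacent sampled frames whose degree of
        # difference reaches the first threshold (20% here).
        step = max(1, int(round(period_s * fps)))
        sampled = list(range(0, len(frames), step))
        markers = []
        for i, j in zip(sampled, sampled[1:]):
            if degree_of_difference(frames[i], frames[j]) >= threshold:
                markers.append(j / fps)    # mark the boundary at the later sampled frame
        return markers

    def segmentation_markers(frames, fps=30.0, periods_s=(1.0, 0.1, 0.01), tolerance=0.001):
        # Keep only the candidates on which every sampling period agrees within 1 ms.
        per_period = [candidate_markers(frames, fps, p) for p in periods_s]
        coarse, *finer = per_period
        return [t for t in coarse
                if all(any(abs(t - u) <= tolerance for u in cand) for cand in finer)]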
Fig. 5 is a schematic diagram of another divided video provided by an embodiment of the present application. Taking a first sampling period of 1 s, a second sampling period of 100 ms, and a third sampling period of 10 ms as an example, the electronic device may sample the video frames at the start time and the end time of each sampling period to obtain a head frame and a tail frame. Legend 1 in fig. 5 indicates the 1 s time periods corresponding to first sampling periods in which the degree of difference between the head frame and the tail frame exceeds 20%, that is, periods in which the electronic device considers that the video picture has changed within the 1 s sampling time. Similarly, legend 2 indicates the 100 ms time periods corresponding to second sampling periods in which the degree of difference between the head frame and the tail frame exceeds 20%, that is, periods in which the electronic device considers that the video picture has changed within the 100 ms sampling time. Legend 3 indicates the 10 ms time periods corresponding to third sampling periods in which the degree of difference between the head frame and the tail frame exceeds 20%, that is, periods in which the electronic device considers that the video picture has changed within the 10 ms sampling time. The time periods in which the sampling periods represented by the three legends coincide indicate that the change of the video picture can be recognized at all three sampling accuracies, so the electronic device can mark the final segmentation symbols in the coinciding time periods. As shown in fig. 5, four sub-videos are finally obtained through the division, namely sub-video 1, sub-video 2, sub-video 3, and sub-video 4 in the figure.
It should be understood that fig. 4 and fig. 5 illustrate sub-paragraph division with three sampling periods of different accuracies as an example, and the electronic device may further increase the sampling accuracy used to mark candidate segmentation symbols for the target video. At a lower-accuracy sampling period, the same shot scene may be divided into different sub-videos, which may ultimately lead to LUTs of different styles being recommended for the same scene and affect the uniformity of the overall style of the video; a higher-accuracy sampling period is therefore required so that the target video is divided into sub-videos that can be adapted to different LUTs. However, if only a higher sampling accuracy, that is, a shorter sampling period, is used, the target video may be divided into sub-videos that are too fragmented and too numerous. Therefore, dividing the plurality of sub-videos using sampling periods of different accuracies effectively avoids both inaccurate paragraph division and excessive division.
After obtaining a plurality of sub-videos based on the above description, the electronic device may identify picture attributes of each of the sub-videos, wherein the picture attributes include a scene attribute and a color attribute.
With respect to the scene attributes, the electronic device may rely on an underlying Computer Vision (CV) algorithm to identify the scene attributes of a single frame, covering hundreds of specific scenes such as people, cities, oceans, night scenes, food, vehicles, flowers, and blue sky. For a single-scene video, the CV algorithm samples and analyzes a few frames to confirm the scene attribute.
For the color attribute, the electronic device may likewise rely on the CV algorithm to analyze the color attributes of a single frame, covering dimensions such as brightness, exposure, light source intensity, and black-to-highlight ratio, and determine the light sensation level of the frame by calculation, where the light sensation levels include: high key, middle key, low key, bright, exposed, few light sources, balanced, insufficient, and the like. Table 2 shows the correspondence between color attributes and light sensation levels.
Table 2
Color attribute                       Light sensation level
Black > 5%, highlight < 10%           High key (biased bright)
Black < 5%, highlight > 20%           Middle key (exposed)
Black > 5%, highlight < 10%           Low key (few light sources)
Highlight > 20%                       High key (balanced)
Black > 5%, highlight < 10%           Middle key (insufficient)
Black > 10%, highlight > 20%          High key (exposed)
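As a rough illustration of how the black and highlight ratios in Table 2 might be derived from a frame, the sketch below counts near-black and near-white pixels on the luminance plane and applies threshold rules in the spirit of Table 2. The 8-bit cutoffs (16 and 235), the specific rule set, and the returned labels are assumptions for illustration only; the embodiment does not fix them.

    import numpy as np

    def light_sensation_level(luma: np.ndarray,
                              black_cutoff: int = 16,
                              highlight_cutoff: int = 235) -> str:
        # Classify one frame's light sensation level from its 8-bit luminance plane.
        # Cutoffs and rules are illustrative, loosely following Table 2.
        total = luma.size
        black_ratio = np.count_nonzero(luma <= black_cutoff) / total
        highlight_ratio = np.count_nonzero(luma >= highlight_cutoff) / total

        if highlight_ratio > 0.20 and black_ratio <= 0.05:
            return "high key"                    # bright, few deep shadows
        if black_ratio > 0.10 and highlight_ratio > 0.20:
            return "high key (exposed)"          # strong shadows and strong highlights
        if black_ratio > 0.05 and highlight_ratio < 0.10:
            return "low key (few light sources)"
        return "middle key"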
It should be understood that after the sub-videos are divided, the picture content of the frames within each sub-video can be considered similar. Taking a first sub-video of the plurality of sub-videos as an example, the electronic device may sample the first sub-video based on the CV algorithm; for example, for a first sub-video 5 s in length, 100 frames are uniformly sampled at a fixed period, and the scene attributes of these 100 frames are then analyzed. For example, if the analysis finds that 80 of the 100 frames have the scene attribute of city, 10 frames have the scene attribute of ocean, and 10 frames have the scene attribute of food, the electronic device may determine that the scene attribute of the first sub-video is city. The process of determining the picture attributes of the other sub-videos is similar to that of the first sub-video and is not described here again.
Similarly, taking 100 frames of pictures sampled in the first sub-video as an example, the electronic device may determine the color attributes of the 100 frames based on a CV algorithm, calculate a light sensation level corresponding to each frame of picture according to the color attributes, similarly count the frames at different light sensation levels, and determine the light sensation level with the largest frame number ratio as the light sensation level of the first sub-video. The determination process of the light sensation level of the other sub-videos is similar to that of the first sub-video, and is not repeated here.
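The per-sub-video decision described in the two paragraphs above is essentially a majority vote over the sampled frames. A minimal sketch, assuming a hypothetical per-frame classifier passed in as `classify` (it stands in for the CV algorithm and is not part of the patent):

```python
from collections import Counter

def dominant_attribute(frames, classify) -> str:
    """Classify each sampled frame and return the label with the largest
    frame-count ratio, as done for the scene attribute and the light
    sensation level of a sub-video."""
    return Counter(classify(f) for f in frames).most_common(1)[0][0]

# Example with 100 dummy "frames": 80 city, 10 ocean, 10 food.
frames = ["city"] * 80 + ["ocean"] * 10 + ["food"] * 10
scene_of = lambda f: f                       # stand-in for the CV classifier
print(dominant_attribute(frames, scene_of))  # -> "city"
```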
After determining the scene attributes and the light sensation levels, the electronic device may select LUTs corresponding to the scene attributes and the light sensation levels of the plurality of sub videos according to a preset mapping relationship.
It should be understood that, since the light sensation level is determined from the color attribute, and the picture attribute includes the scene attribute and the color attribute, the picture attribute can equivalently be described as including the scene attribute and the light sensation level. S304 may then specifically include: determining the LUTs of the sub-videos according to the scene attributes, the light sensation levels and the preset mapping relationship.
Illustratively, if the video shows an indoor character scene, an LUT with a portrait large-aperture blurring effect is suitable for highlighting character details. If the light in the video is prominent and bright, an LUT with high saturation, such as a "portrait close-up" LUT, is preferentially recommended; if the light in the video is dim and the picture has few light sources, a "seventies" LUT that renders a sense of that era is preferentially recommended.
Illustratively, if the video shows an outdoor natural scene in which the light is soft and the color is balanced, a "picturesque scenery" LUT, which suppresses highlights and retains more detail of the shooting subject, is recommended.
Illustratively, if the video shows a long shot of a city at night under neon lights, a "Cyberpunk" LUT with a strong modern feel is recommended. Table 3 shows the correspondence between some scene attributes, light sensation levels, and recommended LUTs.
Table 3
(Table 3 is reproduced as an image in the original publication; it lists the recommended LUT for each combination of scene attribute and light sensation level.)
It should be understood that for each picture attribute (hereinafter written in the form [scene attribute, light sensation level]), the electronic device has a preset set of candidate LUTs. The recommended LUTs in Table 3 above are LUT elements of the corresponding candidate LUT sets.
For example, for the picture attribute [portrait, high key (bright)], the candidate LUT set is (LUT 1, LUT 2, LUT 3), where LUT 1 is the LUT with the highest priority for this picture attribute in the set, that is, the LUT with the highest matching degree with the picture attribute [portrait, high key (bright)], and the electronic device preferentially recommends LUT 1 for the sub-video corresponding to this picture attribute.
For example, for the picture attribute [food, middle key (insufficient)], the candidate LUT set is (LUT 5, LUT 3, LUT 4), where LUT 5 is the LUT with the highest priority for this picture attribute, that is, the LUT with the highest matching degree with the picture attribute [food, middle key (insufficient)], and the electronic device preferentially recommends LUT 5 for the sub-video corresponding to this picture attribute.
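To make S304 concrete, here is a minimal sketch of the preset mapping from a picture attribute to its priority-ordered candidate LUT set. The two entries come from the examples above; everything else about the table (its size, its remaining entries) is preset by the device and not listed here, and the dictionary form itself is only an assumption for illustration.

```python
# Candidate LUT sets keyed by picture attribute [scene attribute, light sensation
# level]; elements are ordered by priority, so the first one is the recommended LUT.
CANDIDATE_LUTS = {
    ("portrait", "high key (bright)"):      ["LUT 1", "LUT 2", "LUT 3"],
    ("food", "middle key (insufficient)"):  ["LUT 5", "LUT 3", "LUT 4"],
}

def recommend_lut(scene: str, light_level: str) -> str:
    """Return the highest-priority LUT element for the given picture attribute."""
    return CANDIDATE_LUTS[(scene, light_level)][0]

print(recommend_lut("portrait", "high key (bright)"))      # -> LUT 1
print(recommend_lut("food", "middle key (insufficient)"))  # -> LUT 5
```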
As an alternative embodiment, after determining the LUTs of the plurality of sub-videos, the method 300 further comprises: determining a color difference result of the LUTs of two adjacent segments of sub-video according to the LUTs of the plurality of sub-videos; and if the color difference result is greater than or equal to a second threshold, adjusting the LUT of at least one of the two adjacent segments of sub-video. S305 then includes: editing the target video based on the adjusted LUT, wherein the adjusted LUT and the LUT before adjustment belong to the same candidate LUT set.
In the embodiment of the present application, after LUTs are recommended independently for the respective sub-videos, the overall style may turn out not to be uniform. For example, the first half of the video uses a high-saturation, high-contrast LUT (e.g., a city night scene), while the second half uses a softer, natural LUT (e.g., a close-up) or a rich black-and-white LUT, so that the overall style of the video looks odd. In view of this, the electronic device can adjust the LUT of at least one of two adjacent segments of sub-video according to preset LUT color difference information, so that the style of the finished video is more uniform and the visual effect is better. It should be understood that each LUT has its own color attributes, including at least one of brightness, exposure, light source intensity, black-to-highlight ratio, or contrast. What is compared here is the color difference between the two LUTs, not the color difference between the two segments of sub-video after the LUTs are applied, nor the color difference between the two segments of sub-video themselves.
Wherein the LUT color difference information comprises a color difference result between every two LUTs. Taking two LUT elements in the set of LUTs supported by the electronic device as an example, labeled LUT 1 and LUT 2: the brightness of LUT 1 is α₁, its exposure is β₁, its light source intensity is γ₁, its black-to-highlight ratio is δ₁, and its contrast is ε₁; the brightness of LUT 2 is α₂, its exposure is β₂, its light source intensity is γ₂, its black-to-highlight ratio is δ₂, and its contrast is ε₂. The color difference result Δ between LUT 1 and LUT 2 can be expressed by the following equation:

Δ = (α₁ − α₂)² + (β₁ − β₂)² + (γ₁ − γ₂)² + (δ₁ − δ₂)² + (ε₁ − ε₂)²
if the color difference result Δ of the LUTs recommended for the two adjacent segments of sub-video is greater than or equal to the second threshold, the electronic device considers that the styles of the two recommended LUTs are not suitable for splicing the two adjacent segments, and then the LUTs can be adjusted for at least one of the two adjacent segments of sub-video. Specifically, the electronic device may select one of the candidate LUTs of the sub video for which the LUT needs to be adjusted as the recommended LUT, and it should be understood that the adjusted LUT is the LUT with the highest priority for the picture attribute of the current sub video in the candidate LUT set of the sub video. The electronic device may preset a priority of the LUT element corresponding to each picture attribute, and recommend the LUT for each picture attribute in order of the priority.
After the LUT of at least one of the two adjacent segments of sub-video is adjusted, the electronic device still needs to check the consistency of the overall style of the finished video; if the overall style is not uniform, the LUTs are adjusted again until the overall style of the finished video reaches a balanced state.
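A minimal sketch of the color difference check and of the adjustment loop described above. The LUT names, the attribute values and the second threshold are invented for the example, and for simplicity the sketch always falls back on the next candidate of the later sub-video, whereas the embodiment only requires adjusting at least one of the two adjacent segments.

```python
from dataclasses import dataclass

@dataclass
class Lut:
    name: str
    brightness: float
    exposure: float
    light_intensity: float
    black_highlight_ratio: float
    contrast: float

def color_difference(a: Lut, b: Lut) -> float:
    """Δ between two LUTs: sum of squared differences of the five color attributes."""
    return ((a.brightness - b.brightness) ** 2 +
            (a.exposure - b.exposure) ** 2 +
            (a.light_intensity - b.light_intensity) ** 2 +
            (a.black_highlight_ratio - b.black_highlight_ratio) ** 2 +
            (a.contrast - b.contrast) ** 2)

def harmonize(recommended, candidates, second_threshold):
    """Walk the adjacent sub-video LUT pairs; if Δ is too large, fall back to the
    next candidate LUT of the later sub-video until the styles are balanced.
    `recommended` holds one Lut per sub-video; `candidates[i]` is the
    priority-ordered candidate list for sub-video i."""
    luts = list(recommended)
    for i in range(1, len(luts)):
        k = 0
        while (color_difference(luts[i - 1], luts[i]) >= second_threshold
               and k + 1 < len(candidates[i])):
            k += 1
            luts[i] = candidates[i][k]      # try the next-priority candidate
    return luts

# Example with made-up attribute values: a saturated night-scene LUT next to a
# soft natural LUT exceeds the threshold, so the second segment falls back to LUT 1.
lut2 = Lut("LUT 2", 0.9, 0.8, 0.9, 0.30, 0.9)
lut3 = Lut("LUT 3", 0.4, 0.5, 0.3, 0.10, 0.4)
lut1 = Lut("LUT 1", 0.8, 0.7, 0.8, 0.25, 0.8)
print([l.name for l in harmonize([lut2, lut3], [[lut2], [lut3, lut1]],
                                 second_threshold=0.5)])  # -> ['LUT 2', 'LUT 1']
```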
The video editing method according to the embodiment of the present application will be described with reference to fig. 6 and 7 using a specific interface example. Fig. 6 is a schematic interface diagram of video editing according to an embodiment of the present application. Fig. 6 includes 3 interfaces: interface a, interface b, and interface c. The user can click a gallery icon on interface a to enter the video editing interface shown as interface b; in interface b, the user can select one of a plurality of Log video files stored in the gallery as the target video for Log editing. Illustratively, the user selects video 3 as the target video, so the complete target video can be displayed and played in the video display area 601. After the user selects the "Log edit" option in the upper right corner of interface b, the electronic device may display a Log segment recommendation interface as shown in interface c.
Interface c includes a viewing area 602, a multi-segment sub-video display area 603, and an LUT element display area 604. Illustratively, the target video is 8 s in length, and the electronic device divides it into the 4 segments of sub-video shown in the multi-segment sub-video display area 603, each 2 s in length. The LUT element display area 604 displays LUT elements including "mist", "modern", "youth", "milky tea", etc. (the area can also be slid left and right to display more LUT elements, which are not all shown here).
After the electronic device receives a user click on the "intelligent recommendation" option, it may add the recommended LUT element to the current sub-video in interface c and display the LUT-processed video in the viewing area 602, so that the user can view the target video with the LUT applied. Illustratively, the electronic device may recommend a "youth" style LUT element for the 2nd segment of sub-video and display it in the viewing area 602.
When the user switches sub-videos by sliding left and right in the multi-segment sub-video display area 603, the LUT currently recommended in the LUT element display area 604 also changes. For example, when the user is on sub-video 2, the LUT currently recommended in the LUT element display area 604 is the "youth" style; when the user slides to sub-video 3, the currently recommended LUT may become the "modern" style.
Fig. 7 is a schematic flow chart of another video editing method 700 provided in an embodiment of the present application, where the method 700 includes:
s701, responding to a first operation that a user clicks a 'gallery' icon, and displaying a video editing interface by the electronic equipment.
In conjunction with fig. 6 described above, the electronic device may receive an instruction from a user to click on the "gallery" icon in interface a, and in response to the instruction, the electronic device may display a video editing interface as shown in interface b in fig. 6. The interface b has already been described above, and is not described here.
S702, responding to a second operation that the user clicks a 'Log edition' option in the video editing interface, and displaying a Log segmentation recommendation interface by the electronic equipment.
In conjunction with fig. 6, the electronic device may receive an instruction from the user to click on the "Log edit" option on interface b in fig. 6, and in response to the instruction, the electronic device may display a Log segment recommendation interface as shown in interface c in fig. 6. The interface c has already been described above, and is not described here.
And S703, dividing the target video into a plurality of sections of sub-videos according to the length of the target video.
In this step, the electronic device may divide the target video according to the above-described sampling periods with different accuracies, so as to obtain a plurality of sub-videos with different picture attributes.
And S704, returning the multiple sections of sub-videos to the Log section recommendation interface.
In conjunction with fig. 6, the electronic device may display the returned multiple segments of sub-videos in the multiple segments of sub-video display area 603 shown in the interface c in fig. 6, that is, the multiple segments of sub-videos that have been divided are displayed in the multiple segments of sub-video display area 603.
And S705, recommending LUTs for each segment of sub video.
In this step, the electronic device may select, according to the picture attribute of each segment of the sub-video and a preset LUT mapping relationship, an LUT element of a suitable style for each segment of the sub-video from the candidate LUT set of each segment of the sub-video as a recommended LUT.
It should be understood that, in this step, the LUT recommended by the electronic device for each segment of sub-video is an LUT that has passed the style consistency detection, so that the overall style of the finished video is uniform.
And S706, returning the multiple sections of sub-videos added with the LUT to a Log section recommendation interface.
In conjunction with fig. 6 described above, the electronic device may display the returned LUT-added multiple segments of the sub-video in a viewing area 602 as shown in interface c in fig. 6.
And S707, returning the finished video to the video editing interface through the Log segment recommendation interface.
In this step, if the user accepts the LUTs recommended for each segment of sub-video, the rendered finished video can be returned to the video editing interface shown as interface b in fig. 6 by clicking the "save" icon 65 in fig. 6.
After the electronic equipment recommends the LUT for each segment of the sub-video, whether the LUT needs to be adjusted for the sub-video is determined according to style consistency check, so that the uniformity of the video styles is facilitated.
Besides intelligently adjusting the LUT for the sub-video through the electronic equipment, the user can change the LUT by himself and adjust the LUT to the favorite style of the user, so that the personalized requirements of different users can be met.
The process of how to edit the target video after acquiring the Log file of the target video is described in detail above with reference to fig. 1 to 7. Before obtaining the Log file of the target video, the method 300 may include: the user may record the target video through the electronic device 200.
Fig. 8 is a schematic interface diagram for recording a video according to an embodiment of the present disclosure. FIG. 8 includes 3 interfaces, interface a, interface b, and interface c. Taking the above-mentioned electronic device 200 as a mobile phone as an example, in response to the user clicking the "camera" icon 60 of the interface a, the mobile phone displays a video recording interface as shown in the interface b. Interface b includes a recording area 801, a mode area 802, and a control area 803.
The user may slide left and right in the mode area 802 to select the recording mode, and in some embodiments, the function of recording the video by the mobile phone may be implemented by using the video recording mode of the mobile phone camera. In other embodiments, the function of recording videos by the mobile phone can be realized by adopting a professional mode of a mobile phone camera. In still other embodiments, the function of recording video by the mobile phone can be realized by adopting a movie mode of a camera of the mobile phone. Currently, the b interface displays a video recording interface in a professional mode.
The control area 803 includes a Log control 804 for enabling the Log function and an LUT control 805 for enabling the LUT function. Since a video image captured with the Log function preserves as much light and color information as possible during capture, which facilitates subsequent optimization of the video image, whereas a video image captured with the LUT function has already been color-processed and is rich in color, the Log control 804 and the LUT control 805 in interface b cannot be turned on at the same time. That is, in the professional mode of the mobile phone camera, the Log function and the LUT function cannot operate simultaneously. It should be noted that the Log control 804 shown in interface b in fig. 8 is in the on state, and the LUT control 805 is in the off state.
In response to the user clicking the "record" button 61, the mobile phone displays interface c and starts the Log function to record the target video in the professional mode.
Fig. 9 is a software structure diagram of an electronic device according to an embodiment of the present application. It will be appreciated that the hierarchical architecture divides the software into several layers, each layer having a clear role and division of labor. The layers communicate with each other through a software interface. In some embodiments, an Android system may include an application layer (APP), a framework layer (FWK), a Hardware Abstraction Layer (HAL), and a kernel layer (kernel). In some embodiments, the handset also includes hardware (e.g., a display screen).
Exemplary application layers as described above may include a User Interface (UI) layer and a logic layer. As shown in fig. 9, the UI layer includes a camera, a gallery, and other applications. Where the camera includes a Log control (e.g., log control 804 in the above embodiment) and a LUT control (e.g., LUT control 805 in the above embodiment). The logic layer comprises an LUT template module, an encoding module, an LUT control module, an AI segmentation module, an AI recommendation module, a playing module and the like.
The hardware abstraction layer is an interface layer located between the kernel layer and the hardware, and can be used for abstracting the hardware. Illustratively, as shown in FIG. 9, the hardware abstraction layer includes a camera interface.
The kernel layer provides underlying drivers for various hardware of the handset. Illustratively, as shown in FIG. 9, the core layer includes a camera driver module.
The framework layer provides an Application Programming Interface (API) and a programming service for an application program of the application layer. The framework layer includes some predefined functions. The framework layer provides programming services to application layer calls through the API interface. It should be noted that, in the embodiment of the present application, the programming service may be, for example, a camera service (camera service). In some embodiments, as shown in FIG. 9, the framework layer includes a camera services framework and a media framework. Wherein the media frame includes an encoder.
In one possible implementation, after the user captures the target video according to the operation shown in fig. 8, the Log file of the target video may be stored in the internal memory 121 of the electronic device 200, and may be displayed on the video editing interface (e.g., interface b in fig. 6 in the above embodiment) after the user clicks the "gallery" icon. In response to the user clicking the Log editing operation, the video editing operations performed by the layers of the layered architecture are as shown in fig. 10.
Fig. 10 is a schematic flow chart of still another video editing method 1000 provided by an embodiment of the present application. The method 1000 includes the steps of:
s1001, the LUT control module obtains a Log file of a target video and starts an LUT recommendation process.
S1002, the LUT control module sends the target video to the AI segmentation module. Accordingly, the AI segmentation module receives the target video.
S1003, the AI segmentation module identifies the content of the target video and divides the target video into a plurality of sub-videos based on segmentation algorithms with different precisions.
S1004, the AI segmentation module transmits the plurality of sub-videos to the LUT control module. Accordingly, the LUT control module receives the plurality of sub-videos.
S1005, the LUT control module sends the sub video to the AI recommendation module. Accordingly, the AI recommendation module receives the sub-video.
S1006, the LUT control module sends the sub-video to the encoding module. Accordingly, the encoding module receives the sub-video.
In this step, the encoding module may store the received sub-video.
S1007, the AI recommending module analyzes scene, light brightness, color and other information of the sub-video according to the CV algorithm, and recommends a candidate LUT set for the sub-video.
S1008, the AI recommendation module sends the candidate LUT set recommended for the sub-video to the LUT control module. Accordingly, the LUT control module receives a set of candidate LUTs corresponding to the sub-video.
It should be understood that in the above S1005, the LUT control module sequentially sends the plurality of sub-videos to the AI recommendation module, and the AI recommendation module performs S1007 and S1008 every time it receives one sub-video until a candidate LUT set is recommended for all sub-videos. Similarly, in S1006, the LUT control module sequentially sends the plurality of sub-videos to the encoding module, and the encoding module stores each sub-video until all sub-videos are stored.
It should be understood that, in this step, taking the first sub-video of the multiple sub-videos as an example, the AI recommendation module recommends a candidate LUT set for the first sub-video, and LUT elements in the candidate set have been sorted by priority, and the LUT element with the highest priority is the LUT element in the candidate LUT set for the first sub-video that is most suitable for the scene, light, color, and the like of the first sub-video. Illustratively, the candidate LUT set for the first sub-video is [ LUT 2, LUT 1, LUT 5], which indicates that the LUT element labeled 2 (i.e., LUT 2) is the LUT element most suitable for the first sub-video, with the priority of LUT 1 being the next to the priority of LUT 5. The form of the candidate LUT set of other sub-videos is similar to that of the first sub-video, and is not described here again.
S1009, the LUT control module determines a target LUT for each sub video among the candidate LUTs for each sub video.
It should be understood that in this step, the LUT control module first determines the LUT element with the highest priority in the candidate LUT set of each sub-video as the target LUT, and then integrates the LUT styles of all sub-videos to detect the consistency of the overall video style. And if the style conflict of the target LUT of the two adjacent segments of sub-videos is detected, adjusting the LUT of at least one segment of sub-video in the two adjacent segments of sub-videos.
Taking the first sub-video and the second sub-video of the multiple sub-videos as an example, assuming that they are two adjacent sub-videos, if the candidate LUT set of the first sub-video is [LUT 2, LUT 1, LUT 5], the LUT control module determines LUT 2 as the target LUT of the first sub-video; if the candidate LUT set of the second sub-video is [LUT 3, LUT 4, LUT 5], the LUT control module determines LUT 3 as the target LUT of the second sub-video. If the LUT control module finds through style consistency detection that the style of the target LUT of the first sub-video (i.e., LUT 2) conflicts with the style of the target LUT of the second sub-video (i.e., LUT 3), it considers replacing the target LUT of the first sub-video and/or the second sub-video. Illustratively, the target LUT of the first sub-video may be replaced by LUT 1, i.e., LUT 1 is finally used for the first sub-video. The LUT control module then performs style consistency detection again, and if the overall style is detected to be uniform, the LUTs of the sub-videos are not adjusted further.
It should be understood that the above description takes the adjacent first sub-video and second sub-video as an example; if a style conflict is detected in the LUTs of other adjacent sub-videos, those LUTs also need to be adjusted to determine the final target LUT of each sub-video. The adjustment process is similar to that of the first sub-video and is not repeated here.
S1010, the LUT control module sends a request message to the LUT template module, the request message requesting a target LUT for each sub video. Accordingly, the LUT template module receives the request message.
S1011, the LUT template module sends the target LUTs for all sub-videos to the encoding module. Accordingly, the encoding module receives the target LUTs for all sub-videos.
S1012, the encoding module encodes each sub-video according to the target LUT of each sub-video, and combines the encoded sub-videos to generate a complete video, that is, an encoded target video.
Since the encoding module receives and stores the plurality of sub-videos in S1006, in this step, the encoding module may encode each sub-video based on the target LUT of each sub-video, and obtain a complete encoded video.
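S1012 amounts to applying each sub-video's target LUT frame by frame and splicing the results back together. A minimal sketch of that step; `apply_lut_to_frame` is a placeholder for the actual 3D-LUT color mapping and is not defined in the patent.

```python
def encode_with_luts(sub_videos, target_luts, apply_lut_to_frame):
    """Apply each sub-video's target LUT to its frames and splice the
    processed sub-videos back into one complete video (a list of frames)."""
    complete = []
    for frames, lut in zip(sub_videos, target_luts):
        complete.extend(apply_lut_to_frame(f, lut) for f in frames)
    return complete

# Example with dummy frames and a dummy "LUT" that just tags each frame.
subs = [["f1", "f2"], ["f3", "f4"]]
luts = ["LUT 2", "LUT 3"]
tag = lambda frame, lut: f"{frame}@{lut}"
print(encode_with_luts(subs, luts, tag))
# -> ['f1@LUT 2', 'f2@LUT 2', 'f3@LUT 3', 'f4@LUT 3']
```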
S1013, the encoding module sends the complete video to the playing module, and accordingly, the playing module receives the complete video.
S1014, the playing module plays the complete video. In the case where the electronic device as shown in fig. 2 or fig. 8 includes a display screen, the play module receives an instruction to play a video to display the complete video on the display screen.
It should be understood that the sequence numbers of the above-mentioned processes do not mean the execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
The video editing method according to the embodiment of the present application is described in detail above with reference to fig. 1 to 10, and the video editing apparatus according to the embodiment of the present application will be described in detail below with reference to fig. 11 to 13.
Fig. 11 shows a schematic block diagram of a video editing apparatus 1100 provided in an embodiment of the present application, where the apparatus 1100 includes an obtaining module 1110 and a processing module 1120.
The obtaining module 1110 is configured to: and acquiring a Log file of the target video. The processing module 1120 is configured to: segmenting the target video based on the Log file to obtain a plurality of sub-videos; determining picture attributes of the plurality of sub-videos, wherein the picture attributes comprise scene attributes and/or color attributes; determining LUTs of the multiple sub-videos according to the picture attributes of the multiple sub-videos and a preset mapping relation, wherein the mapping relation is used for representing the corresponding relation between the multiple picture attributes and the multiple LUT elements; and editing the target video based on the LUTs of the plurality of sub videos.
Fig. 12 is a schematic block diagram illustrating another video editing apparatus 1200 according to an embodiment of the present application. Device 1200 includes an obtaining module 1110 and a processing module 1120, and compared to device 1000, processing module 1120 of device 1200 may specifically include an AI segmentation module 21, an AI recommendation module 22, a LUT control module 23, and an encoding module 24. Optionally, the processing module 1120 further comprises a LUT template module 25 and a play module 26.
Optionally, the AI segmentation module 21 is configured to: sampling the Log file according to at least one sampling period, and determining a plurality of sampling frames of each sampling period in the at least one sampling period, wherein the sampling frames are image frames obtained by sampling the target video according to the sampling period; marking candidate segmentation symbols for the target video at each sampling period based on the plurality of sampling frames at that sampling period; and determining the segmentation symbol of the target video based on the candidate segmentation symbols of the target video at each sampling period, wherein the segmentation symbol divides the target video into a plurality of sub-videos.
Optionally, the AI segmentation module 21 is configured to: sampling the Log file according to a first sampling period in at least one sampling period, and determining a plurality of first sampling frames; calculating a difference degree between two adjacent first sampling frames in the plurality of first sampling frames; and marking the candidate segmented symbol between two adjacent first sample frames with the difference degree greater than or equal to a first threshold value.
Optionally, the AI segmentation module 21 is configured to: sampling the Log file according to a first sampling period in at least one sampling period, and determining a plurality of first frames and a plurality of last frames; calculating the difference degree between two adjacent first frames and two adjacent last frames in the plurality of first frames and the plurality of last frames; and marking candidate segmentation symbols for a time period in which the difference degree between the two adjacent first and last frames is greater than or equal to a first threshold.
Optionally, the AI segmentation module 21 is configured to: and determining a plurality of candidate segmentation symbols according to each sampling period in at least one sampling period, and determining coincident candidate segmentation symbols in the candidate segmentation symbols as the segmentation symbols of the target video.
Optionally, the at least one sampling period comprises three sampling periods, namely a first sampling period 1s, a second sampling period 100ms and a third sampling period 10ms.
Optionally, the AI recommendation module 22 is configured to: determining the light sensation levels of the sub-videos according to the color attributes of the sub-videos; and selecting an LUT corresponding to the scene attributes and the light sensation levels of the plurality of sub-videos from the mapping relation according to the scene attributes and/or the light sensation levels of the plurality of sub-videos.
Optionally, the LUT control module 23 is configured to: determining a color difference result of the LUTs of two adjacent segments of sub-video according to the LUTs of the sub-videos; and if the color difference result is greater than or equal to a second threshold, adjusting the LUT of at least one of the two adjacent segments of sub-video. The encoding module 24 is configured to: editing the target video based on the adjusted LUT, wherein the adjusted LUT and the LUT before adjustment belong to the same candidate LUT set.
Optionally, the LUT control module 23 is configured to: and determining the color difference result of the LUTs of the two adjacent segments of the sub-videos according to the brightness, the exposure, the light source intensity, the black-highlight proportion and the contrast of the LUTs of the two adjacent segments of the sub-videos.
Optionally, the encoding module 24 is configured to: coding the sub-videos according to LUTs of the sub-videos to obtain a plurality of coded sub-videos; and combining the plurality of encoded sub-videos to obtain an encoded target video.
Optionally, the playing module 26 is configured to: and playing the coded target video.
In an alternative example, it can be understood by those skilled in the art that the apparatus 1100 or the apparatus 1200 may be embodied as the electronic device in the above embodiments, or the functions of the electronic device in the above embodiments may be integrated in the apparatus 1100 or the apparatus 1200. The above functions may be implemented by hardware, or may be implemented by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the functions described above. The apparatus 1100 or the apparatus 1200 may be used to execute various processes and/or steps corresponding to the electronic device in the above method embodiments.
It should be appreciated that the apparatus 1100 or apparatus 1200 herein is embodied in the form of functional modules. The term module herein may refer to an Application Specific Integrated Circuit (ASIC), an electronic circuit, a processor (e.g., a shared, dedicated, or group processor) and memory that execute one or more software or firmware programs, a combinational logic circuit, and/or other suitable components that support the described functionality. In an embodiment of the present application, the apparatus 1100 in fig. 11 or the apparatus 1200 in fig. 12 may also be a chip or a chip system, for example, a system on chip (SoC).
Fig. 13 shows a schematic block diagram of a further video editing apparatus 1300 according to an embodiment of the present application. The apparatus 1300 includes a processor 1310, a transceiver 1320, and a memory 1330. Wherein the processor 1310, the transceiver 1320, and the memory 1330 are in communication with each other through the interconnection, the memory 1330 is configured to store instructions, and the processor 1310 is configured to execute the instructions stored in the memory 1330 to control the transceiver 1320 to transmit and/or receive signals.
It should be understood that the apparatus 1300 may be embodied as an electronic device in the foregoing embodiments, or functions of the electronic device in the foregoing embodiments may be integrated in the apparatus 1300, and the apparatus 1300 may be configured to perform each step and/or flow corresponding to the electronic device in the foregoing method embodiments. Alternatively, the memory 1330 may include a read-only memory and a random access memory, and provide instructions and data to the processor. A portion of the memory may also include non-volatile random access memory. For example, the memory may also store device type information. The processor 1310 may be configured to execute the instructions stored in the memory, and when the processor executes the instructions, the processor may perform the steps and/or processes corresponding to the electronic device in the above method embodiments.
It should be understood that, in the embodiment of the present application, the processor 1310 may be a Central Processing Unit (CPU), and the processor may also be another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, and so on. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
In implementation, the steps of the above method may be performed by an integrated logic circuit of hardware in the processor or by instructions in the form of software. The steps of the method disclosed in connection with the embodiments of the present application may be directly implemented by a hardware processor, or may be implemented by a combination of hardware and software modules in the processor. The software module may be located in a storage medium well known in the art, such as a RAM, a flash memory, a ROM, a PROM or EPROM, or a register. The storage medium is located in the memory, and the processor executes the instructions in the memory and completes the steps of the above method in combination with its hardware. To avoid repetition, details are not described here again.
Those of ordinary skill in the art will appreciate that the various illustrative modules and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the technical solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the system, the apparatus and the module described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is merely a logical division, and in actual implementation, there may be other divisions, for example, multiple modules or components may be combined or integrated into another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or modules, and may be in an electrical, mechanical or other form.
The modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical modules, may be located in one place, or may be distributed on a plurality of network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, functional modules in the embodiments of the present application may be integrated into one processing module, or each of the modules may exist alone physically, or two or more modules are integrated into one module.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a portable hard disk, a read-only memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or other various media capable of storing program codes.
The above description is only for the specific implementation of the present application, but the scope of the embodiments of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the embodiments of the present application, and all the changes or substitutions should be covered by the scope of the embodiments of the present application. Therefore, the protection scope of the embodiments of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A video editing method, comprising:
acquiring a Log file of a target video;
respectively sampling the Log file according to a first sampling period, a second sampling period and a third sampling period in at least one sampling period, and determining a plurality of first frames and a plurality of last frames in each sampling period of the at least one sampling period, wherein the sampling frames are image frames obtained by sampling the target video according to the sampling periods;
respectively calculating the difference between two adjacent first frames and two adjacent last frames in a plurality of first frames and a plurality of last frames in each sampling period;
marking candidate segmentation symbols in a time period of which the difference degree between the two adjacent first frames and the two adjacent last frames under each sampling period is greater than or equal to a first threshold value;
determining a plurality of candidate segmentation symbols according to each sampling period in the at least one sampling period, determining coincident candidate segmentation symbols in the plurality of candidate segmentation symbols as segmentation symbols of the target video, wherein the segmentation symbols divide the target video into the plurality of sub-videos;
determining picture attributes of the plurality of sub-videos, wherein the picture attributes comprise scene attributes and/or color attributes;
determining a color lookup table (LUT) of the plurality of sub-videos according to the picture attributes of the plurality of sub-videos and a preset mapping relation, wherein the mapping relation is used for representing the corresponding relation between the plurality of picture attributes and a plurality of LUT elements;
determining a color difference result of the LUTs of the two adjacent segments of the sub-videos according to the LUTs of the sub-videos;
if the color difference result is greater than or equal to a second threshold, adjusting the LUT of at least one of the two adjacent segments of sub-video;
and rendering the target video based on the adjusted LUT, wherein the adjusted LUT and the LUT before adjustment belong to the same candidate LUT set.
2. The method of claim 1, wherein the first sampling period is 1s, the second sampling period is 100ms, and the third sampling period is 10ms.
3. The method according to claim 1 or 2, characterized in that the method further comprises:
determining the light sensation levels of the sub-videos according to the color attributes of the sub-videos;
determining the LUT of the plurality of sub-videos according to the picture attributes of the plurality of sub-videos and a preset mapping relation comprises:
and selecting the LUT corresponding to the scene attributes and/or light sensation levels of the plurality of sub-videos from the mapping relation according to the scene attributes and/or light sensation levels of the plurality of sub-videos.
4. The method of claim 1, wherein the determining the color difference result of the LUT for two adjacent segments of the sub-video from the LUTs for the plurality of sub-videos comprises:
and determining a color difference result of the LUTs of the two adjacent segments of the sub-videos according to at least one parameter of the brightness, the exposure, the light source intensity, the black-highlight proportion and the contrast of the LUTs of the two adjacent segments of the sub-videos.
5. A video editing apparatus, comprising:
the acquisition module is used for acquiring a Log file of a target video;
the processing module is used for segmenting the target video based on the Log file to obtain a plurality of sub videos; determining picture attributes of the plurality of sub-videos, wherein the picture attributes comprise scene attributes and/or color attributes; determining a color lookup table (LUT) of the plurality of sub-videos according to picture attributes of the plurality of sub-videos and a preset mapping relation, wherein the mapping relation is used for representing a corresponding relation between the plurality of picture attributes and a plurality of LUT elements; and rendering the target video based on the LUTs of the plurality of sub-videos;
the processing module comprises an artificial intelligence AI segmentation module, and is specifically used for:
respectively sampling the Log file according to a first sampling period, a second sampling period and a third sampling period in at least one sampling period, and determining a plurality of first frames and a plurality of last frames in each sampling period of the at least one sampling period, wherein the sampling frames are image frames obtained by sampling the target video according to the sampling periods;
respectively calculating the difference between two adjacent first frames and two adjacent last frames in a plurality of first frames and a plurality of last frames in each sampling period;
marking candidate segmentation symbols in a time period of which the difference degree between the two adjacent first frames and the two adjacent last frames under each sampling period is greater than or equal to a first threshold value;
determining a plurality of candidate segmentation symbols according to each sampling period in the at least one sampling period, determining coincident candidate segmentation symbols in the plurality of candidate segmentation symbols as segmentation symbols of the target video, wherein the segmentation symbols divide the target video into a plurality of sub-videos;
the processing module also comprises an LUT control module and an encoding module;
the LUT control module is to: determining a color difference result of the LUTs of the two adjacent segments of the sub-videos according to the LUTs of the sub-videos; and if the color difference result is greater than or equal to a second threshold, adjusting the LUT of at least one of the two adjacent segments of sub-video;
the encoding module is configured to: and rendering the target video based on the adjusted LUT, wherein the adjusted LUT and the LUT before adjustment belong to the same candidate LUT set.
6. The apparatus of claim 5, wherein the first sampling period is 1s, the second sampling period is 100ms, and the third sampling period is 10ms.
7. The apparatus according to claim 5 or 6, wherein the processing module further comprises an AI recommendation module, specifically configured to:
determining the light sensation levels of the sub-videos according to the color attributes of the sub-videos;
and selecting an LUT corresponding to the scene attributes and/or the light sensation levels of the plurality of sub-videos from the mapping relation according to the scene attributes and/or the light sensation levels of the plurality of sub-videos.
8. The apparatus of claim 5, wherein the LUT control module is specifically configured to:
and determining the color difference result of the LUTs of the two adjacent segments of the sub-videos according to the brightness, the exposure, the light source intensity, the black-highlight proportion and the contrast of the LUTs of the two adjacent segments of the sub-videos.
9. A video editing apparatus, comprising: a processor coupled with a memory for storing a computer program that, when invoked by the processor, causes the apparatus to perform the method of any of claims 1-4.
10. A computer-readable storage medium for storing a computer program comprising instructions for implementing the method of any one of claims 1-4.
CN202110926932.9A 2021-08-12 2021-08-12 Video editing method and video editing device Active CN113810764B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110926932.9A CN113810764B (en) 2021-08-12 2021-08-12 Video editing method and video editing device
PCT/CN2022/092061 WO2023015997A1 (en) 2021-08-12 2022-05-10 Video editing method and video editing apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110926932.9A CN113810764B (en) 2021-08-12 2021-08-12 Video editing method and video editing device

Publications (2)

Publication Number Publication Date
CN113810764A CN113810764A (en) 2021-12-17
CN113810764B true CN113810764B (en) 2022-12-06

Family

ID=78893567

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110926932.9A Active CN113810764B (en) 2021-08-12 2021-08-12 Video editing method and video editing device

Country Status (2)

Country Link
CN (1) CN113810764B (en)
WO (1) WO2023015997A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113810764B (en) * 2021-08-12 2022-12-06 荣耀终端有限公司 Video editing method and video editing device
CN114449354B (en) * 2022-02-07 2023-12-08 上海幻电信息科技有限公司 Video editing method and system
CN116389831B (en) * 2023-06-06 2023-08-15 湖南马栏山视频先进技术研究院有限公司 Yun Yuansheng-based offline rendering system and method

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109525901A (en) * 2018-11-27 2019-03-26 Oppo广东移动通信有限公司 Method for processing video frequency, device, electronic equipment and computer-readable medium
CN110188625A (en) * 2019-05-13 2019-08-30 浙江大学 A kind of video fine structure method based on multi-feature fusion
CN111416950A (en) * 2020-03-26 2020-07-14 腾讯科技(深圳)有限公司 Video processing method and device, storage medium and electronic equipment
CN111444819A (en) * 2020-03-24 2020-07-24 北京百度网讯科技有限公司 Cutting frame determining method, network training method, device, equipment and storage medium
CN112954450A (en) * 2021-02-02 2021-06-11 北京字跳网络技术有限公司 Video processing method and device, electronic equipment and storage medium

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170070740A1 (en) * 2015-09-03 2017-03-09 Intel Corporation Encoding techniques for display of text and other high-frequency content
CN107358141B (en) * 2016-05-10 2020-10-23 阿里巴巴集团控股有限公司 Data identification method and device
CN108664931B (en) * 2018-05-11 2022-03-01 中国科学技术大学 Multi-stage video motion detection method
CN110602552B (en) * 2019-09-16 2022-02-22 广州酷狗计算机科技有限公司 Video synthesis method, device, terminal and computer readable storage medium
US11803947B2 (en) * 2020-03-18 2023-10-31 Intel Corporation Brightness and contrast enhancement for video
CN111510698A (en) * 2020-04-23 2020-08-07 惠州Tcl移动通信有限公司 Image processing method, device, storage medium and mobile terminal
CN112399269B (en) * 2020-11-12 2023-06-20 广东小天才科技有限公司 Video segmentation method, device, equipment and storage medium
CN113810764B (en) * 2021-08-12 2022-12-06 荣耀终端有限公司 Video editing method and video editing device


Also Published As

Publication number Publication date
CN113810764A (en) 2021-12-17
WO2023015997A1 (en) 2023-02-16

Similar Documents

Publication Publication Date Title
CN111179282B (en) Image processing method, image processing device, storage medium and electronic apparatus
CN112532869B (en) Image display method in shooting scene and electronic equipment
CN113810764B (en) Video editing method and video editing device
US9536479B2 (en) Image display device and method
CN109793498B (en) Skin detection method and electronic equipment
CN112532892B (en) Image processing method and electronic device
CN113810601B (en) Terminal image processing method and device and terminal equipment
CN112530382B (en) Method and device for adjusting picture color of electronic equipment
CN111552451B (en) Display control method and device, computer readable medium and terminal equipment
CN110706310A (en) Image-text fusion method and device and electronic equipment
CN113542580B (en) Method and device for removing light spots of glasses and electronic equipment
CN113810603B (en) Point light source image detection method and electronic equipment
CN113973173A (en) Image synthesis method and electronic device
CN113747060B (en) Image processing method, device and storage medium
CN112700377A (en) Image floodlight processing method and device and storage medium
CN111147667A (en) Screen-off control method and electronic equipment
CN114880251A (en) Access method and access device of storage unit and terminal equipment
CN115150542A (en) Video anti-shake method and related equipment
CN112099741A (en) Display screen position identification method, electronic device and computer readable storage medium
CN114003319A (en) Screen-off display method and electronic equipment
CN115641867A (en) Voice processing method and terminal equipment
CN115412678A (en) Exposure processing method and device and electronic equipment
CN115706869A (en) Terminal image processing method and device and terminal equipment
CN114445522A (en) Brush effect graph generation method, image editing method, device and storage medium
CN115619628B (en) Image processing method and terminal equipment

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant