CN113965694A - Video recording method, electronic device and computer readable storage medium - Google Patents


Info

Publication number
CN113965694A
CN113965694A
Authority
CN
China
Prior art keywords: lut, preview image, recommendation, interface, user
Prior art date
Legal status (assumption, not a legal conclusion)
Granted
Application number
CN202110927047.2A
Other languages
Chinese (zh)
Other versions
CN113965694B (en)
Inventor
任泽强
Current Assignee
Honor Device Co Ltd
Original Assignee
Honor Device Co Ltd
Priority date
Filing date
Publication date
Application filed by Honor Device Co Ltd
Priority to CN202110927047.2A
Publication of CN113965694A
Application granted
Publication of CN113965694B
Legal status: Active
Anticipated expiration

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/63 Control of cameras or camera modules by using electronic viewfinders
    • H04N23/631 Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H04N23/632 Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters
    • H04N23/62 Control of parameters via user interfaces
    • H04N5/00 Details of television systems
    • H04N5/76 Television signal recording
    • H04N5/91 Television signal processing therefor

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Studio Devices (AREA)

Abstract

The application discloses a video recording method, an electronic device, and a computer-readable storage medium, relates to the technical field of image processing, and aims to let a user conveniently record video with a suitable LUT filter. The specific scheme is as follows: when the artificial intelligence (AI) recognition function is turned on, a first preview image and a color look-up table (LUT) recommendation control are displayed on the camera preview interface in the movie mode. The LUT recommendation control is used to prompt the user to use a target recommendation LUT; the first preview image is obtained by processing the original preview image with the target recommendation LUT; the target recommendation LUT is obtained by matching against the scene parameters of the original preview image; and the scene parameters of the original preview image are obtained by performing AI recognition on the original preview image.

Description

Video recording method, electronic device and computer readable storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a video recording method, an electronic device, and a computer-readable storage medium.
Background
At present, more and more people use electronic devices to record videos, capturing the moments of daily life. However, the camera video recording function of existing electronic devices is limited: in the video recording mode of the camera, a user can find the most appropriate color look-up table (LUT) filter only by manually trying different LUT filters, which cannot meet users' need to record conveniently with a suitable LUT filter.
Disclosure of Invention
The application provides a video recording method, an electronic device, and a computer-readable storage medium, and aims to meet users' need to record conveniently with a suitable LUT filter.
In order to achieve the above object, the present application provides the following technical solutions:
In a first aspect, the present application provides a video recording method, which displays a camera preview interface in a movie mode in response to a user operation of entering the movie mode. When the artificial intelligence (AI) recognition function is turned on, a first preview image and a color look-up table (LUT) recommendation control are displayed on the camera preview interface in the movie mode. The LUT recommendation control is used to prompt the user to use the target recommendation LUT. The first preview image is obtained by processing the original preview image with the target recommendation LUT, and the target recommendation LUT is obtained by matching against the scene parameters of the original preview image. The scene parameters of the original preview image can be obtained by performing AI recognition on the original preview image.
In the embodiment of the application, the first preview image and the LUT recommendation control are displayed on the camera preview interface in the movie mode, so that a target recommendation LUT matched to the scene of the current original preview image is recommended to the user. The user can then choose to use the target recommendation LUT during video recording instead of having to select an LUT manually, which improves the user's experience of shooting-effect processing.
In one possible implementation manner, after the first preview image and the LUT recommendation control are displayed on the camera preview interface in the movie mode, the LUT recommendation control may be hidden and the second preview image may be displayed on the camera preview interface in response to a cancel operation of the LUT recommendation control by the user. And the preview effect of the second preview image is different from the preview effect of the first preview image.
When the user cancels the LUT recommendation control, it indicates that the user does not want to use the matched target recommendation LUT. The LUT recommendation control is therefore hidden and the target recommendation LUT is no longer used to process the preview image; that is, a second preview image whose preview effect differs from that of the first preview image is displayed, improving the user experience.
In another possible implementation manner, after the first preview image and the LUT recommendation control are displayed on the camera preview interface in the movie mode, the LUT recommendation control may be hidden on the camera preview interface in response to a user starting a video recording operation, and when the user finishes recording, the LUT recommendation control is displayed on the camera preview interface in response to the user finishing the video recording operation.
While the user records video with the target recommendation LUT, the LUT recommendation control is hidden so that it does not block the user's recording interface; when the recording finishes, the LUT recommendation control is displayed again.
In another possible implementation manner, after displaying the first preview image and the LUT recommendation control on the camera preview interface in the movie mode, the method further includes: in response to a user selection of the LUT template, the LUT recommendation control is hidden on the camera preview interface and a third preview image is displayed. Wherein the third preview image is processed from the original preview image using the user selected LUT template.
Since the user wants to use the LUT template he or she selected, rather than the target recommendation LUT, the LUT recommendation control is hidden on the camera preview interface and the LUT is no longer recommended to the user.
In another possible implementation, in response to an operation of selecting the LUT template by the user, hiding the LUT recommendation control and displaying a third preview image on the camera preview interface may be: in response to a user operation on the LUT control, hiding the LUT recommendation control and displaying an LUT template field on the camera preview interface, wherein a plurality of LUT templates are displayed on the LUT template field. And displaying a third preview image on the camera preview interface in response to an operation of a user selecting the LUT template through the LUT template column.
In another possible implementation manner, after displaying the first preview image and the LUT recommendation control on the camera preview interface in the movie mode, the method further includes: in response to the user's operation to exit the movie mode, the LUT recommendation control is hidden on the camera preview interface and a second preview image is displayed. And the preview effect of the second preview image is different from the preview effect of the first preview image.
In another possible implementation manner, after displaying the first preview image and the LUT recommendation control on the camera preview interface in the movie mode, the method further includes: and hiding the LUT recommendation control on the camera preview interface and displaying a fourth preview image in response to the operation of closing the AI recognition function by the user. Wherein the fourth preview image is obtained by processing the original preview image using the most recently used LUT template.
In another possible implementation, the second preview image is obtained by processing the original preview image using a preset default LUT.
In another possible implementation manner, after the first preview image and the LUT recommendation control are displayed on the camera preview interface in the movie mode, the LUT recommendation control may be hidden and the LUT template field may be displayed on the camera preview interface in response to an operation of a user on the LUT control, and the first preview image and the LUT recommendation control may be restored and displayed on the camera preview interface in response to an operation of automatically retracting the LUT template field.
In another possible implementation manner, in response to an operation of entering the movie mode by a user, before displaying the camera preview interface in the movie mode, the method further includes: and responding to the operation of starting the first application by the user, and displaying a camera preview interface in the default working mode. Wherein, the camera preview interface includes: a working mode control; the working mode control at least comprises: and a default working mode and a movie mode, and a camera preview interface in the movie mode is displayed in response to the operation that the user enters the movie mode through the working mode control.
In another possible implementation manner, when the artificial intelligence AI recognition function is turned on, displaying the first preview image and the color look-up table LUT recommendation control on the camera preview interface in the movie mode includes: when the AI recognition function is turned on, performing AI recognition on the original preview image to obtain the scene parameters of the original preview image, and then matching, according to a preset recommendation period, the scene parameters of the original preview image to obtain the target recommendation LUT. The original preview image is processed with the target recommendation LUT to obtain the first preview image, and the first preview image and the LUT recommendation control are displayed on the camera preview interface in the movie mode.
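The recognition-and-recommendation flow above can be sketched as a single cycle. All names below are illustrative placeholders, not from the patent; the real implementation runs inside the device's camera pipeline:

```python
def recommend_tick(frame, recognize_scene, match_lut, apply_lut):
    """One recommendation cycle, per the flow above: AI-recognize the frame's
    scene parameters, match a target recommendation LUT, and render the
    first preview image."""
    scene_params = recognize_scene(frame)         # AI recognition step
    target_lut = match_lut(scene_params)          # scene-parameter matching step
    first_preview = apply_lut(target_lut, frame)  # process the original preview
    return target_lut, first_preview
```

In practice such a cycle would be re-run once per preset recommendation period while the LUT recommendation control is shown, and paused while the control is hidden.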
In another possible implementation manner, the method further includes: and when AI identification is carried out on the original preview image, displaying the dynamic effect on a camera preview interface. Where the effect is to prompt the user that the scene of the original preview image is being identified.
In another possible implementation manner, before performing AI identification on the original preview image and obtaining the scene parameter of the original preview image, the method further includes: when the AI recognition function is turned on, a preview callback is registered with the hardware abstraction layer HAL. The AI identification is performed on the original preview image to obtain the scene parameters of the original preview image, which may be: and acquiring scene parameters of the original preview image by calling an interface on the HAL. And the scene parameters of the original preview image are obtained by performing AI identification on the original preview image.
In another possible implementation manner, according to a preset recommendation period, matching to obtain a target recommendation LUT according to scene parameters of an original preview image, including: and according to a preset recommendation period, matching the scene parameters of the original preview image in a corresponding relation table of the pre-configured scene parameters and the LUT template to obtain a target recommendation LUT.
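A minimal sketch of such a pre-configured correspondence table, assuming scene parameters of the form (scene, brightness) as described later in this document; the table contents and names here are invented for illustration:

```python
# Hypothetical scene-parameter -> LUT correspondence table (illustrative values).
SCENE_LUT_TABLE = {
    ("landscape", "bright"): "LUT1",
    ("portrait", "bright"): "LUT2",
    ("night", "dark"): "LUT3",
}

DEFAULT_LUT = "LUT_default"  # assumed fallback when no entry matches

def match_target_lut(scene, brightness):
    """Look up the target recommendation LUT for the recognized scene parameters."""
    return SCENE_LUT_TABLE.get((scene, brightness), DEFAULT_LUT)
```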
In another possible implementation manner, according to a preset recommendation period, matching to obtain a target recommendation LUT according to the scene parameters of the original preview image includes: according to the preset recommendation period, inputting the scene parameters of the original preview image into an AI model, and outputting the target recommendation LUT from the AI model. Wherein the AI model is a machine learning model.
In another possible implementation manner, the method further includes: and stopping executing the operation of obtaining the target recommendation LUT according to the scene parameter matching of the original preview image in the period of hiding the LUT recommendation control. And in the period of displaying the LUT recommendation control, executing the operation of obtaining the target recommendation LUT according to the scene parameter matching of the original preview image.
In another possible implementation manner, the scene parameters of the original preview image include: scene and brightness of the original preview image.
In a second aspect, the present application provides an electronic device comprising: one or more processors, memory, a display screen, a camera, a wireless communication module, and a mobile communication module. The memory, the display screen, the camera, the wireless communication module and the mobile communication module are coupled to the one or more processors, the memory for storing computer program code comprising computer instructions which, when executed by the one or more processors, cause the electronic device to perform the video recording method according to any one of the first aspect.
In a third aspect, the present application provides a computer-readable storage medium including instructions that, when executed on an electronic device, cause the electronic device to perform the video recording method according to any one of the first aspect.
It should be appreciated that the description of technical features, solutions, benefits, or similar language in this application does not imply that all of the features and advantages may be realized in any single embodiment. Rather, it is to be understood that the description of a feature or advantage is intended to include the specific features, aspects or advantages in at least one embodiment. Therefore, the descriptions of technical features, technical solutions or advantages in the present specification do not necessarily refer to the same embodiment. Furthermore, the technical features, technical solutions and advantages described in the present embodiments may also be combined in any suitable manner. One skilled in the relevant art will recognize that an embodiment may be practiced without one or more of the specific features, aspects, or advantages of a particular embodiment. In other instances, additional features and advantages may be recognized in certain embodiments that may not be present in all embodiments.
Drawings
Fig. 1 is a schematic diagram of an image processed by different LUT templates disclosed in an embodiment of the present application;
fig. 2a is a first schematic interface diagram of a mobile phone entering a movie mode disclosed in the embodiment of the present application;
fig. 2b is a schematic view of an interface of another mobile phone disclosed in the embodiment of the present application in the movie mode;
fig. 2c is a schematic view of a first interface in a scenario where a user manually selects an LUT template according to an embodiment of the present application;
fig. 3a is a schematic structural diagram of an electronic device disclosed in an embodiment of the present application;
fig. 3b is a block diagram of a software structure of an electronic device according to an embodiment of the present disclosure;
fig. 4 is a schematic flowchart of a video recording method according to an embodiment of the present application;
fig. 5a is a schematic view of a second interface of the mobile phone entering the movie mode disclosed in the embodiment of the present application;
FIG. 5b is a schematic diagram of an interface for opening an Artificial Intelligence (AI) setting item disclosed in an embodiment of the present application;
FIG. 5c is an interface diagram of a cancel LUT recommendation control disclosed in an embodiment of the present application;
FIG. 5d is a schematic interface diagram of a video recording process disclosed in an embodiment of the present application;
fig. 5e is a first schematic interface diagram of a hidden LUT recommendation control disclosed in the embodiment of the present application;
fig. 5f is a schematic interface diagram ii of a hidden LUT recommendation control disclosed in the embodiment of the present application;
fig. 5g is a third schematic interface diagram of a hidden LUT recommendation control disclosed in an embodiment of the present application;
fig. 5h is a fourth schematic interface diagram of a hidden LUT recommendation control disclosed in the embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application. In the description of the embodiments herein, "/" means "or" unless otherwise specified, for example, a/B may mean a or B; "and/or" herein is merely an association describing an associated object, and means that there may be three relationships, e.g., a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone.
In the following, the terms "first", "second" are used for descriptive purposes only and are not to be understood as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the embodiments of the present application, "a plurality" means two or more unless otherwise specified.
In the embodiments of the present application, words such as "exemplary" or "for example" are used to mean serving as an example, instance, or illustration. Any embodiment or design described herein as "exemplary" or "e.g.," is not necessarily to be construed as preferred or advantageous over other embodiments or designs. Rather, use of the word "exemplary" or "such as" is intended to present concepts related in a concrete fashion.
For ease of understanding, the embodiments of the present application describe herein the terms to which the embodiments of the present application relate:
1) User experience (UX): also referred to as the UX feature; refers to the user's experience while shooting with the electronic device.
2) Movie mode: refers to a mode in which the electronic device records video, i.e., one of the video recording modes. In the embodiment of the present application, the movie mode includes a 4K high-dynamic range (HDR) function and a color look-up table (LUT) function; when a user records a video in the movie mode, the recorded video can have a cinematic texture, making the picture more dimensional.
3) LUT: also referred to as an LUT file or LUT parameters, is a color conversion template, such as a red-green-blue (RGB) mapping table. An LUT can transform an actually sampled pixel gray value into another corresponding gray value through a certain transformation (such as thresholding, inversion, contrast adjustment, or a linear transformation), which highlights useful information in the image and enhances its optical contrast.
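As an illustration of the gray-value transformations just mentioned, a one-dimensional LUT can be precomputed as a 256-entry table. The inversion transform, for example (a sketch for illustration, not one of the patent's LUTs):

```python
# 256-entry 1-D LUT implementing the "inversion" transform: lut[v] is the
# output gray value for input gray value v.
inversion_lut = [255 - v for v in range(256)]

def apply_gray_lut(lut, pixels):
    """Map each sampled gray value through the LUT."""
    return [lut[p] for p in pixels]
```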
An image includes a number of pixels, each represented by an RGB value. The display screen of the electronic device displays the image according to the RGB value of each pixel in the image; that is, these RGB values dictate how the display lights up to blend the colors presented to the user.
The LUT is an RGB mapping table used to represent the correspondence between RGB values before and after adjustment. For example, please refer to table 1, which shows an example of a LUT.
TABLE 1
Original RGB value    Output RGB value
(14, 22, 24)          (6, 9, 4)
(61, 34, 67)          (66, 17, 47)
(94, 14, 171)         (117, 82, 187)
(241, 216, 222)       (255, 247, 243)
When the original RGB value is (14, 22, 24), the output RGB value is (6, 9, 4) through the mapping of the LUT shown in table 1. When the original RGB value is (61, 34, 67), the output RGB value is (66, 17, 47) through the mapping of the LUT shown in table 1. When the original RGB value is (94, 14, 171), the output RGB value is (117, 82, 187) through the mapping of the LUT shown in table 1. When the original RGB value is (241, 216, 222), the output RGB value is (255, 247, 243) through the mapping of the LUT shown in table 1.
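The Table 1 mapping can be expressed directly as a lookup. A real LUT covers the full RGB space (typically as a 3-D table with interpolation); a sketch holding only the four sample rows from Table 1 is:

```python
# The four sample rows of Table 1 as a Python dict: original RGB -> output RGB.
lut_table1 = {
    (14, 22, 24): (6, 9, 4),
    (61, 34, 67): (66, 17, 47),
    (94, 14, 171): (117, 82, 187),
    (241, 216, 222): (255, 247, 243),
}

def map_rgb(lut, rgb):
    # Identity for triples the sample table does not cover.
    return lut.get(rgb, rgb)
```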
It should be noted that, when different LUTs are used to process the same image, different styles of image effects can be obtained, and the image can be processed into different filter effects. For example, LUTs 1, LUT2, and LUT3 shown in fig. 1 are different color lookup tables, and can process an image into different filter effects. The image 101 shown in fig. 1 can be obtained by processing the original image 100 collected by the camera with the filter of the LUT 1. The image 102 shown in fig. 1 is obtained by processing the original image 100 captured by the camera using the filters of the LUT 2. The image 103 shown in fig. 1 can be obtained by processing the original image 100 collected by the camera with the filter of the LUT 3. As is clear from comparison of the images 101, 102, and 103 shown in fig. 1, the images 101, 102, and 103 are different in image effect or style.
4) Basic method for callback registration: 1. Define an interface, and declare the callback method in the interface. 2. Define a callback class that provides a registration method for the interface; on registration, the object is added to a callback object list, and after the registered event occurs, the callback methods of the objects in the list are invoked.
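The two steps above can be illustrated with a minimal sketch (class and method names are invented for illustration):

```python
class PreviewNotifier:
    """Keeps a list of registered callback objects and invokes each one's
    callback method when the registered event (a new preview frame) occurs."""

    def __init__(self):
        self._callbacks = []  # the callback object list

    def register(self, callback):
        # Step 2: the registration method adds the object to the list.
        self._callbacks.append(callback)

    def on_event(self, frame):
        # After the registered event occurs, call back every registered object.
        for cb in self._callbacks:
            cb.on_preview(frame)


class SceneRecognizer:
    """A callback object implementing the interface's callback method."""

    def __init__(self):
        self.frames = []

    def on_preview(self, frame):
        self.frames.append(frame)
```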
For clarity and conciseness of the following descriptions of the embodiments, a brief introduction of a processing scheme of a photographic effect is first given:
taking a mobile phone as an example, as shown in (1) in fig. 2a, when a user needs to record a video using the mobile phone, the user operates an icon 201 of a "camera" application in a main screen interface of the mobile phone, and the mobile phone displays an interface 202 shown in (2) in fig. 2 a. The interface 202 is a preview interface of a "photo" mode of the mobile phone, and the working mode control 2021 in the interface 202 includes: "take a picture" mode, "portrait" mode, "record" mode, "movie" mode, and "professional" mode. In response to the user selecting the "movie" mode 203 operation, the handset displays an interface 206 as shown in (1) in fig. 2 b. Interface 206 is a preview interface before the handset records in movie mode. In the interface 206, the handset displays a prompt message 205. The prompt message 205 shows "the movie mode is better for the landscape shooting effect", and is used to prompt the user that the mobile phone is in the landscape state. Then, when the user places the mobile phone in the landscape state, the mobile phone displays the interface 206 as shown in (2) in fig. 2 b. The interface 206 is a preview interface before the mobile phone records in the landscape state. The control 204 is a virtual shutter key, and a user can perform shooting by operating the virtual shutter key 204.
Also shown as interface 206 in fig. 2b (2), the interface 206 includes a 4K HDR control 207 and an LUT control 208. As shown in fig. 2c (1), in response to user manipulation of the LUT control 208, the handset displays an interface 206 as shown in fig. 2c (2). The interface 206 presents a LUT template column 209; the LUT template fields 209 include LUTs 1, 2, 3, and 8.
As shown in fig. 2c (3), in response to the user's operation of LUT2 in the LUT template field 209, the preview interface before recording shown in interface 206 is processed with the filter effect of LUT2. If the user is satisfied with the filter effect of LUT2, the user can record video with the LUT2 filter effect in the movie mode by operating the virtual shutter key 204.
According to the above processing scheme, when the user sets the shooting effect in the movie mode, the user mainly selects different LUT filters from the LUT template manually, previews their processing effects on the interface, selects a suitable LUT filter, and records video in the movie mode. If the user wants an LUT filter suited to the current recording scene, the user can only rely on his or her own experience to select one manually. This process is not convenient enough; for users who cannot use LUT filters skillfully, processing the shooting effect is even more difficult, and the user experience is poor.
To address the problems in the above technical solutions, the application provides a video recording method. The scene corresponding to the preview image is recognized through artificial intelligence (AI), a corresponding LUT template is matched according to the recognized scene, the preview image is then processed with the matched LUT template, and the processed preview image and an LUT recommendation control are displayed on the interface. The LUT recommendation control prompts the user with the LUT template that the electronic device recommends, so the user can choose to use the recommended LUT template during video recording without selecting an LUT template manually, which improves the user's experience of shooting-effect processing.
The video recording method provided by the embodiment of the application can be applied to electronic devices with cameras, such as a mobile phone, a tablet computer, a desktop computer, a laptop computer, a notebook computer, an ultra-mobile personal computer (UMPC), a handheld computer, a netbook, a personal digital assistant (PDA), a wearable electronic device, a smart watch, and the like.
The video recording method provided by the embodiment of the application can be applied to the electronic device shown in fig. 3 a. Fig. 3a is a schematic structural diagram of an electronic device. Wherein, the electronic equipment can include: the mobile communication device includes a processor 310, an external memory interface 320, an internal memory 321, a Universal Serial Bus (USB) interface 330, a charging management module 340, a power management module 341, a battery 342, an antenna 1, an antenna 2, a mobile communication module 350, a wireless communication module 360, an audio module 370, a speaker 370A, a receiver 370B, a microphone 370C, an earphone interface 370D, a sensor module 380, buttons 390, a motor 391, an indicator 392, a camera 393, a display 394, and a Subscriber Identity Module (SIM) card interface 395.
It is to be understood that the illustrated structure of the present embodiment does not constitute a specific limitation to the electronic device. In other embodiments, an electronic device may include more or fewer components than shown, or some components may be combined, some components may be split, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processor 310 may include one or more processing units, such as: the processor 310 may include an Application Processor (AP), a modem processor, a Graphics Processing Unit (GPU), an Image Signal Processor (ISP), a controller, a memory, a video codec, a Digital Signal Processor (DSP), a baseband processor, and/or a neural-Network Processing Unit (NPU), etc. The different processing units may be separate devices or may be integrated into one or more processors.
The controller may be a neural center and a command center of the electronic device. The controller can generate an operation control signal according to the instruction operation code and the timing signal to complete the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 310 for storing instructions and data. In some embodiments, the memory in the processor 310 is a cache. The memory may hold instructions or data that the processor 310 has just used or uses cyclically. If the processor 310 needs to reuse the instructions or data, it can call them directly from the memory, which avoids repeated accesses, reduces the waiting time of the processor 310, and improves system efficiency.
In some embodiments, processor 310 may include one or more interfaces. The interface may include an integrated circuit (I2C) interface, an integrated circuit built-in audio (I2S) interface, a Pulse Code Modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a Mobile Industry Processor Interface (MIPI), a general-purpose input/output (GPIO) interface, a Subscriber Identity Module (SIM) interface, and/or a Universal Serial Bus (USB) interface, etc. For example, in the embodiments of the present application, a processor may be used to execute any of the video recording methods proposed in the present application.
It should be understood that the interface connection relationship between the modules illustrated in this embodiment is only an exemplary illustration, and does not constitute a limitation on the structure of the electronic device. In other embodiments, the electronic device may also adopt different interface connection manners or a combination of multiple interface connection manners in the above embodiments.
The charging management module 340 is configured to receive charging input from a charger. The charger may be a wireless charger or a wired charger. In some wired charging embodiments, the charging management module 340 may receive charging input from a wired charger via the USB interface 330. In some wireless charging embodiments, the charging management module 340 may receive a wireless charging input through a wireless charging coil of the electronic device. The charging management module 340 may also supply power to the electronic device through the power management module 341 while charging the battery 342.
The power management module 341 is configured to connect the battery 342, the charging management module 340 and the processor 310. The power management module 341 receives input from the battery 342 and/or the charge management module 340 and provides power to the processor 310, the internal memory 321, the external memory, the display 394, the camera 393, and the wireless communication module 360. The power management module 341 may also be configured to monitor parameters such as battery capacity, battery cycle count, and battery state of health (leakage, impedance). In other embodiments, the power management module 341 may also be disposed in the processor 310. In other embodiments, the power management module 341 and the charging management module 340 may be disposed in the same device.
The wireless communication function of the electronic device may be implemented by the antenna 1, the antenna 2, the mobile communication module 350, the wireless communication module 360, the modem processor, the baseband processor, and the like.
The electronic device implements display functions via the GPU, the display 394, and the application processor, among other things. The GPU is an image processing microprocessor coupled to a display 394 and an application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 310 may include one or more GPUs that execute program instructions to generate or alter display information.
The display screen 394 is used to display images, video, and the like. The display screen 394 includes a display panel. The display panel may adopt a Liquid Crystal Display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (active-matrix organic light-emitting diode, AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), and the like.
The electronic device may implement the shooting function through the ISP, camera 393, video codec, GPU, display 394, application processor, etc.
The ISP is used to process the data fed back by the camera 393. For example, when a photo is taken, the shutter is opened and light is transmitted through the lens to the camera's photosensitive element, which converts the optical signal into an electrical signal and transmits it to the ISP to be processed and converted into an image visible to the naked eye. The ISP can also algorithmically optimize the noise, brightness, and skin color of the image, as well as parameters such as the exposure and color temperature of the shooting scene. In some embodiments, the ISP may be located in camera 393.
Camera 393 is used to capture still images or video. An object generates an optical image through the lens, which is projected onto the photosensitive element. The photosensitive element may be a charge-coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, which is passed to the ISP to be converted into a digital image signal. The ISP outputs the digital image signal to the DSP for processing, and the DSP converts it into an image signal in a standard format such as RGB or YUV. In some embodiments, the electronic device may include 1 or N cameras 393, N being a positive integer greater than 1.
The digital signal processor is used for processing digital signals; besides digital image signals, it can process other digital signals. For example, when the electronic device selects a frequency point, the digital signal processor is used to perform a Fourier transform or the like on the frequency point energy.
Video codecs are used to compress or decompress digital video. The electronic device may support one or more video codecs. In this way, the electronic device can play or record video in a variety of encoding formats, such as: moving Picture Experts Group (MPEG) 1, MPEG2, MPEG3, MPEG4, and the like.
The NPU is a neural-network (NN) computing processor. By drawing on the structure of biological neural networks, for example the transfer mode between neurons in the human brain, it processes input information quickly and is also capable of continuous self-learning. The NPU enables applications such as intelligent cognition on the electronic device, for example image recognition, face recognition, speech recognition, and text understanding.
The electronic device may implement audio functions through the audio module 370, the speaker 370A, the receiver 370B, the microphone 370C, the earphone interface 370D, and the application processor. Such as music playing, recording, etc.
The audio module 370 is used to convert digital audio information into an analog audio signal for output, and also to convert an analog audio input into a digital audio signal. The audio module 370 may also be used to encode and decode audio signals. In some embodiments, the audio module 370 may be disposed in the processor 310, or some functional modules of the audio module 370 may be disposed in the processor 310. The speaker 370A, also called a "horn", is used to convert an audio electrical signal into a sound signal. The receiver 370B, also called an "earpiece", is used to convert an audio electrical signal into a sound signal. The microphone 370C, also called a "mic", is used to convert sound signals into electrical signals.
The headphone interface 370D is used to connect wired headphones. The headphone interface 370D may be the USB interface 330, or may be a 3.5 mm open mobile terminal platform (OMTP) standard interface or a Cellular Telecommunications Industry Association of the USA (CTIA) standard interface.
The external memory interface 320 may be used to connect an external memory card, such as a Micro SD card, to extend the storage capability of the electronic device. The external memory card communicates with the processor 310 through the external memory interface 320 to implement a data storage function. For example, files such as audio, video, etc. are saved in an external memory card.
The internal memory 321 may be used to store computer-executable program code, which includes instructions. The processor 310 executes various functional applications of the electronic device and data processing by executing instructions stored in the internal memory 321. For example, in the embodiment of the present application, the processor 310 may execute instructions stored in the internal memory 321, and the internal memory 321 may include a program storage area and a data storage area.
The program storage area may store an operating system and application programs required by at least one function (such as a sound playing function or an image playing function). The data storage area may store data created during use of the electronic device (such as audio data and a phone book). In addition, the internal memory 321 may include a high-speed random access memory and may further include a nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, or a universal flash storage (UFS).
Keys 390 include a power-on key, a volume key, etc. The keys 390 may be mechanical keys. Or may be touch keys. The motor 391 may generate a vibration cue. The motor 391 may be used for both incoming call vibration prompting and touch vibration feedback. Indicator 392 may be an indicator light that may be used to indicate a state of charge, a change in charge, or a message, missed call, notification, etc. The SIM card interface 395 is for connecting a SIM card. The SIM card can be brought into and out of contact with the electronic device by being inserted into and pulled out of the SIM card interface 395. The electronic equipment can support 1 or N SIM card interfaces, and N is a positive integer greater than 1. The SIM card interface 395 may support a Nano SIM card, a Micro SIM card, a SIM card, etc.
The methods in the following embodiments may be implemented in an electronic device having the above hardware structure. In the following embodiments, the electronic device is taken as an example of a mobile phone, and technical solutions provided in the embodiments of the present application are specifically described.
In addition, an operating system runs on the above components, such as the HarmonyOS (Hongmeng) system, the iOS operating system, the open-source Android operating system, or the Windows operating system. Applications can be installed and run on the operating system.
Fig. 3b is a block diagram of a software structure of the electronic device according to the embodiment of the present application.
It will be appreciated that the hierarchical architecture divides the software into several layers, each layer having a clear role and division of labor. The layers communicate with each other through a software interface. In some embodiments, the Android system may include an application layer (APP), a framework layer (FWK), a Hardware Abstraction Layer (HAL), and a kernel layer (kernel). In some embodiments, the handset also includes hardware (e.g., a display screen).
Illustratively, the application layer may include a user interface (UI) layer and a logic layer. As shown in fig. 3b, the UI layer includes the camera, the gallery, and other applications. The camera includes an LUT control, a 4K HDR control, an AI setting item, a working mode control, a camera preview interface, and an LUT recommendation control. The logic layer includes an LUT template module, an AI recommendation module, an encoding module, an LUT logic control module, an HDR module, a configuration library, and the like.
The hardware abstraction layer is an interface layer located between the kernel layer and the hardware, and can be used for abstracting the hardware. Illustratively, as shown in FIG. 3b, the hardware abstraction layer includes a camera interface.
The kernel layer provides underlying drivers for the various hardware of the handset. Illustratively, as shown in FIG. 4, the kernel layer includes a camera driver module.
The framework layer provides an application programming interface (API) and programming services for the application programs of the application layer. The framework layer includes some predefined functions and provides programming services that the application layer calls through APIs. It should be noted that, in the embodiment of the present application, the programming service may be, for example, a camera service. In some embodiments, as shown in FIG. 4, the framework layer includes a camera service framework and a media framework, where the media framework includes an encoder.
In the embodiment of the application, the AI recommendation module is configured to receive the preview image reported by the HAL layer and the scene parameters of the preview image. In some embodiments, the HAL layer performs AI scene recognition on the preview image using a scene recognition algorithm and an image algorithm to obtain the scene parameters of the preview image. When the user selects the "movie" mode through the working mode control and starts the AI recognition function through the AI setting item, as shown in fig. 3b, the AI recommendation module receives a start instruction for the AI model and calls the camera interface to acquire the scene parameters of the preview image from the camera driver module. The scene parameters of the preview image may include the scene and brightness of the preview image. The camera driver module calls the camera interface to send the scene parameters of the preview image to the AI recommendation module, and the AI recommendation module matches the scene parameters of the preview image against LUT templates to find the LUT template corresponding to the preview image, which it uses as the target recommendation LUT.
In the embodiment of the application, the LUT logic control module receives the identifier of the target recommendation LUT and calls the camera interface of the hardware abstraction layer to send it to the camera driver module. The camera driver module processes the preview image into a preview image with the effect of the target recommendation LUT and calls the camera interface of the hardware abstraction layer, so that the display screen displays the processed preview image and the LUT recommendation control on the camera preview interface. The LUT recommendation control is used to prompt the user with the LUT template recommended for use.
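The LUT processing step described above, turning the raw preview image into one with the target LUT's effect, can be sketched as follows. This is a minimal illustration assuming a simple per-channel 1D LUT applied in Python; the actual processing in the embodiment happens in the camera driver module and may use a different LUT representation (e.g., a 3D color cube):

```python
def apply_lut(pixels, lut):
    """Apply a per-channel 1D LUT (256 entries) to a list of RGB pixels."""
    return [tuple(lut[c] for c in px) for px in pixels]

# Hypothetical "brighten" LUT: lift every value by 30, clamped to 255.
brighten_lut = [min(v + 30, 255) for v in range(256)]

# A tiny two-pixel "preview frame" stands in for the real image buffer.
frame = [(0, 100, 240), (255, 0, 10)]
print(apply_lut(frame, brighten_lut))  # [(30, 130, 255), (255, 30, 40)]
```

The same lookup generalizes to any LUT template: each template is just a different table, so switching the target recommendation LUT only swaps the table, not the processing code.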
In some embodiments, in response to a user's cancel operation on the LUT recommendation control, the LUT logic control module controls the LUT recommendation control not to be displayed on the camera preview interface, i.e., hides the LUT recommendation control, and the AI recommendation module stops matching LUT templates against the scene parameters of the preview image.
In some embodiments, the LUT logic control module controls the display and hiding of LUT recommendation controls and whether to make LUT template recommendations to the user according to LUT recommendation rules.
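The show/hide behavior described in the two paragraphs above can be sketched as follows. The class, method names, and rule state are hypothetical illustrations of the described behavior, not the actual module implementation:

```python
class LutLogicControl:
    """Illustrative sketch of the LUT logic control module's display rules."""

    def __init__(self):
        self.recommendation_visible = False
        self.recommendation_enabled = True  # assumed LUT recommendation rule state

    def on_target_lut(self, lut_id):
        # A new target recommendation LUT arrived from the AI recommendation module.
        if self.recommendation_enabled:
            self.recommendation_visible = True
            return f"show LUT recommendation control for {lut_id}"
        return "recommendation suppressed by rules"

    def on_user_cancel(self):
        # User dismissed the control: hide it and stop further recommendations.
        self.recommendation_visible = False
        self.recommendation_enabled = False
        return "hide LUT recommendation control"
```

For example, after a cancel operation, a later `on_target_lut` call no longer shows the control, matching the "stop matching LUT templates" behavior described above.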
Specifically, the execution process and principle of each module shown in fig. 3b may refer to the following relevant parts of the video recording method according to the embodiment of the present application, and are not described herein again.
The following describes a video recording method proposed by an embodiment of the present application with specific reference to fig. 4.
Fig. 4 shows a video recording method, which relates to a process in which an AI recommendation module matches an LUT template corresponding to a preview image according to scene parameters of the preview image by using an AI recommendation algorithm, and recommends the LUT template corresponding to the preview image as a target recommendation LUT to a user.
To make the description of the embodiments of the present application clearer, first, several examples are given in which the AI recommendation module identifies the target recommendation LUT by executing an AI recommendation algorithm.
In some embodiments, the matching relationship between scenes and LUT templates may be preconfigured in the AI recommendation module. The AI recommendation module then determines the scene of the preview image from its scene parameters, looks up the LUT template matching that scene in the preconfigured matching relationship between scenes and LUT templates, and uses the found LUT template as the target recommendation LUT.
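A minimal sketch of such a preconfigured lookup, assuming a hypothetical scene-to-LUT mapping and a dictionary-based match (the actual matching relationship and fallback behavior are defined by the embodiment's configuration, not shown in the text):

```python
# Hypothetical preconfigured matching relationship between scenes and LUT templates.
SCENE_TO_LUT = {
    "portrait": "LUT1",
    "food": "LUT2",
    "landscape": "LUT3",
}

def target_recommendation_lut(scene_params, default="LUT1"):
    """Look up the LUT template matching the scene of the preview image."""
    return SCENE_TO_LUT.get(scene_params.get("scene"), default)

print(target_recommendation_lut({"scene": "food", "brightness": 175}))  # LUT2
```

An unrecognized scene falls back to the assumed default template rather than producing no recommendation.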
The preview image that is not processed by the LUT in the embodiment of the present application may be referred to as an original preview image. The scene parameters of the preview image can be considered as parameters obtained by performing AI identification on the original preview image, and therefore, the scene parameters of the preview image mentioned in the embodiments of the present application can be referred to as the scene parameters of the original preview image.
In some embodiments, the scene parameters of the preview image include the scene and brightness. Illustratively, the AI recommendation module may identify the preview image according to a first preset tag and determine the tone corresponding to the preview image. For example, when the brightness value (also referred to as a gray value) of the preview image is 32, i.e., within the grayscale interval 0-33, the AI recommendation module identifies the tone corresponding to the preview image as black. When the brightness value of the preview image is 175, i.e., within the grayscale interval 170-224, the AI recommendation module identifies the tone corresponding to the preview image as highlight.
In some embodiments, the AI recommendation module may further identify the brightness of the preview image according to a second preset tag. The second preset tag is used to represent the exposure of the image; illustratively, the exposure is characterized by the proportions of black and highlight in the image. For example, when the black proportion is less than or equal to 5%, the exposure of the preview image is too high (i.e., overexposed). When the black proportion is greater than 5% and the highlight proportion is greater than or equal to 10%, the exposure of the preview image is slightly high (i.e., slightly bright). When the black proportion is greater than 5% and the highlight proportion is less than 10%, the exposure of the preview image is normal (i.e., balanced).
For example, when the AI recommendation module identifies the tone corresponding to the preview image as highlight, then if the black proportion of the preview image is less than or equal to 5%, the preview image is overexposed, i.e., the AI recommendation module identifies the preview image as highlight and overexposed. If the black proportion of the preview image is greater than 5% and the highlight proportion is greater than or equal to 10%, the preview image is slightly bright, i.e., the AI recommendation module identifies the preview image as highlight and slightly bright.
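The threshold rules in the preceding paragraphs can be expressed directly in code. The following sketch uses the stated grayscale intervals and black/highlight proportions; the "midtone" label for brightness values outside the two stated intervals is an assumption, since the text does not name that case:

```python
def classify_tone(brightness):
    """Map a brightness (gray) value to a tone per the first preset tag."""
    if 0 <= brightness <= 33:
        return "black"          # grayscale interval 0-33
    if 170 <= brightness <= 224:
        return "highlight"      # grayscale interval 170-224
    return "midtone"            # assumed label for the remaining intervals

def classify_exposure(black_ratio, highlight_ratio):
    """Apply the black/highlight proportion rules of the second preset tag."""
    if black_ratio <= 0.05:
        return "overexposed"
    if highlight_ratio >= 0.10:
        return "slightly bright"
    return "balanced"

# The worked example from the text: brightness 175, black 20%, highlight 15%.
print(classify_tone(175), classify_exposure(0.20, 0.15))  # highlight slightly bright
```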
In some embodiments, the AI recommendation module may further identify the scene of the preview image according to a third preset tag. The third preset tag is used to indicate the scene of the preview image. Illustratively, the scenes include portrait, food, and the like. A portrait means that the preview image includes the five sense organs of a person, or that the five sense organs account for more than 50% of the preview image. Food means that the preview image includes food (e.g., coffee, bread, etc.).
For example, the AI recommendation module may process the preview image by using an image processing technique, and if it is recognized that the preview image includes five sense organs of a person, it indicates that a scene of the preview image is a portrait. If the preview image is recognized to include food, the scene of the preview image is indicated as food.
In this embodiment of the application, the AI recommendation module may combine the first preset tag, the second preset tag, and the third preset tag to identify the scene and brightness of the preview image, that is, to identify the scene parameters of the preview image, so as to match the LUT template corresponding to the preview image. Taking as an example that the AI recommendation module identifies the tone of the preview image as highlight according to the first preset tag, Table 2 below gives the correspondence between the second preset tag and the LUT template and between the third preset tag and the LUT template.
TABLE 2
(Table 2 is reproduced as an image in the original publication.)
It should be noted that the correspondence between the second preset tag, the third preset tag, and the LUT template shown in Table 2 is only an example of the present application and does not limit the embodiments of the present application.
It should be understood that, in this embodiment, the LUT template matched by the AI recommendation module as corresponding to the preview image may include only one LUT, or may include two or more LUTs, which is not limited in this embodiment of the application. In addition, when 4K HDR is in the off state, the LUT template includes any of LUT1, LUT2, LUT3, ..., LUT8; when 4K HDR is in the on state, the LUT template includes any of LUT9, LUT10, LUT11, and so on.
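The dependence of the LUT candidate set on the 4K HDR switch can be sketched as follows. The HDR set below lists only the LUTs named in the text, since the original elides the full set:

```python
# LUT candidates per 4K HDR state, following the split described above.
SDR_LUTS = [f"LUT{i}" for i in range(1, 9)]  # LUT1 ... LUT8 (4K HDR off)
HDR_LUTS = ["LUT9", "LUT10", "LUT11"]        # named in the text; full set elided

def candidate_luts(hdr_4k_on):
    """Pick the LUT template candidate set for the current 4K HDR state."""
    return HDR_LUTS if hdr_4k_on else SDR_LUTS
```

The target recommendation LUT would then be chosen only from the set matching the current 4K HDR state.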
In other embodiments, the AI recommendation module may identify the LUT that matches the scene of the preview image through the AI model. For example, the scene parameters of the preview image may be input into an AI model, which outputs an LUT corresponding to the preview image, i.e., a target recommendation LUT. The AI model may be any machine model for recognizing a preview image. For example, the AI model may be any of the following neural network models: VGG-net, Resnet, and Lenet.
Fig. 4 is a first flowchart of a video recording method provided in an embodiment of the present application. The video recording method shown in fig. 4 may be applied to the aforementioned electronic device; for convenience of description, the following takes a mobile phone as the electronic device. The video recording method shown in fig. 4 specifically includes:
S401, in response to the operation of the user starting the first application, displaying a camera preview interface in a default working mode on the display screen.
The camera preview interface in the default working mode includes a preview image. The default working mode is the working mode entered by default after the first application is started. The first application is an application having a shooting function, such as a camera application. The default working mode of the first application may be set arbitrarily; for example, the "photographing" mode may be set as the default working mode. Specifically, the user starts the first application; in response to this operation, the first application enters the default working mode, displays the camera preview interface of the default working mode on the display screen, starts and calls the camera to acquire a preview image, and displays the preview image on that camera preview interface.
In some embodiments, step S401 may be performed in such a way that, in response to the user clicking on an icon of the "camera" application, the display screen displays a camera preview interface in the default operating mode. The process of displaying the camera preview interface in the "photographing" mode on the display screen by the mobile phone in response to the user operating the icon of the "camera" application may refer to the foregoing description of (1) in fig. 2a, and is not described herein again.
It should be noted that, there are many ways to respond to the operation of the user to start the first application, for example, the operation may be to respond to the user clicking an icon of the first application, or, for example, the operation may be to start the first application in response to the user sliding upwards, and the way to respond to the operation of the user to start the first application includes, but is not limited to, what is proposed in the embodiments of the present application.
S402, in response to the operation of the user entering the movie mode, displaying a camera preview interface in the movie mode on the display screen.
The movie mode is a video recording mode; for related descriptions of the movie mode, refer to the relevant content mentioned above, which is not repeated here. The camera preview interface in the movie mode includes a preview image. In the camera preview interface in the movie mode, the displayed preview image has the processing effect of the movie mode, that is, the preview image has the texture of a movie and the picture is more stereoscopic. In other embodiments, the preview images presented by the camera preview interfaces of different working modes also have different processing effects.
In some embodiments, the camera preview interface displayed on the display screen in step S401 further includes an operation mode control. The working mode control comprises a plurality of working modes such as a movie mode and the like, and the mobile phone can respond to the operation of entering the movie mode by a user by operating the movie mode in the working mode control, and the display screen displays a camera preview interface in the movie mode. Specifically, reference may be made to the foregoing description of (2) in fig. 2a, and details are not repeated here.
In other embodiments, the display screen may directly display the camera preview interface in the movie mode without the user performing an operation to enter the movie mode. For example, when the default working mode in step S401 is the movie mode, the display screen may display the camera preview interface in the movie mode in response to the operation of the user starting the first application, i.e., the movie mode is entered by default when the first application is started. For example, as shown in (1) of fig. 5a, when the user needs to record a video using the mobile phone, the user operates the icon 501 of the "camera" application on the home screen of the mobile phone, and the mobile phone displays an interface 502 as shown in (2) of fig. 5a, where the interface 502 is the preview interface of the mobile phone's movie mode. That is, operating the icon 501 of the "camera" application enters the movie mode by default and displays the camera preview interface in the movie mode.
As can be seen from the foregoing description of steps S401 to S402, there are many ways to trigger the display screen to display the camera preview interface in the movie mode, including but not limited to what is proposed in the embodiments of the present application.
It should be noted that there are many ways for the display screen to display the camera preview interface in the movie mode, for example, by responding to an operation of the user entering the movie mode and registering a preview callback, the display screen is enabled to display the camera preview interface in the movie mode. The display screen displays different specific implementation modes of the camera preview interface in the movie mode, and the implementation of the embodiment of the application is not influenced.
S403, in response to the operation of the user opening the AI setting item, the AI recommendation module registers a preview callback with the HAL.
When the AI setting item is turned on, the AI recognition function of the first application is turned on. In steps S401 to S403, the user performs an operation of entering a movie mode and opening an AI setting item on the first application, at this time, the first application enters the movie mode and opens an AI identification function, and then triggers the AI recommendation module to register a preview callback to the HAL, where the registration of the preview callback by the AI recommendation module may be understood as a callback for registering a preview image and a scene parameter of the preview image. Since the first application in step S402 has already entered the movie mode and the display screen has already displayed the camera preview interface in the movie mode, that is, the callback of the preview image has already been registered, the registered preview callback in step S403 may also be understood as the callback of the scene parameter of the registered preview image. In summary, in steps S401 to S403, the first application is in the movie mode, and the AI recognition function is turned on, and the AI recommendation module obtains the preview image and the scene parameter of the preview image by registering a preview callback. The preview image acquired by the AI recommendation module is subjected to effect processing in a movie mode.
For example, the AI identification function may be opened by default, or the AI identification function may be opened in response to an operation of opening an AI setting item by a user, which is not limited in this embodiment of the application.
The AI setting item is used to control whether to start the AI model recognition preview image, i.e., whether to turn on the AI recognition function. And when the AI setting item is opened, starting an AI model to identify the preview image, and identifying to obtain the scene parameters of the preview image. Specifically, when the AI setting item is opened, the AI recommendation module registers a preview callback to an interface on the HAL, so that the AI recommendation module can obtain the preview image and the scene parameters of the preview image by calling the interface on the HAL. For example, the AI recommendation module may register a preview callback with a camera interface on the HAL, and after registering the preview callback, the AI recommendation module may call the camera interface to obtain a preview image and scene parameters of the preview image from the camera driver module. And the camera driving module calls a camera interface to send the scene parameters of the preview image to the AI recommending module. It should be noted that, since step S401 and step S402 are executed, when the first application is in the movie mode, after the AI recommendation module registers the preview callback, the preview image acquired by calling the interface on the HAL is the preview image in the movie mode. When the AI setting item is closed, the AI recommendation module will not register a preview callback to the HAL, i.e. the AI recommendation module will not have the function of acquiring the scene parameters of the preview image. For the related technology of the registration callback, reference may be made to the foregoing description of the method of registering the callback, which is not described herein again.
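The register-a-preview-callback pattern described above can be sketched as follows. The Python classes and method names are hypothetical stand-ins for the HAL camera interface, the camera driver module, and the AI recommendation module, not the actual interfaces:

```python
class Hal:
    """Stand-in for the HAL camera interface that holds preview callbacks."""

    def __init__(self):
        self._callbacks = []

    def register_preview_callback(self, cb):
        # Done when the AI setting item is opened.
        self._callbacks.append(cb)

    def deliver_frame(self, preview_image, scene_params):
        # The camera driver module reports each frame plus its scene parameters.
        for cb in self._callbacks:
            cb(preview_image, scene_params)

class AiRecommendationModule:
    """Stand-in for the AI recommendation module's callback side."""

    def __init__(self):
        self.last_scene_params = None

    def on_preview(self, preview_image, scene_params):
        self.last_scene_params = scene_params

hal = Hal()
ai = AiRecommendationModule()
hal.register_preview_callback(ai.on_preview)
hal.deliver_frame("frame-0", {"scene": "portrait", "brightness": 120})
print(ai.last_scene_params)  # {'scene': 'portrait', 'brightness': 120}
```

If the AI setting item is closed, the callback is simply never registered, which matches the description that the module then has no way to acquire scene parameters.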
The AI model may be any machine model for recognizing a preview image. For example, the AI model may be any of the following neural network models: VGG-net, Resnet, and Lenet.
In some embodiments, the scene parameters of the preview image may include the scene and brightness of the preview image. In other embodiments, the scene may be classified as a person scene, a travel scene, a food scene, a landscape scene, a text scene, a pet scene, a still-life scene, or the like. The HAL can identify and calculate the scene parameters of the preview image through the scene recognition algorithm and image algorithm in the AI model.
In some embodiments, an example scenario in which the user turns on the AI setting item is as follows: as shown in (1) of fig. 5b, the interface 502 includes a 4K HDR control 503, an LUT control 504, and a settings item 505. In some embodiments, as shown in (1) in fig. 5b, in response to the user's operation on the settings item 505, the mobile phone displays a setting interface 506 as shown in (2) in fig. 5b, where the setting interface 506 includes a "photo scale" setting item, a "voice-controlled photographing" setting item, a "smiling face snapshot" setting item, a "video resolution" setting item, a "video frame rate" setting item, a "movie HDR10" setting item, a "high-efficiency video format" setting item, an "AI movie hue" setting item 507, and the like. In response to the user's start operation on the "AI movie hue" setting item 507, the mobile phone displays the setting interface 506 as shown in (3) in fig. 5b, in which the "AI movie hue" setting item is turned on, i.e., the mobile phone starts the AI model to recognize the preview image.
In other embodiments, the AI setting item is in a default on state, i.e., the AI recommendation module registers the preview callback with the HAL automatically rather than in response to a user operation of turning on the AI setting item. For example, the first application may turn on the AI setting item by default when entering the movie mode, so the AI recommendation module automatically triggers the registration of the preview callback with the HAL in the movie mode. In other words, the AI recommendation module may also register a preview callback with the HAL in response to the user entering the movie mode. Therefore, there are many triggering conditions for the AI recommendation module to register the preview callback with the HAL, and the triggering is not limited to the user's operation of opening the AI setting item.
In other embodiments, in response to the user's operation of opening the AI setting item, when the scene parameters of the received preview image change, the AI recommendation module further controls the display screen to display a scene-recognition animation on the camera preview interface in the movie mode. The animation prompts the user that the scene of the preview image is currently being recognized. For example, as shown in (1) of fig. 5c, circles of different sizes and different transparencies are shown in the interface 502 to prompt the user that the scene of the preview image is currently being recognized.
As can be seen from the foregoing, after steps S401 to S403 are executed, the first application is in the movie mode, the AI recognition function is turned on, and the AI recommendation module can acquire the preview image and the scene parameters of the preview image, where the preview image has been subjected to the effect processing of the movie mode.
S404, the AI recommendation module matches the scene parameters of the preview image according to a preset recommendation period to obtain an LUT template corresponding to the preview image.
The value of the recommendation period may be set empirically, for example, to 10 s, that is, every 10 s the LUT template corresponding to the preview image is obtained by matching according to the scene parameters of the preview image. For convenience of description, the LUT template that the AI recommendation module obtains by matching is collectively referred to as the target recommendation LUT. The target recommendation LUT is the LUT template, recommended by the AI recommendation module, that matches the scene of the preview image.
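At its core, the per-period matching of step S404 is a mapping from scene parameters to an LUT template. The sketch below illustrates this with a static table; the table contents and LUT names are invented for the example, whereas the real mapping is produced by the AI recommendation algorithm.

```python
# Hypothetical scene -> LUT template table for illustration only.
SCENE_TO_LUT = {
    "character": "warm light",
    "food": "delicious",
    "landscape": "fresh",
}
DEFAULT_LUT = "standard"

def match_target_lut(scene_params):
    """Return the target recommendation LUT for the given scene parameters.

    Falls back to a default LUT when the scene is not in the table.
    """
    return SCENE_TO_LUT.get(scene_params["scene"], DEFAULT_LUT)
```

Under this sketch, each recommendation period simply calls `match_target_lut` on the most recent scene parameters.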
For the process of identifying the target recommendation LUT by the AI recommendation module using the AI recommendation algorithm, reference may be made to the foregoing related description, which is not repeated herein.
When the AI recognition function is turned on and the mobile phone is in the movie mode, the AI recommendation module automatically triggers the execution of step S404. The AI recommendation module may execute step S404 by, for each recommendation period, matching the scene parameters of the preview image received in that recommendation period to obtain the LUT template corresponding to the preview image.
As can be seen from the foregoing description of step S403, since the AI recommendation module registers the preview callback on the HAL, the HAL reports the scene parameters of the preview image to the AI recommendation module. The AI recommendation module can therefore acquire the scene parameters of the preview image in each recommendation period, and can further obtain the LUT template corresponding to the preview image by matching according to those scene parameters.
In some embodiments, the HAL may send the scene parameters of the preview image to the AI recommendation module, and the AI recommendation module matches the most recently received scene parameters once per preset recommendation period to obtain the LUT template corresponding to the preview image. In other embodiments, the HAL may send the scene parameters of the preview image to the AI recommendation module according to the preset recommendation period, and each time the AI recommendation module receives scene parameters of the preview image, it matches them to obtain the LUT template corresponding to the preview image.
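The first variant above, where the AI recommendation module keeps only the latest scene parameters and matches once per period, can be sketched as follows. The period is driven by explicit `tick()` calls rather than a real 10 s timer, and all names are illustrative.

```python
class PeriodicRecommender:
    """Matches the most recently received scene parameters once per
    recommendation period (each tick() stands for one period elapsing)."""
    def __init__(self, match_fn):
        self._match = match_fn          # e.g. the scene -> LUT matching function
        self.latest_params = None
        self.recommended = []           # target recommendation LUTs, one per period

    def on_scene_params(self, params):
        # Called whenever the HAL reports scene parameters; older values
        # within the same period are simply overwritten.
        self.latest_params = params

    def tick(self):
        # Called once per recommendation period: match on the latest parameters.
        if self.latest_params is not None:
            self.recommended.append(self._match(self.latest_params))
```

Intermediate parameter reports within a period are discarded, so at most one target recommendation LUT is produced per period, as described.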
S405, the AI recommendation module sends the identifier of the target recommendation LUT to the LUT logic control module.
The AI recommendation module sends the identifier of the target recommendation LUT obtained in step S404 to the LUT logic control module, that is, the identifier of the target recommendation LUT obtained in each recommendation period is sent to the LUT logic control module.
S406, the LUT logic control module sends the identification of the target recommendation LUT to the HAL.
Wherein the identifier of the target recommendation LUT is a unique identifier specific to the target recommendation LUT. For example, the identification of the target recommendation LUT may be a name of the target recommendation LUT, or a label of the target recommendation LUT, or the like.
In some embodiments, one implementation of step S406 may be that the LUT logic control module calls a camera interface on the HAL, passing the identification of the target recommendation LUT into the camera interface.
S407, HAL returns the processed preview image to the LUT logic control module.
The processed preview image is a preview image processed by the target recommendation LUT. After receiving the identifier of the target recommendation LUT issued by the LUT logic control module, the HAL processes the preview image according to the identifier of the target recommendation LUT to obtain the processed preview image.
The preview image processed by the target recommendation LUT in the embodiment of the present application may be referred to as a first preview image.
In some embodiments, after the camera interface on the HAL receives the identifier of the target recommendation LUT issued in step S406, the camera interface sends the identifier of the target recommendation LUT to the camera driver module; the camera driver module processes the preview image using the target recommendation LUT to obtain the processed preview image, and then returns the processed preview image to the LUT logic control module through the HAL. For example, if the target recommendation LUT is LUT2, the camera driver module processes the preview image into an image with the filter effect of LUT 2. For the related art of processing the preview image using the LUT, reference may be made to the foregoing description of the LUT, which is not described herein again.
S408, the LUT logic control module controls the display screen to display the processed preview image and an LUT recommendation control on the camera preview interface in the movie mode.
The LUT recommendation control is used to prompt the target recommendation LUT used by the processed preview image. Specifically, after receiving the processed preview image returned by the HAL, the LUT logic control module controls the display screen to display the processed preview image and the LUT recommendation control.
In some embodiments, one way to execute step S408 may be that the LUT logic control module outputs the processed preview image and the related display data of the target recommendation LUT to a display screen, and the display screen further displays the processed preview image and the LUT recommendation control on the camera preview interface in the movie mode according to the processed preview image and the related display data of the target recommendation LUT.
In some embodiments, as shown in (1) of fig. 5c, while the AI recommendation module identifies the scene of the preview image, a plurality of circles with different sizes and different transparencies are displayed on the interface 502 to prompt the user that the scene of the preview image is currently being recognized. The AI recommendation module acquires the scene parameters of the preview image through the HAL and matches an LUT whose name is "warm light", that is, the name of the target recommendation LUT is "warm light". Then, as shown in (2) of fig. 5c, the camera preview interface 502 in the movie mode displays the preview image processed by the "warm light" LUT and displays an LUT capsule 508 prompting "warm light". Illustratively, the name of the LUT may be any other suitable name besides "warm light"; the embodiment of the present application does not limit the naming of the LUT. Similarly, the LUT recommendation control may take forms other than the capsule form shown by the LUT capsule 508 in fig. 5c, and the animation for identifying the scene of the preview image may take forms other than that shown in (1) in fig. 5 c. Differences in the form of the scene-recognition animation or of the LUT recommendation control do not affect the implementation of the embodiments of the present application.
In the embodiment of the present application, the target recommendation LUT is obtained by matching according to the scene parameters of the current preview image, and the processed preview image and the LUT recommendation control are displayed on the camera preview interface in the movie mode. The LUT recommendation control prompts the user to use the target recommendation LUT, and the processed preview image shows the effect of processing by the target recommendation LUT for the user's reference.
As can be seen from the foregoing description, steps S404 to S408 are processes of recommending the user to use the target recommendation LUT when the mobile phone is in the movie mode and the AI recognition function is in the on state. Steps S404 to S408 are performed according to a preset recommended cycle.
S409, in response to the video-recording operation started by the user, the LUT logic control module controls the display screen to hide the LUT recommendation control and controls the AI recommendation module to suspend executing step S404; when the user finishes the video-recording operation, the LUT logic control module controls the display screen to display the LUT recommendation control and controls the AI recommendation module to resume executing step S404.
In the process of executing steps S404 to S408, when the LUT logic control module detects that the user starts a video-recording operation, in order to prevent the displayed LUT recommendation control from interfering with the user's recording, the LUT logic control module may control the display screen to hide the LUT recommendation control; while the user is recording, the LUT logic control module may also control the AI recommendation module to suspend executing step S404, so as to improve operating efficiency. When the user finishes recording, the display screen resumes displaying the camera preview interface; at this time, the LUT logic control module again controls the display screen to display the LUT recommendation control and controls the AI recommendation module to resume executing step S404, continuing to recommend LUTs for the user.
There are many ways for the LUT logic control module to control the AI recommendation module to suspend executing step S404; for example, a suspend instruction may be sent to the AI recommendation module, and the AI recommendation module suspends executing step S404 in response to the suspend instruction. Likewise, there are many ways for the LUT logic control module to control the AI recommendation module to resume executing step S404; for example, a start instruction may be sent to the AI recommendation module, and the AI recommendation module resumes executing step S404 in response to the start instruction.
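The suspend/start instruction exchange described above can be sketched as a simple gate on step S404. The instruction strings and class names are invented for the example.

```python
class AiRecommender:
    """Sketch of pausing/resuming the matching of step S404 via instructions
    sent by the LUT logic control module."""
    def __init__(self):
        self.running = True
        self.match_count = 0

    def handle_instruction(self, instruction):
        # "pause" is sent when recording starts; "start" when recording ends.
        if instruction == "pause":
            self.running = False
        elif instruction == "start":
            self.running = True

    def on_recommendation_period(self, scene_params):
        # Step S404 only executes while not paused.
        if self.running:
            self.match_count += 1
```

During recording the periodic matching is skipped entirely, which is where the operating-efficiency gain mentioned above comes from.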
For example, as shown in (1) of fig. 5d, the LUT capsule 508 is displayed in the interface 502. When the user clicks the virtual shutter key to start recording, in response to the user's operation of starting recording, shooting and processing are performed using the "warm light" filter indicated by the LUT capsule 508. As shown in (2) of fig. 5d, during recording, the LUT capsule 508 is hidden: the LUT capsule 508 is no longer present on the interface 502, and the current recording status is displayed. Continuing with (2) in fig. 5d, when the user clicks the virtual shutter key again, in response to the user's operation of ending recording, the LUT capsule 508 is displayed again on the interface 502, as shown in (3) of fig. 5 d.
As can be seen from the foregoing description, step S409 is actually one way for the LUT logic control module to temporarily stop recommending LUTs to the user and temporarily stop displaying the LUT recommendation control. The LUT logic control module may temporarily stop and restart recommending LUTs to the user, and temporarily hide and resume displaying the LUT recommendation control, in response to the user's operations of starting and ending video recording; other response manners may also trigger the LUT logic control module to do so.
In other embodiments, during the time period in which the LUT template column is expanded on the camera preview interface, the LUT logic control module controls the display screen to hide the LUT recommendation control and controls the AI recommendation module to suspend executing step S404; when the LUT template column is automatically collapsed on the camera preview interface, the LUT logic control module controls the display screen to display the LUT recommendation control and controls the AI recommendation module to resume executing step S404.
In some cases, the user clicks on the LUT template column to view it but does not select an LUT template in it, and the LUT template column is automatically collapsed, i.e., hidden from the camera preview interface, after being expanded for a preset time period. While the LUT template column is expanded, in order to provide a better visual experience for the user, the LUT logic control module controls the display screen to temporarily hide the LUT recommendation control and controls the AI recommendation module to suspend executing step S404; once the LUT template column is collapsed, the LUT logic control module resumes recommending LUTs to the user, that is, controls the display screen to display the LUT recommendation control and controls the AI recommendation module to resume executing step S404.
For example, as shown in (1) of fig. 5e, the LUT capsule 508 named "warm light" is displayed on the interface 502. As shown in (2) of fig. 5e, when the user clicks on the LUT control 509, the LUT template column 510 is displayed on the interface 502. When the user performs no further operation, as shown in (3) of fig. 5e, the LUT template column is automatically collapsed and no longer displayed on the interface 502, and the LUT capsule 508 is redisplayed on the interface 502.
It should be noted that there are many operations that trigger the LUT logic control module to temporarily hide the LUT recommendation control and temporarily suspend the AI recommendation module from recommending LUTs in step S404, which is not limited in the embodiment of the present application.
S410, in response to the user's cancel operation on the LUT recommendation control, the LUT logic control module controls the display screen to hide the LUT recommendation control and controls the AI recommendation module to end executing step S404.
In the process of executing steps S404 to S408, when the user cancels the LUT recommendation control, it indicates that the user does not need the mobile phone to recommend the matched target recommendation LUT. Therefore, in response to the user's cancel operation on the LUT recommendation control, the LUT logic control module controls the display screen to hide the LUT recommendation control, that is, controls the display screen not to display the LUT recommendation control on the camera preview interface, and controls the AI recommendation module to no longer match LUTs using the scene parameters of the preview image, that is, to no longer execute step S404. Accordingly, since step S404 is no longer executed, steps S406 to S408 are no longer executed either.
In some embodiments, the LUT recommendation control displayed on the camera preview interface has a dismiss icon, and the user can hide the LUT recommendation control by clicking this icon, so that the preview image subsequently displayed on the camera preview interface is no longer the preview image processed by the target recommendation LUT, and the LUT recommendation control is no longer displayed. For example, as shown in (3) of fig. 5c, the LUT capsule 508 on the interface 502 has an "X" icon; when the user clicks on the "X" icon, as shown in (4) of fig. 5c, the LUT capsule 508 is no longer present on the interface 502.
In the embodiment of the present application, the LUT logic control module can, according to the user's needs, stop recommending the target LUT to the user, which improves the user's experience.
It should be noted that step S410 is triggered only when the user performs a cancel operation on the LUT recommendation control; if the user does not perform the cancel operation, step S410 is not executed, and the processed preview image and the LUT recommendation control remain displayed on the camera preview interface in the movie mode. Step S410 is therefore an optional step. After step S410 is executed, the flow shown in fig. 4 ends, and the target recommendation LUT is not recommended again until the flow restarts from step S401, that is, until the first application is restarted, enters the movie mode again, and the AI recognition function is turned on.
S411, in response to the user's operation of selecting an LUT template, the LUT logic control module controls the display screen to hide the LUT recommendation control and controls the AI recommendation module to end executing step S404.
In the process of executing steps S404 to S408, if the user does not want to use the target recommendation LUT, the user may manually select the required LUT template; the operation of selecting the LUT template is performed on the camera preview interface in the movie mode. In response to the user's operation of selecting the LUT template, the LUT logic control module controls the display screen to hide the LUT recommendation control and controls the AI recommendation module to end executing step S404. For the process and principle by which the LUT logic control module does so, reference may be made to the relevant parts of step S410.
After the user selects an LUT template, the preview image is processed using the LUT template selected by the user, and the preview image processed by that LUT template is displayed on the camera preview interface. In the embodiment of the present application, the preview image processed using the LUT template selected by the user may be referred to as a third preview image.
In some embodiments, the process by which the LUT logic control module responds to the user's operation of selecting an LUT template may be as follows. The camera preview interface in the movie mode further includes an LUT control; when the user performs a start operation on the LUT control, the LUT template column is displayed on the display screen, and the user selects an LUT template in the LUT template column. In response to the user's start operation on the LUT control, the LUT logic control module controls the display screen to hide the LUT recommendation control and controls the AI recommendation module to end executing step S404. For example, as shown in (1) of fig. 5f, the user clicks on the LUT control 509, and the camera preview interface 502 displays the LUT template column 510, as shown in (2) of fig. 5f. The LUT template column 510 includes LUT1, LUT2, and LUT 3. Continuing with (2) in fig. 5f, the user clicks to select the LUT2 template, and the interface 502 changes to that shown in (3) of fig. 5f, which displays the preview image after processing by LUT 2. Illustratively, as shown in (3) of fig. 5f, a prompt message 511 of "LUT 2" may also be displayed on the interface 502 to remind the user that the current preview image is processed by LUT 2.
It should be noted that there is no dependency between the execution of step S411 and that of step S410. Step S411 is triggered only when the user selects an LUT template; if the user does not select an LUT template, step S411 is not executed, and the processed preview image and the LUT recommendation control remain displayed on the camera preview interface in the movie mode. Step S411 is therefore an optional step, similar to step S410. After step S411 is executed, the flow shown in fig. 4 ends, and the target recommendation LUT is not recommended again until the flow restarts from step S401, that is, until the first application is restarted, enters the movie mode again, and the AI recognition function is turned on.
S412, in response to the user's operation of exiting the movie mode, the LUT logic control module controls the display screen to hide the LUT recommendation control and controls the AI recommendation module to end executing step S404.
In the process of executing steps S404 to S408, if the user does not want to continue using the movie mode, the user may choose to exit the movie mode. In response to the operation of exiting the movie mode, the LUT logic control module controls the display screen to hide the LUT recommendation control and controls the AI recommendation module to end executing step S404. For the process and principle by which the LUT logic control module does so, reference may be made to the relevant parts of step S410.
The operation of exiting the movie mode may be an operation of switching to another working mode other than the movie mode, for example, switching to a professional mode; an operation of directly closing the first application; an operation of switching the first application to the background of the mobile phone; an operation of turning on the 4K HDR control; an operation of waking up the mobile phone again after the screen is turned off; an operation of clearing the running process of the first application; and the like. When the LUT logic control module detects any operation of exiting the movie mode, it controls the display screen to hide the LUT recommendation control and controls the AI recommendation module to end executing step S404.
For example, as shown in (1) in fig. 5g, the LUT capsule 508 is displayed on the interface 502, and the 4K HDR control 512 is also displayed, and when the 4K HDR control 512 is turned on by the user, as shown in (2) in fig. 5g, the LUT capsule 508 is hidden.
It should be noted that there is no dependency between the execution of step S412 and that of steps S409 to S411. Step S412 is triggered only when the user exits the movie mode; if the user does not exit the movie mode, step S412 is not executed, and the processed preview image and the LUT recommendation control remain displayed on the camera preview interface in the movie mode. Step S412 is therefore an optional step, similar to steps S410 and S411. After step S412 is executed, the flow shown in fig. 4 ends, and the target recommendation LUT is not recommended again until the flow restarts from step S401, that is, until the first application is restarted, enters the movie mode again, and the AI recognition function is turned on.
S413, in response to the user's operation of closing the AI setting item, the LUT logic control module controls the display screen to hide the LUT recommendation control and controls the AI recommendation module to end executing step S404.
In the process of executing steps S404 to S408, if the user does not want to use the AI recognition function, the user may choose to close the AI setting item. As can be seen from the foregoing description, when the AI recommendation module executes step S404, the AI recognition function is required to obtain the scene parameters of the preview image; therefore, when the AI recognition function is turned off, the AI recommendation module cannot execute step S404 and cannot perform LUT recommendation. Accordingly, in response to the user's operation of closing the AI setting item, the LUT logic control module controls the display screen to hide the LUT recommendation control and controls the AI recommendation module to end executing step S404.
For example, as shown in (1) of fig. 5h, the interface 502 displays an LUT capsule prompting that the current target recommendation LUT is "warm light". When the user clicks on the setting item 505, as shown in (2) of fig. 5h, the mobile phone display screen displays the setting interface 506, which includes a "photo scale" setting item, a "voice-controlled photographing" setting item, a "smiling face snapshot" setting item, a "video resolution" setting item, a "video frame rate" setting item, a "movie HDR 10" setting item, a "high-efficiency video format" setting item, and an "AI movie hue" setting item 507. The "AI movie hue" setting item 507 is initially in the open state; after the user clicks on the "AI movie hue" setting item 507, in response to the user's closing operation, the mobile phone displays the setting interface 506 as shown in (3) of fig. 5h, in which the "AI movie hue" setting item 507 is closed, i.e., the mobile phone closes the AI recognition function. When the user then clicks the return icon 513 on the setting interface, in response to the user's return operation, the camera preview interface 502 in the movie mode is displayed, as shown in (4) of fig. 5h, and the LUT capsule is no longer present on the camera preview interface 502 in the movie mode.
Similarly, the execution of step S413 is not related to the execution of steps S410, S411, and S412. As can be seen from the foregoing description, in steps S410 to S413 the LUT logic control module controls the display screen to hide the LUT recommendation control and controls the AI recommendation module to end executing step S404; it can also be understood that steps S410 to S413 are all manners in which the first application ends recommending, to the user, the LUT matching the scene of the preview image. Illustratively, in addition to the operations mentioned in steps S410 to S413, there may be other operations by which the user closes LUT recommendation and which trigger the LUT logic control module to control the display screen to hide the LUT recommendation control and control the AI recommendation module to end executing step S404, which is not limited in the embodiment of the present application.
In other embodiments, in addition to controlling the ending of LUT recommendation to the user, the LUT logic control module may control the first application to resume displaying the preview image using a preset default LUT.
For example, when the AI recognition function is on and the recommendation of LUTs to the user has ended, the LUT logic control module may control the display screen to display, on the camera preview interface, the preview image processed by the default LUT. When the AI recognition function is off and the recommendation of LUTs to the user has ended, the LUT logic control module continues to control the display screen to display, on the camera preview interface, the preview image processed by the most recently used LUT.
In the embodiment of the present application, the preview image processed by the default LUT may be referred to as a second preview image, and the preview image processed by the most recently used LUT template may be referred to as a fourth preview image.
In some embodiments, the LUT logic control module may implement the control logic of fig. 4 through pre-configured intelligent recommendation rules. For example, intelligent recommendation rules may be pre-configured in the first application and may include a recommendation rule, an LUT recommendation control display rule, and a recovery rule. The LUT logic control module controls whether the AI recommendation module executes step S404 according to the recommendation rule, controls the display of the LUT recommendation control on the camera preview interface according to the display rule, and controls whether the preview image is restored to processing with the default LUT according to the recovery rule.
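One way to picture such pre-configured rules is as predicates over the current UI state. The rule names, state fields, and conditions below are invented for illustration; the actual rules would encode the behaviors of steps S409 to S413.

```python
# Illustrative encoding of the recommendation rule, the LUT recommendation
# control display rule, and the recovery rule as predicates on a state dict.
RULES = {
    # Recommend (execute step S404) only with AI on, in movie mode, not recording.
    "recommend": lambda s: s["ai_on"] and s["movie_mode"] and not s["recording"],
    # Show the LUT recommendation control unless recording or the template
    # column is expanded.
    "show_control": lambda s: (s["ai_on"] and not s["recording"]
                               and not s["lut_bar_open"]),
    # Restore the default LUT after the user cancels recommendation with AI on.
    "restore_default_lut": lambda s: s["recommendation_cancelled"] and s["ai_on"],
}

def decide(state):
    """Evaluate every rule against the current state."""
    return {name: rule(state) for name, rule in RULES.items()}
```

The LUT logic control module would then consult the evaluated decisions to drive the AI recommendation module and the display screen.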
As can be seen from the foregoing content, in the embodiment of the present application, when the LUT recommendation control is hidden, the AI recommendation module does not execute step S404, that is, it no longer matches to obtain the target recommendation LUT and no longer recommends the target recommendation LUT to the user; when the LUT recommendation control is displayed, step S404 is executed to recommend the target recommendation LUT to the user.
In the embodiment of the present application, the AI recommendation module matches the scene parameters of the preview image to obtain the LUT template corresponding to the preview image, and then, under the control of the LUT logic control module, the display screen displays the processed preview image and the LUT recommendation control on the camera preview interface in the movie mode, where the processed preview image is obtained by processing the preview image with the target recommendation LUT, and the LUT recommendation control prompts the LUT template recommended to the user. By displaying the processed preview image and the LUT recommendation control on the camera preview interface, an LUT matching the scene of the current preview image is recommended to the user; the user can then consider using the target recommendation LUT during video recording instead of selecting an LUT manually, which improves the user's experience of shooting-effect processing.
The present embodiment also provides a computer-readable storage medium including instructions that, when executed on an electronic device, cause the electronic device to execute the relevant method steps in fig. 4, so as to implement the method in the foregoing embodiment.
The present embodiment also provides a computer program product containing instructions, which when run on an electronic device, causes the electronic device to perform the relevant method steps as in fig. 4, to implement the method in the above-described embodiment.
The present embodiment also provides a control device including a processor and a memory, where the memory is configured to store computer program code including computer instructions that, when executed by the processor, cause the control device to perform the relevant method steps in fig. 4 to implement the method in the foregoing embodiment. The control device may be an integrated circuit (IC) or a system on chip (SoC). The integrated circuit may be a general-purpose integrated circuit, a field programmable gate array (FPGA), or an application specific integrated circuit (ASIC).
Through the above description of the embodiments, it is clear to those skilled in the art that, for convenience and simplicity of description, the foregoing division of the functional modules is merely used as an example, and in practical applications, the above function distribution may be completed by different functional modules according to needs, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the above described functions. For the specific working processes of the system, the apparatus and the unit described above, reference may be made to the corresponding processes in the foregoing method embodiments, and details are not described here again.
In the several embodiments provided in this application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; the division into modules or units is only one kind of logical division, and other divisions are possible in actual implementation: a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual coupling, direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, devices or units, and may be electrical, mechanical or in another form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as a stand-alone product, it may be stored in a computer-readable storage medium. Based on such an understanding, the technical solution of the present application, in essence, or the part thereof that contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including instructions for causing a computer device (which may be a personal computer, a server, or a network device) or a processor to execute all or part of the steps of the method described in the embodiments. The aforementioned storage medium includes: a flash memory, a removable hard disk, a read-only memory, a random access memory, a magnetic disk, an optical disc, or the like.
The above description is only an embodiment of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions within the technical scope of the present disclosure should be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (19)

1. A video recording method, comprising:
in response to a user's operation of entering a movie mode, displaying a camera preview interface in the movie mode;
when an artificial intelligence (AI) recognition function is on, displaying a first preview image and a color lookup table (LUT) recommendation control on the camera preview interface in the movie mode; wherein the LUT recommendation control is used to prompt a user to use a target recommendation LUT; the first preview image is obtained by processing an original preview image using the target recommendation LUT; the target recommendation LUT is obtained by matching according to scene parameters of the original preview image; and the scene parameters of the original preview image are obtained by performing AI recognition on the original preview image.
2. The video recording method according to claim 1, further comprising, after the displaying of the first preview image and the LUT recommendation control on the camera preview interface in the movie mode:
hiding the LUT recommendation control and displaying a second preview image on the camera preview interface in response to a cancel operation of a user on the LUT recommendation control; wherein the preview effect of the second preview image is different from the preview effect of the first preview image.
3. The video recording method according to claim 1, further comprising, after the displaying of the first preview image and the LUT recommendation control on the camera preview interface in the movie mode:
hiding the LUT recommendation control on the camera preview interface in response to a user starting a video recording operation;
and in response to the user ending the video recording operation, displaying the LUT recommendation control on the camera preview interface.
4. The video recording method according to claim 1, further comprising, after the displaying of the first preview image and the LUT recommendation control on the camera preview interface in the movie mode:
in response to a user's operation of selecting an LUT template, hiding the LUT recommendation control and displaying a third preview image on the camera preview interface; wherein the third preview image is obtained by processing the original preview image using the LUT template selected by the user.
5. The video recording method according to claim 4, wherein said hiding the LUT recommendation control and displaying a third preview image on the camera preview interface in response to the user selecting the LUT template comprises:
in response to a user's operation of an LUT control, hiding the LUT recommendation control and displaying an LUT template bar on the camera preview interface; wherein a plurality of LUT templates are displayed on the LUT template bar;
displaying a third preview image on the camera preview interface in response to a user's operation of selecting an LUT template through the LUT template bar.
6. The video recording method according to claim 1, further comprising, after the displaying of the first preview image and the LUT recommendation control on the camera preview interface in the movie mode:
in response to a user's operation of exiting the movie mode, hiding the LUT recommendation control and displaying a second preview image on the camera preview interface; wherein the preview effect of the second preview image is different from the preview effect of the first preview image.
7. The video recording method according to claim 1, further comprising, after the displaying of the first preview image and the LUT recommendation control on the camera preview interface in the movie mode:
in response to a user's operation of turning off the AI recognition function, hiding the LUT recommendation control and displaying a fourth preview image on the camera preview interface; wherein the fourth preview image is obtained by processing the original preview image using the most recently used LUT template.
8. The video recording method according to claim 2, wherein the second preview image is obtained by processing the original preview image using a preset default LUT.
9. The video recording method according to claim 1, further comprising, after the displaying of the first preview image and the LUT recommendation control on the camera preview interface in the movie mode:
in response to a user's operation of an LUT control, hiding the LUT recommendation control and displaying an LUT template bar on the camera preview interface;
and in response to the LUT template bar being automatically retracted, resuming display of the first preview image and the LUT recommendation control on the camera preview interface.
10. The video recording method according to claim 1, wherein before displaying the camera preview interface in the movie mode in response to the user's operation of entering the movie mode, the method further comprises:
in response to a user's operation of starting a first application, displaying a camera preview interface in a default working mode; wherein the camera preview interface comprises a working mode control, and the working mode control comprises at least: the default working mode and a movie mode;
and in response to the user's operation of entering the movie mode through the working mode control, displaying a camera preview interface in the movie mode.
11. The video recording method according to claim 1, wherein the displaying, when the artificial intelligence (AI) recognition function is on, a first preview image and a color look-up table (LUT) recommendation control on the camera preview interface in the movie mode comprises:
when the AI recognition function is on, performing AI recognition on the original preview image to obtain the scene parameters of the original preview image;
matching, according to a preset recommendation period, the scene parameters of the original preview image to obtain a target recommendation LUT;
processing the original preview image using the target recommendation LUT to obtain the first preview image; and
displaying the first preview image and the LUT recommendation control on the camera preview interface in the movie mode.
12. The video recording method according to claim 11, further comprising:
when AI recognition is performed on the original preview image, displaying an animation effect on the camera preview interface; wherein the animation effect is used to prompt the user that the scene of the original preview image is being recognized.
13. The video recording method according to claim 11, wherein before the performing AI recognition on the original preview image to obtain the scene parameters of the original preview image, the method further comprises:
when the AI recognition function is on, registering a preview callback with a hardware abstraction layer (HAL);
and the performing AI recognition on the original preview image to obtain the scene parameters of the original preview image comprises:
acquiring the scene parameters of the original preview image by calling an interface on the HAL; wherein the scene parameters of the original preview image are obtained by performing AI recognition on the original preview image.
14. The video recording method according to claim 11, wherein the matching, according to a preset recommendation period, to obtain a target recommendation LUT according to the scene parameters of the original preview image comprises:
matching, according to the preset recommendation period, the scene parameters of the original preview image against a pre-configured correspondence table between scene parameters and LUT templates to obtain a target recommendation LUT.
15. The video recording method according to claim 11, wherein the matching, according to a preset recommendation period, to obtain a target recommendation LUT according to the scene parameters of the original preview image comprises:
inputting, according to the preset recommendation period, the scene parameters of the original preview image into an artificial intelligence (AI) model, and outputting, by the AI model, a target recommendation LUT; wherein the AI model is a machine learning model.
16. The video recording method according to any one of claims 1 to 15, further comprising:
during a time period in which the LUT recommendation control is hidden, stopping performing the operation of obtaining a target recommendation LUT by matching according to the scene parameters of the original preview image; and
during a time period in which the LUT recommendation control is displayed, performing the operation of obtaining a target recommendation LUT by matching according to the scene parameters of the original preview image.
17. The video recording method according to any one of claims 1 to 15, wherein the scene parameters of the original preview image comprise: a scene and a brightness of the original preview image.
18. An electronic device, comprising: one or more processors, a memory, a display screen, a camera, a wireless communication module and a mobile communication module;
the memory, the display screen, the camera, the wireless communication module, and the mobile communication module are coupled with the one or more processors, the memory for storing computer program code, the computer program code comprising computer instructions, which when executed by the one or more processors, cause the electronic device to perform the video recording method of any of claims 1-17.
19. A computer-readable storage medium comprising instructions that, when executed on an electronic device, cause the electronic device to perform the video recording method of any of claims 1-17.
CN202110927047.2A 2021-08-12 2021-08-12 Video recording method, electronic device and computer readable storage medium Active CN113965694B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110927047.2A CN113965694B (en) 2021-08-12 2021-08-12 Video recording method, electronic device and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110927047.2A CN113965694B (en) 2021-08-12 2021-08-12 Video recording method, electronic device and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN113965694A true CN113965694A (en) 2022-01-21
CN113965694B CN113965694B (en) 2022-12-06

Family

ID=79460531

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110927047.2A Active CN113965694B (en) 2021-08-12 2021-08-12 Video recording method, electronic device and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN113965694B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114697555A (en) * 2022-04-06 2022-07-01 百富计算机技术(深圳)有限公司 Image processing method, device, equipment and storage medium
WO2023015959A1 (en) * 2021-08-12 2023-02-16 荣耀终端有限公司 Filming method and electronic device
CN115883958A (en) * 2022-11-22 2023-03-31 荣耀终端有限公司 Portrait shooting method
CN116668838A (en) * 2022-11-22 2023-08-29 荣耀终端有限公司 Image processing method and electronic equipment

Citations (7)

Publication number Priority date Publication date Assignee Title
CN103533244A (en) * 2013-10-21 2014-01-22 深圳市中兴移动通信有限公司 Shooting device and automatic visual effect processing shooting method thereof
CN103945113A (en) * 2013-01-18 2014-07-23 三星电子株式会社 Method and apparatus for photographing in portable terminal
CN105323456A (en) * 2014-12-16 2016-02-10 维沃移动通信有限公司 Image previewing method for photographing device and image photographing device
CN105812646A (en) * 2014-12-30 2016-07-27 Tcl集团股份有限公司 Shooting method, shooting device, image processing method, image processing device, and communication system
CN109068056A (en) * 2018-08-17 2018-12-21 Oppo广东移动通信有限公司 A kind of electronic equipment and its filter processing method of shooting image, storage medium
CN111587399A (en) * 2017-09-27 2020-08-25 深圳传音通讯有限公司 Filter effect display method and device and mobile terminal
CN112511750A (en) * 2020-11-30 2021-03-16 维沃移动通信有限公司 Video shooting method, device, equipment and medium

Patent Citations (8)

Publication number Priority date Publication date Assignee Title
CN103945113A (en) * 2013-01-18 2014-07-23 三星电子株式会社 Method and apparatus for photographing in portable terminal
US20140204244A1 (en) * 2013-01-18 2014-07-24 Samsung Electronics Co., Ltd. Method and apparatus for photographing in portable terminal
CN103533244A (en) * 2013-10-21 2014-01-22 深圳市中兴移动通信有限公司 Shooting device and automatic visual effect processing shooting method thereof
CN105323456A (en) * 2014-12-16 2016-02-10 维沃移动通信有限公司 Image previewing method for photographing device and image photographing device
CN105812646A (en) * 2014-12-30 2016-07-27 Tcl集团股份有限公司 Shooting method, shooting device, image processing method, image processing device, and communication system
CN111587399A (en) * 2017-09-27 2020-08-25 深圳传音通讯有限公司 Filter effect display method and device and mobile terminal
CN109068056A (en) * 2018-08-17 2018-12-21 Oppo广东移动通信有限公司 A kind of electronic equipment and its filter processing method of shooting image, storage medium
CN112511750A (en) * 2020-11-30 2021-03-16 维沃移动通信有限公司 Video shooting method, device, equipment and medium

Cited By (6)

Publication number Priority date Publication date Assignee Title
WO2023015959A1 (en) * 2021-08-12 2023-02-16 荣耀终端有限公司 Filming method and electronic device
CN114697555A (en) * 2022-04-06 2022-07-01 百富计算机技术(深圳)有限公司 Image processing method, device, equipment and storage medium
CN114697555B (en) * 2022-04-06 2023-10-27 深圳市兆珑科技有限公司 Image processing method, device, equipment and storage medium
CN115883958A (en) * 2022-11-22 2023-03-31 荣耀终端有限公司 Portrait shooting method
CN116668838A (en) * 2022-11-22 2023-08-29 荣耀终端有限公司 Image processing method and electronic equipment
CN116668838B (en) * 2022-11-22 2023-12-05 荣耀终端有限公司 Image processing method and electronic equipment

Also Published As

Publication number Publication date
CN113965694B (en) 2022-12-06

Similar Documents

Publication Publication Date Title
CN113965694B (en) Video recording method, electronic device and computer readable storage medium
US11722449B2 (en) Notification message preview method and electronic device
US10565763B2 (en) Method and camera device for processing image
CN113810602B (en) Shooting method and electronic equipment
CN113645408B (en) Photographing method, photographing apparatus, and storage medium
WO2021013132A1 (en) Input method and electronic device
CN113727017B (en) Shooting method, graphical interface and related device
CN109981885B (en) Method for presenting video by electronic equipment in incoming call and electronic equipment
CN112887583A (en) Shooting method and electronic equipment
CN113963659A (en) Adjusting method of display equipment and display equipment
WO2020155052A1 (en) Method for selecting images based on continuous shooting and electronic device
CN113935898A (en) Image processing method, system, electronic device and computer readable storage medium
CN113747060B (en) Image processing method, device and storage medium
CN113170037A (en) Method for shooting long exposure image and electronic equipment
CN113099146A (en) Video generation method and device and related equipment
CN115689963A (en) Image processing method and electronic equipment
CN113452969B (en) Image processing method and device
CN114065312A (en) Component display method and electronic equipment
US20230162529A1 (en) Eye bag detection method and apparatus
CN115730091A (en) Comment display method and device, terminal device and readable storage medium
CN115734032A (en) Video editing method, electronic device and storage medium
CN117119316B (en) Image processing method, electronic device, and readable storage medium
WO2022170918A1 (en) Multi-person-capturing method and electronic device
WO2023010912A9 (en) Image processing method and electronic device
WO2023142690A1 (en) Restorative shooting method and electronic device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant