WO2023010912A1 - Image processing method and electronic device - Google Patents

Image processing method and electronic device

Info

Publication number
WO2023010912A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
lut
electronic device
preset
scene
Prior art date
Application number
PCT/CN2022/090630
Other languages
English (en)
French (fr)
Other versions
WO2023010912A9 (zh)
Inventor
肖斌
崔瀚涛
王宇
朱聪超
邵涛
胡树红
Original Assignee
荣耀终端有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 荣耀终端有限公司
Priority to EP22797244.5A (EP4152741A4)
Publication of WO2023010912A1
Publication of WO2023010912A9

Links

Images

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/61 Control of cameras or camera modules based on recognised objects
    • H04N23/611 Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
    • H04N23/617 Upgrading or updating of programs or applications for camera control
    • H04N23/63 Control of cameras or camera modules by using electronic viewfinders
    • H04N23/631 Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H04N23/632 Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters
    • H04N23/633 Control of cameras or camera modules by using electronic viewfinders for displaying additional information relating to control or operation of the camera

Definitions

  • the present application relates to the technical field of photographing, and in particular to an image processing method and electronic equipment.
  • Existing mobile phones generally have photographing and video recording functions, and more and more people use mobile phones to take pictures and videos to record every bit of life.
  • When a mobile phone is shooting (for example, taking pictures or recording video), it can only process the preview image using a color look-up table (Look Up Table, LUT) that was pre-configured before shooting, an LUT selected by the user, or an LUT determined by identifying the preview image.
  • As a result, the mobile phone can only capture photos or videos with the style or display effect corresponding to these pre-configured or selected parameters, so the style or display effect of the photos or videos taken by the mobile phone is monotonous.
  • In view of this, the present application provides an image processing method and an electronic device, which can dynamically adjust the LUT during the process of taking pictures or recording videos, thereby enriching the display effects of the captured photos or videos.
  • the present application provides an image processing method.
  • the electronic device can acquire the first image.
  • the first image is an image collected by a camera of the electronic device, and the first image includes a first object to be photographed.
  • the electronic device may determine a first scene corresponding to the first image, where the first scene is used to identify a scene corresponding to the first object to be photographed.
  • the electronic device may determine the first LUT according to the first scene.
  • the electronic device may process the first image according to the first LUT to obtain the second image, and display the second image. The display effect of the second image corresponds to the first LUT.
  • the electronic device can dynamically adjust the LUT according to each frame of image acquired by the electronic device during the process of taking pictures or recording videos.
  • display effects or styles corresponding to different LUTs can be presented, and the display effects obtained by taking pictures or video recordings can be enriched.
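  • To make this per-frame flow concrete (scene recognition, LUT selection, LUT application), the following is a minimal Python sketch of the idea. The helper names (detect_scene, PRESET_LUTS, process_frame), the dummy scene rule, and the coarse 33-point LUT cube with nearest-neighbour lookup are illustrative assumptions; they are not the implementation described in this publication.

```python
import numpy as np

LUT_SIZE = 33  # coarse 33x33x33 cube; a production LUT may interpolate instead

def make_lut(gain=(1.0, 1.0, 1.0)) -> np.ndarray:
    """Build a simple (33, 33, 33, 3) LUT that scales each RGB channel."""
    axis = np.linspace(0.0, 255.0, LUT_SIZE)
    r, g, b = np.meshgrid(axis, axis, axis, indexing="ij")
    lut = np.stack([r * gain[0], g * gain[1], b * gain[2]], axis=-1)
    return np.clip(lut, 0, 255).astype(np.uint8)

# Hypothetical pre-configured per-scene LUTs (the "third LUTs").
PRESET_LUTS = {
    "portrait": make_lut((1.10, 1.00, 0.95)),  # slightly warmer rendering
    "food": make_lut((1.05, 1.02, 1.00)),
}
DEFAULT_LUT = make_lut()

def detect_scene(frame: np.ndarray) -> str:
    """Placeholder for the device's scene recognition (the "first scene")."""
    return "portrait" if frame.mean() > 100 else "food"  # dummy rule

def process_frame(frame: np.ndarray) -> np.ndarray:
    """Select the LUT for the detected scene and map every pixel through it."""
    lut = PRESET_LUTS.get(detect_scene(frame), DEFAULT_LUT)  # the "first LUT"
    idx = frame.astype(np.uint16) * (LUT_SIZE - 1) // 255    # nearest LUT bin
    return lut[idx[..., 0], idx[..., 1], idx[..., 2]]        # the "second image"
```

  • A capture loop would simply call process_frame on every frame delivered by the camera, which is what lets the selected LUT, and therefore the preview style, change as the photographed scene changes.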
  • the electronic device may collect a third image, the third image is an image collected by the camera of the electronic device, and the third image includes the second shot object.
  • The electronic device may determine that the second image corresponds to the second scene, where the second scene is used to identify the scene corresponding to the second subject; the electronic device determines the second LUT according to the second scene; the electronic device processes the third image according to the second LUT to obtain a fourth image and displays the fourth image, and the display effect of the fourth image corresponds to the second LUT.
  • the electronic device can use different LUTs to process the images through the method of the present application.
  • display effects or styles corresponding to different LUTs can be presented, and display effects obtained by taking pictures or videos can be enriched.
  • the electronic device determining the first LUT according to the first scene may include: the electronic device determining, among the multiple third LUTs, the third LUT corresponding to the first scene as the first LUT of the first image.
  • the electronic device can identify the shooting scene corresponding to the first image (that is, the first scene), and determine the first LUT according to the shooting scene.
  • multiple third LUTs are pre-configured in the electronic device and are used to process images captured by the camera of the electronic device to obtain images with different display effects, and each third LUT corresponds to a display effect in one scene.
  • the electronic device determining the first LUT according to the first scene may include: the electronic device determining, among the multiple third LUTs, the third LUT corresponding to the first scene as the fourth LUT of the first image; and the electronic device calculating a weighted sum of the fourth LUT of the first image and the first LUT of the fifth image to obtain the first LUT.
  • the fifth image is a previous frame image of the first image
  • the third LUT of the previous frame image of the first frame image collected by the electronic device during this shooting process is a preset LUT.
  • a plurality of third LUTs are pre-configured in the electronic device, and are used to process images collected by a camera of the electronic device to obtain images with different display effects, and each third LUT corresponds to a display effect in a scene.
  • That is, when determining the final LUT, the electronic device refers not only to the current frame image but also to the final LUT of the previous frame image.
  • the smooth transition of display effects or styles corresponding to different LUTs can be realized, the display effect of the multi-frame preview image presented by the electronic device can be optimized, and the user's visual experience in the process of taking photos or recordings can be improved.
  • the electronic device calculating the weighted sum of the fourth LUT of the first image and the first LUT of the fifth image to obtain the first LUT may include: the electronic device calculating the weighted sum of the fourth LUT of the first image and the first LUT of the fifth image by using a pre-configured first weighting coefficient and a pre-configured second weighting coefficient, to obtain the first LUT.
  • the first weighting coefficient is the weighting coefficient of the fourth LUT of the first image
  • the second weighting coefficient is the weighting coefficient of the first LUT of the fifth image
  • the sum of the first weighting coefficient and the second weighting coefficient is equal to 1.
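  • A minimal sketch of this blending step, assuming both LUTs are stored as NumPy arrays of the same shape and that the 0.3/0.7 split merely stands in for the pre-configured first and second weighting coefficients:

```python
import numpy as np

def blend_with_previous(frame_lut: np.ndarray, prev_final_lut: np.ndarray,
                        w_frame: float = 0.3, w_prev: float = 0.7) -> np.ndarray:
    """Weighted sum of the current frame's LUT (the "fourth LUT") and the
    previous frame's final LUT (the "first LUT of the fifth image").
    w_frame + w_prev must equal 1; the 0.3/0.7 default is an illustrative
    assumption, not a value taken from this publication."""
    assert abs(w_frame + w_prev - 1.0) < 1e-6
    blended = (w_frame * frame_lut.astype(np.float32)
               + w_prev * prev_final_lut.astype(np.float32))
    return np.clip(blended, 0, 255).astype(np.uint8)
```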
  • the first weighting coefficient and the second weighting coefficient may be preset weights preconfigured in the electronic device.
  • the first weighting coefficient and the second weighting coefficient may be set by a user in the electronic device.
  • In response to the first preset operation, the electronic device may display the first setting item and the second setting item.
  • the first setting item is used to set the first weighting coefficient
  • the second setting item is used to set the second weighting coefficient.
  • In response to the user's setting operation, the electronic device may use the first weighting coefficient set by the user as the weighting coefficient of the fourth LUT of the first image, and use the second weighting coefficient set by the user as the weighting coefficient of the first LUT of the fifth image.
  • The first preset operation is a click operation on a first preset control displayed by the electronic device, where the first preset control is used to trigger the electronic device to set the weights of the fourth LUT of the first image and the first LUT of the fifth image; or, the first preset operation is the user's click operation on a first physical button of the electronic device.
  • a preset artificial intelligence (artificial intelligence, AI) model (such as preset AI model b) is preconfigured in the electronic device.
  • the preset AI model b has the capability of recognizing the first image and the scene detection result of the first image, and outputting the weight of each third LUT among the plurality of third LUTs.
  • the electronic device can obtain the weight of each third LUT through the preset AI model b; then, according to the obtained weight, calculate a weighted sum of multiple third LUTs to obtain the first LUT.
  • the electronic device determining the first LUT according to the first scene may include: the electronic device taking the indication information of the first scene and the first image as input and running a preset AI model to obtain multiple third weighting coefficients of the multiple third LUTs; and the electronic device using the multiple third weighting coefficients to calculate a weighted sum of the multiple third LUTs to obtain the first LUT. Wherein, the sum of the multiple third weighting coefficients is 1, and the multiple third LUTs are in one-to-one correspondence with the multiple third weighting coefficients.
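  • The weighted combination itself can be sketched as follows; the weight vector is only a placeholder for whatever the preset AI model outputs (already normalised so that it sums to 1), and the model itself is not shown:

```python
import numpy as np

def combine_luts(third_luts: list, weights) -> np.ndarray:
    """Weighted sum of the pre-configured "third LUTs" using the third
    weighting coefficients produced by the preset AI model (they sum to 1)."""
    weights = np.asarray(weights, dtype=np.float32)
    assert len(third_luts) == len(weights) and abs(weights.sum() - 1.0) < 1e-6
    stacked = np.stack([lut.astype(np.float32) for lut in third_luts])  # (N, ...)
    combined = np.tensordot(weights, stacked, axes=1)                   # sum_i w_i * LUT_i
    return np.clip(combined, 0, 255).astype(np.uint8)

# Made-up example with three LUTs (e.g. a softmax output of the model):
# first_lut = combine_luts([lut_portrait, lut_food, lut_landscape], [0.6, 0.3, 0.1])
```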
  • In this way, when the electronic device determines the first LUT of the first image, it refers not only to the third LUT corresponding to the first scene of the first image, but also to the third LUTs corresponding to other shooting scenes among the multiple third LUTs.
  • the electronic device determining the first LUT according to the first scene may include: the electronic device taking the indication information of the first scene and the first image as input and running the preset AI model to obtain multiple third weighting coefficients of the multiple third LUTs; the electronic device using the multiple third weighting coefficients to calculate a weighted sum of the multiple third LUTs to obtain the fourth LUT of the first image; and the electronic device calculating a weighted sum of the fourth LUT of the first image and the first LUT of the fifth image to obtain the first LUT.
  • the fifth image is a previous frame image of the first image
  • the third LUT of the previous frame image of the first frame image collected by the electronic device during this shooting process is a preset LUT.
  • the sum of the multiple third weighting coefficients is 1, and the multiple third LUTs are in one-to-one correspondence with the multiple third weighting coefficients.
  • That is, when determining the final LUT, the electronic device refers not only to the current frame image but also to the final LUT of the previous frame image.
  • the smooth transition of display effects or styles corresponding to different LUTs can be realized, the display effect of the multi-frame preview image presented by the electronic device can be optimized, and the user's visual experience in the process of taking photos or recordings can be improved.
  • Before the electronic device obtains the weight of each third LUT through the preset AI model, the electronic device may first train the preset AI model b, so that the preset AI model b has the capability of identifying the first image and the scene detection result of the first image, and of outputting the weight of each third LUT among the multiple third LUTs.
  • The electronic device may acquire multiple sets of data pairs, where each set of data pairs includes a sixth image and a seventh image, and the sixth image is an image that satisfies a preset condition obtained by processing the seventh image. Then, the electronic device may recognize the seventh image and determine the third scene corresponding to the seventh image. Finally, the electronic device can use the seventh image, the sixth image, and the indication information identifying the third scene as input samples to train the preset AI model, so that the preset AI model has the ability to determine which weights to use to compute the weighted sum of the multiple third LUTs, such that the resulting LUT can process the seventh image to obtain the display effect of the sixth image.
  • That is, compared with the aforementioned preset AI model, the input samples of the preset AI model b additionally include the indication information of the third scene corresponding to the seventh image.
  • the training principle of the preset AI model b is the same as that of the aforementioned preset AI model. The difference is that the indication information of the third scene corresponding to the seventh image may more clearly indicate the shooting scene corresponding to the seventh image.
  • If the shooting scene of the seventh image is the third scene, it indicates that the seventh image is more likely to be an image of the third scene. In that case, setting a larger weighting coefficient for the third LUT corresponding to the photographed object is beneficial to improving the display effect.
  • Therefore, the indication information of the third scene can guide the training of the preset AI model b, steering its training toward the third scene. In this way, the convergence of the preset AI model b can be accelerated, and the number of training iterations of the preset AI model b can be reduced.
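  • Under the assumption that the weight-predicting model, a differentiable LUT application step, and an optimizer are already defined (none of these are specified by the publication), one training step on a (seventh image, sixth image) data pair could be sketched as follows:

```python
import torch
import torch.nn.functional as F

def training_step(weight_model, third_luts, apply_lut, optimizer,
                  seventh_img, scene_onehot, sixth_img):
    """One training step for the preset AI model b (sketch only).

    Assumed, not defined by the publication: `weight_model` maps
    (image, scene indication) to one logit per third LUT; `third_luts` is a
    tensor of shape (N, ...) holding the N pre-configured LUTs; and
    `apply_lut(image, lut)` maps pixels through a LUT differentiably."""
    logits = weight_model(seventh_img, scene_onehot)
    weights = torch.softmax(logits, dim=-1)                       # coefficients sum to 1
    mixed_lut = torch.einsum("n,n...->...", weights, third_luts)  # sum_i w_i * LUT_i
    predicted = apply_lut(seventh_img, mixed_lut)                 # should match the sixth image
    loss = F.l1_loss(predicted, sixth_img)                        # target: display effect of the sixth image
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```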
  • an electronic device may acquire a first image, the first image is an image collected by a camera of the electronic device, and the first image includes a first photographed object. Afterwards, the electronic device may use the first image as input and run a preset AI model (e.g., preset AI model a) to obtain multiple third weighting coefficients of multiple third LUTs. The sum of the multiple third weighting coefficients is 1, and the multiple third LUTs are in one-to-one correspondence with the multiple third weighting coefficients. The electronic device calculates a weighted sum of the multiple third LUTs by using the multiple third weighting coefficients to obtain the first LUT. The electronic device processes the first image according to the first LUT to obtain a second image, and displays the second image, and the display effect of the second image corresponds to the first LUT.
  • the electronic device can dynamically adjust the LUT according to each frame of image acquired by the electronic device during the process of taking pictures or recording videos.
  • display effects or styles corresponding to different LUTs can be presented, and the display effects obtained by taking pictures or video recordings can be enriched.
  • In this way, when determining the first LUT of the first image, the electronic device refers not only to the third LUT corresponding to the first scene of the first image, but also to the third LUTs corresponding to other shooting scenes among the multiple third LUTs, which can improve the display effect of the electronic device.
  • the electronic device using the multiple third weighting coefficients to calculate a weighted sum of the multiple third LUTs to obtain the first LUT includes: the electronic device using the multiple third weighting coefficients to calculate a weighted sum of the multiple third LUTs to obtain the fourth LUT of the first image; and the electronic device calculating a weighted sum of the fourth LUT of the first image and the first LUT of the fifth image to obtain the first LUT. Wherein, the fifth image is the previous frame image of the first image, and the third LUT of the previous frame image of the first frame image collected by the electronic device during this shooting process is a preset LUT.
  • That is, when determining the final LUT, the electronic device refers not only to the current frame image but also to the final LUT of the previous frame image.
  • the smooth transition of display effects or styles corresponding to different LUTs can be realized, the display effect of the multi-frame preview image presented by the electronic device can be optimized, and the user's visual experience in the process of taking photos or recordings can be improved.
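  • Putting the two steps together, a single per-frame helper might look like the sketch below; the function signature and the 0.3/0.7 split are illustrative assumptions, while the fallback to a preset LUT for the first frame follows the design described above.

```python
import numpy as np

def final_lut_for_frame(frame_weights, third_luts, prev_final_lut, preset_lut,
                        w_frame=0.3, w_prev=0.7):
    """Per-frame LUT computation (sketch).

    Step 1: fourth LUT = sum_i w_i * third LUT_i (AI-model weights, sum to 1).
    Step 2: first LUT  = w_frame * fourth LUT + w_prev * previous final LUT,
            where the previous final LUT defaults to a preset LUT for the
            first frame of this shooting process."""
    fourth = sum(w * lut.astype(np.float32) for w, lut in zip(frame_weights, third_luts))
    prev = preset_lut if prev_final_lut is None else prev_final_lut
    first = w_frame * fourth + w_prev * prev.astype(np.float32)
    return np.clip(first, 0, 255).astype(np.uint8)
```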
  • Before the electronic device takes the first image as input and runs the preset AI model to obtain the multiple third weighting coefficients of the multiple third LUTs, the electronic device may train the preset AI model a.
  • The method for training the preset AI model a by the electronic device includes: the electronic device obtains multiple sets of data pairs, where each set of data pairs includes a sixth image and a seventh image, and the sixth image is an image that satisfies a preset condition obtained by processing the seventh image; the electronic device uses the seventh image and the sixth image as input samples to train the preset AI model, so that the preset AI model has the ability to determine which weights to use to compute the weighted sum of the multiple third LUTs, such that the resulting LUT can process the seventh image to obtain the display effect of the sixth image.
  • the user may adjust the weight output by the preset AI model a or the preset AI model b.
  • the method of the present application may further include: the electronic device displays multiple third setting items in response to the user's second preset operation, where each third setting item corresponds to one third LUT and is used to set the third weighting coefficient of that third LUT; and the electronic device updates the corresponding third weighting coefficient in response to the user's setting operation on one or more of the third setting items. The electronic device then calculates the weighted sum of the multiple third LUTs by using the updated multiple third weighting coefficients.
  • The above-mentioned second preset operation is the user's click operation on a second preset control, where the second preset control is used to trigger the electronic device to set the weights of the multiple third LUTs; or, the second preset operation is the user's click operation on a second physical button of the electronic device.
  • the weight output by the preset AI model a or the preset AI model b can be adjusted by the user.
  • the electronic device can adjust the LUT according to the needs of the user, so that images with higher user satisfaction can be captured.
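  • For example, the weight edits coming from these setting items could be folded back into the model output roughly as follows; the renormalisation step is an assumption added here only so that the updated coefficients still sum to 1:

```python
import numpy as np

def apply_user_weight_edits(model_weights, edits: dict) -> np.ndarray:
    """Overwrite selected third weighting coefficients with user-set values,
    then rescale so that the coefficients still sum to 1 (the rescaling is an
    assumption made for this sketch, not a step stated in the publication)."""
    weights = np.asarray(model_weights, dtype=np.float32).copy()
    for lut_index, value in edits.items():
        weights[lut_index] = value
    total = weights.sum()
    return weights / total if total > 0 else weights

# e.g. the user drags the setting item of the first third LUT up to 0.8:
# updated = apply_user_weight_edits([0.5, 0.3, 0.2], {0: 0.8})
```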
  • a user may also add a LUT in the electronic device.
  • the method of the present application further includes: the electronic device displays one or more fourth setting items in response to the user's third preset operation, where the third preset operation is used to trigger the electronic device to add a new display effect, each fourth setting item corresponds to one fifth LUT, and each fifth LUT corresponds to a display effect in one shooting scene.
  • The fifth LUT is different from the third LUT. In response to the user's selection operation on any fourth setting item in the preview interface, the electronic device stores the fifth LUT corresponding to the fourth setting item selected by the user.
  • the above-mentioned fourth setting item includes a preview image processed by using the corresponding fifth LUT, which is used to present a display effect corresponding to the fifth LUT.
  • the user can confirm whether a satisfactory LUT is obtained according to the adjusted display effect presented by the electronic device. In this way, the efficiency of the user setting the newly added LUT can be improved.
  • the acquisition of the first image by the electronic device may include: collecting the first image in the preview interface when the electronic device takes pictures, in the preview interface before the electronic device records video, or in the viewfinder interface while the electronic device is recording video. That is to say, the method can be applied to the photographing scenario of the electronic device, the recording scenario, and the pre-recording scenario in the recording mode.
  • the first image may be an image collected by a camera of the electronic device.
  • the first image may be a preview image obtained from an image captured by a camera of the electronic device.
  • the present application provides an electronic device, where the electronic device includes a memory, a display screen, one or more cameras, and one or more processors.
  • the memory, display screen, camera and processor are coupled.
  • the camera is used for collecting images
  • the display screen is used for displaying images collected by the camera or images generated by the processor.
  • Computer program codes are stored in the memory, and the computer program codes include computer instructions.
  • When the computer instructions are executed by the processor, the electronic device is caused to execute the method described in the first aspect or the second aspect and any possible design manner thereof.
  • the present application provides an electronic device, which includes a memory, a display screen, one or more cameras, and one or more processors. Memory, display screen, camera and processor are coupled. Wherein, computer program codes are stored in the memory, and the computer program codes include computer instructions.
  • When the computer instructions are executed by the processor, the electronic device is caused to perform the following steps: acquire a first image, where the first image is an image collected by a camera of the electronic device and includes a first subject; determine a first scene corresponding to the first image, where the first scene is used to identify a scene corresponding to the first subject; determine a first color lookup table (LUT) according to the first scene; and process the first image according to the first LUT to obtain a second image and display the second image, where the display effect of the second image corresponds to that of the first LUT.
  • When the computer instructions are executed by the processor, the electronic device further performs the following steps: after displaying the second image, collect a third image, where the third image is an image collected by the camera of the electronic device and includes a second object; determine that the second image corresponds to a second scene, where the second scene is used to identify the scene corresponding to the second object; determine a second LUT according to the second scene; and process the third image according to the second LUT to obtain a fourth image and display the fourth image, where the display effect of the fourth image corresponds to that of the second LUT.
  • When the computer instructions are executed by the processor, the electronic device further performs the following step: collect the first image in the preview interface before the electronic device takes pictures, in the preview interface before the electronic device records video, or in the viewfinder interface while the electronic device is recording video.
  • the first image is an image collected by a camera of the electronic device; or, the first image is a preview image obtained from an image collected by the camera of the electronic device.
  • When the computer instructions are executed by the processor, the electronic device further performs the following step: determine the third LUT corresponding to the first scene among the multiple third LUTs as the first LUT of the first image.
  • multiple third LUTs are pre-configured in the electronic device and are used to process images captured by the camera of the electronic device to obtain images with different display effects, and each third LUT corresponds to a display effect in one scene.
  • When the computer instructions are executed by the processor, the electronic device further performs the following steps: determine the third LUT corresponding to the first scene among the multiple third LUTs as the fourth LUT of the first image, where multiple third LUTs are pre-configured in the electronic device and are used to process images collected by the camera of the electronic device to obtain images with different display effects, and each third LUT corresponds to a display effect in one scene; and calculate the weighted sum of the fourth LUT of the first image and the first LUT of the fifth image to obtain the first LUT. Wherein, the fifth image is the previous frame image of the first image, and the third LUT of the previous frame image of the first frame image collected by the electronic device during this shooting process is a preset LUT.
  • When the computer instructions are executed by the processor, the electronic device further performs the following step: calculate the weighted sum of the fourth LUT of the first image and the first LUT of the fifth image by using the pre-configured first weighting coefficient and second weighting coefficient, to obtain the first LUT.
  • the first weighting coefficient is the weighting coefficient of the fourth LUT of the first image
  • the second weighting coefficient is the weighting coefficient of the first LUT of the fifth image
  • the sum of the first weighting coefficient and the second weighting coefficient is equal to 1.
  • The smaller the first weighting coefficient is (and correspondingly the larger the second weighting coefficient is), the smoother the transition effect across multiple frames of the second image.
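  • Written as a recurrence, with α denoting the first weighting coefficient and 1−α the second, this is exponential smoothing of the LUT sequence (the notation below is ours, not the publication's):

```latex
\mathrm{LUT}^{\mathrm{final}}_{T} \;=\; \alpha\,\mathrm{LUT}^{\mathrm{frame}}_{T} \;+\; (1-\alpha)\,\mathrm{LUT}^{\mathrm{final}}_{T-1}, \qquad 0 \le \alpha \le 1
```

  • Unrolling the recurrence, the contribution of any single frame's LUT decays by a factor of (1−α) in each subsequent frame, so a smaller first weighting coefficient α spreads a scene change over more frames and therefore yields a smoother transition.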
  • When the computer instructions are executed by the processor, the electronic device further performs the following steps: before calculating the weighted sum of the fourth LUT of the first image and the first LUT of the fifth image to obtain the first LUT, display the first setting item and the second setting item in response to the first preset operation, where the first setting item is used to set the first weighting coefficient and the second setting item is used to set the second weighting coefficient; and, in response to the user's setting operation on the first setting item and/or the second setting item, use the first weighting coefficient set by the user as the weighting coefficient of the fourth LUT of the first image and use the second weighting coefficient set by the user as the weighting coefficient of the first LUT of the fifth image.
  • The first preset operation is a click operation on the first preset control displayed by the electronic device, where the first preset control is used to trigger the electronic device to set the weights of the fourth LUT of the first image and the first LUT of the fifth image; or, the first preset operation is the user's click operation on a first physical button of the electronic device.
  • When the computer instructions are executed by the processor, the electronic device further performs the following steps: use the indication information of the first scene and the first image as input and run the preset AI model to obtain multiple third weighting coefficients of the multiple third LUTs, where the sum of the multiple third weighting coefficients is 1 and the multiple third LUTs are in one-to-one correspondence with the multiple third weighting coefficients; and use the multiple third weighting coefficients to calculate the weighted sum of the multiple third LUTs to obtain the first LUT.
  • When the computer instructions are executed by the processor, the electronic device further performs the following steps: use the indication information of the first scene and the first image as input and run the preset AI model to obtain multiple third weighting coefficients of the multiple third LUTs, where the sum of the multiple third weighting coefficients is 1 and the multiple third LUTs are in one-to-one correspondence with the multiple third weighting coefficients; use the multiple third weighting coefficients to calculate the weighted sum of the multiple third LUTs to obtain the fourth LUT of the first image; and calculate the weighted sum of the fourth LUT of the first image and the first LUT of the fifth image to obtain the first LUT. Wherein, the fifth image is the previous frame image of the first image, and the third LUT of the previous frame image of the first frame image collected by the electronic device during this shooting process is a preset LUT.
  • When the computer instructions are executed by the processor, the electronic device further performs the following steps: before determining the first LUT according to the first scene, acquire multiple sets of data pairs, where each set of data pairs includes a sixth image and a seventh image, and the sixth image is an image satisfying a preset condition obtained by processing the seventh image; identify the seventh image and determine a third scene corresponding to the seventh image; and use the seventh image, the sixth image, and the indication information identifying the third scene as input samples to train the preset AI model, so that the preset AI model has the ability to determine which weights to use to compute the weighted sum of the multiple third LUTs, such that the resulting LUT can process the seventh image to obtain the display effect of the sixth image.
  • When the computer instructions are executed by the processor, the electronic device further performs the following steps: display multiple third setting items in response to the second preset operation, where each third setting item corresponds to one third LUT and is used to set the third weighting coefficient of that third LUT; and update the corresponding third weighting coefficient in response to the user's setting operation on one or more of the third setting items. The electronic device then calculates the weighted sum of the multiple third LUTs by using the updated multiple third weighting coefficients.
  • The second preset operation is the user's click operation on the second preset control, where the second preset control is used to trigger the electronic device to set the weights of the multiple third LUTs; or, the second preset operation is the user's click operation on a second physical button of the electronic device.
  • When the computer instructions are executed by the processor, the electronic device further performs the following steps: display one or more fourth setting items in response to the third preset operation, where the third preset operation is used to trigger the electronic device to add a new display effect, each fourth setting item corresponds to one fifth LUT, each fifth LUT corresponds to a display effect in one shooting scene, and the fifth LUT is different from the third LUT; and, in response to the user's selection operation on any fourth setting item, save the fifth LUT corresponding to the fourth setting item selected by the user.
  • the fourth setting item includes a preview image processed by using a corresponding fifth LUT, which is used to present a display effect corresponding to the fifth LUT.
  • the present application provides an electronic device, which includes a memory, a display screen, one or more cameras, and one or more processors. Memory, display screen, camera and processor are coupled. Wherein, computer program codes are stored in the memory, and the computer program codes include computer instructions.
  • When the computer instructions are executed by the processor, the electronic device is caused to perform the following steps: acquire a first image, where the first image is an image collected by a camera of the electronic device and includes a first subject; use the first image as input and run a preset artificial intelligence (AI) model to obtain multiple third weighting coefficients of multiple third color lookup tables (LUTs), where the sum of the multiple third weighting coefficients is 1 and the multiple third LUTs are in one-to-one correspondence with the multiple third weighting coefficients; use the multiple third weighting coefficients to calculate the weighted sum of the multiple third LUTs to obtain the first LUT; and process the first image according to the first LUT to obtain a second image and display the second image, where the display effect of the second image corresponds to the first LUT.
  • When the computer instructions are executed by the processor, the electronic device further performs the following steps: calculate a weighted sum of the multiple third LUTs by using the multiple third weighting coefficients to obtain the fourth LUT of the first image; and calculate the weighted sum of the fourth LUT of the first image and the first LUT of the fifth image to obtain the first LUT.
  • Wherein, the fifth image is the previous frame image of the first image, and the third LUT of the previous frame image of the first frame image captured by the electronic device during this shooting process is the preset LUT.
  • When the computer instructions are executed by the processor, the electronic device further performs the following steps: before taking the first image as input and running the preset AI model to obtain the multiple third weighting coefficients of the multiple third LUTs, obtain multiple sets of data pairs, where each set of data pairs includes a sixth image and a seventh image, and the sixth image is an image that satisfies a preset condition obtained by processing the seventh image; and use the seventh image and the sixth image as input samples to train the preset AI model, so that the preset AI model has the ability to determine which weights to use to compute the weighted sum of the multiple third LUTs, such that the resulting LUT can process the seventh image to obtain the display effect of the sixth image.
  • the present application provides a computer-readable storage medium, where the computer-readable storage medium includes computer instructions, and when the computer instructions are run on an electronic device, the electronic device is caused to execute the method described in the first aspect or the second aspect and any possible design manner thereof.
  • the present application provides a computer program product.
  • When the computer program product is run on a computer, the computer is caused to execute the method described in the first aspect or the second aspect and any possible design manner thereof.
  • the computer may be the electronic device described above.
  • FIG. 1 is a schematic diagram of display effects or styles corresponding to various LUTs
  • FIG. 2 is a schematic diagram of a viewfinder interface for taking pictures of a mobile phone
  • FIG. 3 is a schematic diagram of a viewfinder interface for video recording of a mobile phone
  • FIG. 4 is a schematic diagram of a hardware structure of an electronic device provided in an embodiment of the present application.
  • FIG. 5 is a flow chart of an image processing method provided in an embodiment of the present application.
  • FIG. 6 is a schematic diagram of a viewfinder interface for taking pictures of a mobile phone provided in an embodiment of the present application
  • FIG. 7A is a flow chart of another image processing method provided by the embodiment of the present application.
  • FIG. 7B is a schematic diagram of the principle of determining the final LUT (i.e., the first LUT) of the T-th frame image provided by the embodiment of the present application;
  • FIG. 7C is a flowchart of another image processing method provided by the embodiment of the present application.
  • FIG. 7D is a schematic diagram of the principle of determining the final LUT (i.e., the first LUT) of the T-th frame image provided by the embodiment of the present application;
  • FIG. 7E is a schematic diagram of a viewfinder interface for taking pictures of another mobile phone provided in the embodiment of the present application.
  • FIG. 7F is a schematic diagram of a viewfinder interface for taking pictures of another mobile phone provided in the embodiment of the present application.
  • FIG. 8 is a schematic diagram of a viewfinder interface of a mobile phone video provided in an embodiment of the present application.
  • FIG. 9 is a schematic diagram of a video viewfinder interface of another mobile phone provided in an embodiment of the present application.
  • FIG. 10 is a schematic diagram of a video viewfinder interface of another mobile phone provided in an embodiment of the present application.
  • FIG. 11A is a flow chart of another image processing method provided by the embodiment of the present application.
  • FIG. 11B is another schematic diagram of the principle of determining the final LUT (i.e., the first LUT) of the T-th frame image provided by the embodiment of the present application;
  • FIG. 11C is another schematic diagram of the principle of determining the final LUT (i.e., the first LUT) of the T-th frame image provided by the embodiment of the present application;
  • FIG. 12A is a flow chart of another image processing method provided by the embodiment of the present application.
  • FIG. 12B is another schematic diagram of the principle of determining the final LUT (i.e., the first LUT) of the T-th frame image provided by the embodiment of the present application;
  • FIG. 12C is another schematic diagram of the principle of determining the final LUT (i.e., the first LUT) of the T-th frame image provided by the embodiment of the present application;
  • FIG. 13 is a flow chart of another image processing method provided by the embodiment of the present application.
  • FIG. 14A is a schematic view of another video viewfinder interface of a mobile phone provided in the embodiment of the present application.
  • FIG. 14B is a schematic diagram of another video viewfinder interface of a mobile phone provided in the embodiment of the present application.
  • FIG. 15A is another schematic diagram of the principle of determining the final LUT (i.e., the first LUT) of the T-th frame image provided by the embodiment of the present application;
  • FIG. 15B is another schematic diagram of the principle of determining the final LUT (i.e., the first LUT) of the T-th frame image provided by the embodiment of the present application;
  • FIG. 16A is a schematic diagram of another mobile phone video viewfinder interface provided by the embodiment of the present application.
  • FIG. 16B is a schematic diagram of another mobile phone video viewfinder interface provided by the embodiment of the present application.
  • FIG. 17A is another schematic diagram of the principle of determining the final LUT (that is, the fourth LUT) of the T-th frame image provided by the embodiment of the present application;
  • FIG. 17B is another schematic diagram of the principle of determining the final LUT (that is, the fourth LUT) of the T-th frame image provided by the embodiment of the present application;
  • FIG. 18A is a schematic diagram of another mobile phone video viewfinder interface provided by the embodiment of the present application.
  • FIG. 18B is a schematic diagram of another mobile phone video viewfinder interface provided by the embodiment of the present application.
  • FIG. 18C is a schematic diagram of another mobile phone video viewfinder interface provided by the embodiment of the present application.
  • FIG. 19 is a schematic structural diagram of a chip system provided by an embodiment of the present application.
  • The terms "first" and "second" are used for descriptive purposes only and cannot be understood as indicating or implying relative importance or implicitly specifying the quantity of the indicated technical features. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of such features. In the description of the embodiments, unless otherwise specified, "plurality" means two or more.
  • RGB (Red, Green, Blue)
  • the three primary colors RGB include red (Red), green (Green), and blue (Blue). By mixing these three colors of light in different proportions, a variety of colors can be obtained.
  • the image collected by the camera is composed of pixels, and each pixel is composed of red sub-pixels, green sub-pixels and blue sub-pixels.
  • The values of R, G, and B each range from 0 to 255. For example, RGB(255, 0, 0) means pure red, RGB(0, 255, 0) means pure green, and RGB(0, 0, 255) means pure blue. In short, mixing these three colors in different proportions produces rich and varied colors.
  • LUT (color look-up table)
  • An LUT file (or LUT parameter) is a red-green-blue (RGB) mapping table.
  • An image consists of many pixels, and each pixel is represented by an RGB value.
  • the display screen of the electronic device can display the image according to the RGB value of each pixel in the image. In other words, these RGB values will tell the display how to emit light to mix a variety of colors to present to the user. If you want to change the color (or style, effect) of the image, you can adjust these RGB values.
  • The LUT is an RGB mapping table, which is used to represent the correspondence between RGB values before and after adjustment.
  • For example, Table 1 shows an example of an LUT.
  • After being mapped by the LUT shown in Table 1, input RGB values are converted to output RGB values such as (6, 9, 4), (66, 17, 47), (117, 82, 187), and (255, 247, 243).
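  • As an illustration of how such a full RGB mapping table can be represented and applied, here is a minimal NumPy sketch. The identity table and the warm-tone adjustment are hypothetical and are not the values of Table 1.

```python
import numpy as np

# Full-resolution mapping table: one output RGB triple for every possible
# input RGB triple, i.e. an array of shape (256, 256, 256, 3).
r, g, b = np.meshgrid(*([np.arange(256, dtype=np.uint8)] * 3), indexing="ij")
lut = np.stack([r, g, b], axis=-1)  # identity LUT: output == input

# Hypothetical adjustment: boost red a little to mimic a "warm" style.
lut[..., 0] = np.clip(lut[..., 0].astype(np.int16) + 20, 0, 255).astype(np.uint8)

def apply_rgb_lut(image: np.ndarray, lut: np.ndarray) -> np.ndarray:
    """Map each pixel's (R, G, B) value through the look-up table."""
    return lut[image[..., 0], image[..., 1], image[..., 2]]

pixel = np.array([[[200, 120, 90]]], dtype=np.uint8)  # a 1x1 "image"
print(apply_rgb_lut(pixel, lut))                      # -> [[[220 120  90]]]
```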
  • the display effect of the image not processed by LUT is different from that of the image processed by LUT; the same image can be processed by different LUTs, and display effects of different styles can be obtained.
  • the "display effect" of the image described in the embodiments of the present application refers to the image effect that can be observed by human eyes after the image is displayed on the display screen.
  • LUT 1, LUT 2, and LUT 3 shown in Figure 1 are different LUTs.
  • For example, after the same image is processed by LUT 1, the image 101 shown in FIG. 1 can be obtained.
  • After the image is processed by LUT 2, the image 102 shown in FIG. 1 can be obtained.
  • After the image is processed by LUT 3, the image 103 shown in FIG. 1 can be obtained. Comparing the image 101, the image 102 and the image 103 shown in FIG. 1 shows that the display effects of the image 101, the image 102 and the image 103 are different.
  • the preview image can only be processed by using the pre-configured LUT before shooting, the LUT selected by the user, or the LUT determined by identifying the preview image.
  • the mobile phone may display the viewfinder interface 201 for taking pictures shown in (a) of FIG. 2 in response to the user's click operation on the icon of the camera application.
  • the viewfinder interface 201 for taking pictures may include a preview image 202 captured by the camera and an AI shooting switch 203 .
  • the preview image 202 is an image without LUT processing.
  • the AI shooting switch 203 is used to trigger the mobile phone to recognize the shooting scene corresponding to the preview image 202 .
  • the mobile phone can receive the user's click operation on the AI shooting switch 203.
  • the mobile phone can identify the shooting scene (such as a character scene) corresponding to the preview image 202 .
  • multiple preset LUTs may be stored in the mobile phone, and each preset LUT corresponds to a shooting scene.
  • For example, preset LUTs corresponding to character scenes, food scenes, plant scenes, animal scenes, and sea scenes can be saved in the mobile phone. It should be noted that using the LUT corresponding to each shooting scene to process an image of that scene can improve the display effect in that shooting scene.
  • the mobile phone can process the preview image 202 by using the preset LUT corresponding to the identified shooting scene.
  • the mobile phone processes the preview image 202 by using the preset LUT corresponding to the above shooting scene, and can obtain the preview image 205 shown in (b) in FIG. 2 .
  • The mobile phone may display the camera viewfinder interface 204 shown in (b) in FIG. 2.
  • the mobile phone may display a video viewfinder interface 301 shown in (a) in FIG. 3 .
  • the viewfinder interface 301 of the video may include a preview image 303 captured by the camera and shooting style options 302 .
  • the preview image 303 is an image without LUT processing.
  • the mobile phone can receive the user's click operation on the shooting style option 302 .
  • the mobile phone may display a style selection interface 304 shown in (b) in FIG. 3 , which is used to prompt the user to select the shooting style/effect of the video.
  • the style selection interface 304 may include prompt information 304 "Please select the shooting style/effect you need".
  • the style selection interface 304 may also include multiple style options, such as the original image option, the ** style option, the ## style option and the && style option. Each style option corresponds to a preset LUT and is used to trigger the mobile phone to use the corresponding preset LUT to process the preview image of the video.
  • the above multiple styles may include: natural style, gray tone style, oil painting style, black and white style, travel style, gourmet style, landscape style, character style, pet style, still life style, and the like.
  • the mobile phone can use the preset LUT corresponding to the ## style to process the preview image 306 of the video; for example, the mobile phone can display the video viewfinder interface 305 shown in (c) in FIG. 3, which can include the preview image 306.
  • Among them, the original image option shown in (b) in FIG. 3 corresponds to the image that has not been processed by any LUT, the ** style option corresponds to the image processed by the LUT of the ** style, the ## style option corresponds to the image processed by the LUT of the ## style, and the && style option corresponds to the image processed by the LUT of the && style.
  • the display effects of the four images shown in (b) in FIG. 3 are different.
  • the mobile phone can only take photos or videos with the style or display effect corresponding to the above-mentioned pre-configured LUT, the LUT selected by the user, or the LUT determined by recognizing the preview image.
  • the style or display effect of photos or videos taken by mobile phones is single, which cannot meet the diverse shooting needs of current users.
  • An embodiment of the present application provides an image processing method, which can be applied to an electronic device including a camera.
  • the electronic device may determine a scene (that is, a first scene) corresponding to a frame of a first image captured by the camera. Then, the electronic device may determine a first LUT corresponding to the first scene. Finally, the electronic device may use the first LUT of the frame of image, perform image processing on the first image to obtain the second image, and display the second image.
  • the display effect of the second image is the same as the display effect corresponding to the first LUT.
  • the electronic device can dynamically adjust the LUT according to each frame of image acquired by the electronic device during the process of taking pictures or recording videos.
  • display effects or styles corresponding to different LUTs can be presented, and the display effects obtained by taking pictures or video recordings can be enriched.
  • the electronic device in the embodiment of the present application can be a portable computer (such as a mobile phone), a tablet computer, a notebook computer, a personal computer (PC), a wearable electronic device (such as a smart watch), an augmented reality (AR)/virtual reality (VR) device, a vehicle-mounted computer, and the like.
  • FIG. 4 shows a schematic structural diagram of an electronic device 100 provided in an embodiment of the present application.
  • the electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone jack 170D, a sensor module 180, a button 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a subscriber identification module (SIM) card interface 195, and the like.
  • the above-mentioned sensor module 180 may include a pressure sensor, a gyroscope sensor, an air pressure sensor, a magnetic sensor, an acceleration sensor, a distance sensor, a proximity light sensor, a fingerprint sensor 180A, a temperature sensor, a touch sensor 180B, an ambient light sensor, a bone conduction sensor, and the like.
  • the structure illustrated in the embodiment of the present invention does not constitute a specific limitation on the electronic device 100 .
  • the electronic device 100 may include more or fewer components than shown in the figure, or combine certain components, or separate certain components, or arrange different components.
  • the illustrated components can be realized in hardware, software or a combination of software and hardware.
  • the processor 110 may include one or more processing units. For example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, a neural-network processing unit (NPU), and/or a micro controller unit (MCU), etc. Different processing units may be independent devices, or may be integrated into one or more processors.
  • the controller may be the nerve center and command center of the electronic device 100 .
  • the controller can generate an operation control signal according to the instruction opcode and timing signal, and complete the control of fetching and executing the instruction.
  • a memory may also be provided in the processor 110 for storing instructions and data.
  • the memory in processor 110 is a cache memory.
  • the memory may hold instructions or data that the processor 110 has just used or recycled. If the processor 110 needs to use the instruction or data again, it can be called directly from the memory. Repeated access is avoided, and the waiting time of the processor 110 is reduced, thereby improving the efficiency of the system.
  • processor 110 may include one or more interfaces.
  • The interface may include an inter-integrated circuit (I2C) interface, a serial peripheral interface (SPI), an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, etc.
  • the interface connection relationship between the modules shown in the embodiment of the present invention is only a schematic illustration, and does not constitute a structural limitation of the electronic device 100 .
  • the electronic device 100 may also adopt different interface connection manners in the foregoing embodiments, or a combination of multiple interface connection manners.
  • the charging management module 140 is configured to receive a charging input from a charger.
  • the power management module 141 is used for connecting the battery 142 , the charging management module 140 and the processor 110 .
  • the power management module 141 receives the input from the battery 142 and/or the charging management module 140 to provide power for the processor 110 , the internal memory 121 , the external memory, the display screen 194 , the camera 193 , and the wireless communication module 160 .
  • the power management module 141 and the charging management module 140 may also be set in the same device.
  • the wireless communication function of the electronic device 100 can be realized by the antenna 1 , the antenna 2 , the mobile communication module 150 , the wireless communication module 160 , a modem processor, a baseband processor, and the like.
  • Antenna 1 and Antenna 2 are used to transmit and receive electromagnetic wave signals.
  • Each antenna in electronic device 100 may be used to cover single or multiple communication frequency bands. Different antennas can also be multiplexed to improve the utilization of the antennas.
  • Antenna 1 can be multiplexed as a diversity antenna of a wireless local area network.
  • the antenna may be used in conjunction with a tuning switch.
  • the mobile communication module 150 can provide wireless communication solutions including 2G/3G/4G/5G applied on the electronic device 100 .
  • the wireless communication module 160 can provide applications on the electronic device 100 including wireless local area networks (wireless local area networks, WLAN) (such as Wi-Fi network), Bluetooth (blue tooth, BT), global navigation satellite system (global navigation satellite system, GNSS), frequency modulation (frequency modulation, FM), NFC, infrared technology (infrared, IR) and other wireless communication solutions.
  • the electronic device 100 realizes the display function through the GPU, the display screen 194 , and the application processor.
  • the GPU is a microprocessor for image processing, and is connected to the display screen 194 and the application processor. GPUs are used to perform mathematical and geometric calculations for graphics rendering.
  • Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
  • the display screen 194 is used to display images, videos and the like.
  • the display is a touch screen.
  • the electronic device 100 may include 1 or N display screens 194 , where N is a positive integer greater than 1.
  • the electronic device 100 can realize the shooting function through the ISP, the camera 193 , the video codec, the GPU, the display screen 194 and the application processor.
  • the ISP is used for processing the data fed back by the camera 193 .
  • Camera 193 is used to capture still images or video.
  • the electronic device 100 may include 1 or N cameras 193 , where N is a positive integer greater than 1.
  • the NPU is a neural-network (NN) computing processor.
  • Applications such as intelligent cognition of the electronic device 100 can be realized through the NPU, such as: recognition of the film state, image restoration, image recognition, face recognition, speech recognition, text understanding, and the like.
  • the external memory interface 120 can be used to connect an external memory card, such as a Micro SD card, so as to expand the storage capacity of the electronic device 100.
  • the external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function. Such as saving music, video and other files in the external memory card.
  • the internal memory 121 may be used to store computer-executable program codes including instructions.
  • the processor 110 executes various functional applications and data processing of the electronic device 100 by executing instructions stored in the internal memory 121 .
  • the internal memory 121 may include an area for storing programs and an area for storing data.
  • the stored program area can store an operating system, at least one application program required by a function (such as a sound playing function, an image playing function, etc.) and the like.
  • the storage data area can store data created during the use of the electronic device 100 (such as audio data, phonebook, etc.) and the like.
  • the internal memory 121 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, flash memory device, universal flash storage (universal flash storage, UFS) and the like.
  • the electronic device 100 can implement audio functions through the audio module 170 , the speaker 170A, the receiver 170B, the microphone 170C, the earphone interface 170D, and the application processor. Such as music playback, recording, etc.
  • the fingerprint sensor 180A is used to collect fingerprint information.
  • the electronic device 100 can use the fingerprint characteristics of the collected fingerprint information to perform user identity verification (ie, fingerprint identification), so as to realize fingerprint unlocking, access to application locks, fingerprint photography, and fingerprint answering of incoming calls.
  • the touch sensor 180B is also called “touch panel (TP)”.
  • the touch sensor 180B can be disposed on the display screen 194, and the touch sensor 180B and the display screen 194 form a touch screen, also called “touch screen”.
  • the touch sensor 180B is used to detect a touch operation on or near it.
  • the touch sensor can pass the detected touch operation to the application processor to determine the type of touch event.
  • Visual output related to the touch operation can be provided through the display screen 194 .
  • the touch sensor 180B may also be disposed on the surface of the electronic device 100 , which is different from the position of the display screen 194 .
  • the keys 190 include a power key, a volume key and the like.
  • the motor 191 can generate a vibrating reminder.
  • the indicator 192 can be an indicator light, and can be used to indicate charging status, power change, and can also be used to indicate messages, missed calls, notifications, and the like.
  • the SIM card interface 195 is used for connecting a SIM card.
  • An embodiment of the present application provides an image processing method, which can be applied to an electronic device including a camera and a display screen (such as a touch screen).
  • the image processing method may include S501-S504.
  • the mobile phone acquires a first image.
  • the first image is an image collected by a camera of the mobile phone, and the first image includes a first photographed object.
  • the mobile phone may capture the first image on a preview interface of taking pictures of the mobile phone.
  • the mobile phone may display the preview interface 601 shown in (a) in FIG. 6 .
  • the preview interface 601 includes a first image 602 captured by the camera of the mobile phone.
  • the first image 602 is an image without LUT processing.
  • the mobile phone may capture the first image on the preview interface before the video recording of the mobile phone.
  • the mobile phone may display the preview interface 801 shown in (a) in FIG. 8 .
  • the preview interface 801 includes a first image 802 captured by the camera of the mobile phone.
  • the first image 802 is an image without LUT processing.
  • the mobile phone may capture the first image on a viewfinder interface (also called a preview interface) where the mobile phone is recording.
  • the mobile phone may display a preview interface 1003 shown in (b) in FIG. 10 .
  • the preview interface 1003 includes a first image 1004 captured by the camera of the mobile phone.
  • the first image 1004 is an image without LUT processing.
  • the above-mentioned first image may be an image collected by a camera of a mobile phone.
  • the first image may be an original image collected by the camera of the mobile phone, and the first image has not been processed by the ISP.
  • the first image may be a preview image obtained from an image captured by a camera of the mobile phone.
  • the first image may be a preview image after image processing is performed on the original image collected by the camera of the mobile phone.
  • the mobile phone determines a first scene corresponding to the first image. Wherein, the first scene is used to identify the scene corresponding to the first shooting object.
  • the mobile phone determines the first LUT according to the first scene.
  • multiple third LUTs may be pre-configured in the mobile phone.
  • the multiple third LUTs may also be referred to as multiple preset LUTs.
  • the multiple third LUTs are used to process the preview images collected by the camera to obtain images with different display effects, and each third LUT corresponds to a display effect in a shooting scene.
  • the image 101 is obtained by processing the original image 100 with LUT 1 (that is, the third LUT 1, also called preset LUT 1), the image 102 is obtained by processing the original image 100 with LUT 2 (that is, the third LUT 2, also called preset LUT 2), and the image 103 is obtained by processing the original image 100 with LUT 3 (that is, the third LUT 3, also called preset LUT 3). Comparing them, the image 101, the image 102 and the image 103 present different display effects. That is to say, preset LUT 1, preset LUT 2 and preset LUT 3 can correspond to different display effects or styles.
  • the different display effects may be display effects in different shooting scenarios.
  • the shooting scene may be: a character scene, a travel scene, a food scene, a landscape scene, a pet scene, or a still life scene.
  • the corresponding LUT can be used to process the preview image to obtain the corresponding display effect or style. Therefore, the mobile phone can recognize the first image, and determine the shooting scene corresponding to the first image (that is, the first scene). Then, the mobile phone can determine the first LUT according to the first scene.
  • the shooting scene may be a character scene, a travel scene, a food scene, a landscape scene, a pet scene or a still life scene.
  • the images collected in the scene of people may include images of people
  • the images collected in the scene of food may include images of food. Therefore, in the embodiment of the present application, the mobile phone can identify the shooting object included in the first image to determine the shooting scene corresponding to the first image.
  • the mobile phone may use a pre-configured image shooting scene detection algorithm to identify the first image, so as to identify the shooting scene corresponding to the first image (ie, the first shooting scene).
  • the first image is the first image 602 shown in (a) in FIG. 6 .
  • the mobile phone recognizes the first image 602, and can recognize that the shooting scene (that is, the first scene) corresponding to the first image 602 is a character scene. In this way, the mobile phone can determine the third LUT corresponding to the character scene as the first LUT.
  • S503 may include S503a.
  • the mobile phone determines the third LUT corresponding to the first scene among the plurality of third LUTs as the first LUT of the T-th frame image (that is, the first image).
  • Taking the T-th frame image (that is, the first image) being the first image 602 shown in (a) in FIG. 6 as an example, the following describes the method in which the mobile phone performs S502-S503 (including S503a) to determine the first LUT.
  • the mobile phone may perform scene detection on the first image 602, and recognize the first scene (such as a character scene) corresponding to the first image 602 . Then, the mobile phone can perform LUT selection (that is, LUT Select), and select the first LUT corresponding to the character scene from multiple third LUTs (such as third LUTs such as third LUT 1, third LUT 2, and third LUT 3) .
  • the mobile phone, when determining the final LUT, not only refers to the current frame image (that is, the first image), but also refers to the final LUT of the previous frame image of the first image.
  • the smooth transition of display effects or styles corresponding to different LUTs can be realized, the display effect of the multi-frame preview image presented by the electronic device can be optimized, and the user's visual experience in the process of taking pictures or recordings can be improved.
  • S503 may include S503A-S503B.
  • S503A The mobile phone determines, among the plurality of third LUTs, a third LUT corresponding to the first scene as a fourth LUT of the first image.
  • the mobile phone calculates a weighted sum of the fourth LUT of the first image and the first LUT of the fifth image to obtain the first LUT.
  • the fifth image is an image of a previous frame of the first image.
  • the first LUT of the frame preceding the first frame image captured by the mobile phone during this shooting process is a preset LUT.
  • the camera of the mobile phone can collect images in real time and output each frame of the collected images. For example, if the first image is the second frame of image captured by the mobile phone, then the fifth image is the first frame of image captured by the mobile phone. If the first image is the T-th frame image collected by the mobile phone, the fifth image is the T-1-th frame image collected by the mobile phone, T ⁇ 2, and T is an integer.
  • the mobile phone can use the first weighting coefficient P1 and the second weighting coefficient P2 to calculate the weighted sum of the fourth LUT of the T-th frame image (that is, the first image) and the first LUT of the T-1-th frame image (that is, the fifth image), so as to obtain the first LUT of the T-th frame image (that is, the first image).
  • the first weighting coefficient P1 and the second weighting coefficient P2 may also be collectively referred to as time-domain smoothing weights.
  • the first weighting coefficient P1 is the weighting coefficient of the fourth LUT of the T-th frame image
  • the second weighting coefficient P2 is the weighting coefficient of the first LUT of the T-1th frame image.
  • the above-mentioned first weighting coefficient P1 and second weighting coefficient P2 may be preset in the mobile phone.
  • the fourth LUT of the T-th frame image can be marked as Q (T, 2)
  • the first LUT of the T-1-th frame image can be marked as Q (T-1, 3 )
  • the first LUT of the T-th frame image can be recorded as Q (T, 3) .
  • the first LUT of the 0th frame image is the preset LUT. That is, Q (0, 3) is a preset value.
  • the mobile phone can use the following formula (1) to calculate the first LUT of the T-th frame image, that is, Q (T, 3).
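  • Formula (1) itself is not reproduced in this text. Based on the surrounding description, it is presumably an element-wise weighted sum of the two LUTs; the following is a minimal NumPy-style sketch of that interpretation, with all function and variable names being hypothetical.

```python
import numpy as np

def blend_luts(lut_current, lut_previous, p1=0.3, p2=0.7):
    """Temporal smoothing of 3D LUTs, one reading of S503B / formula (1).

    lut_current : fourth LUT of the T-th frame image, Q(T, 2)
    lut_previous: first LUT of the (T-1)-th frame image, Q(T-1, 3)
    p1, p2      : first and second weighting coefficients (assumed to sum to 1)
    Returns the first LUT of the T-th frame image, Q(T, 3).
    """
    lut_current = np.asarray(lut_current, dtype=np.float32)
    lut_previous = np.asarray(lut_previous, dtype=np.float32)
    return p1 * lut_current + p2 * lut_previous

# Example: 17x17x17 RGB LUTs, element-wise weighted sum.
q_t_2 = np.random.rand(17, 17, 17, 3)    # fourth LUT of frame T
q_t1_3 = np.random.rand(17, 17, 17, 3)   # first LUT of frame T-1
q_t_3 = blend_luts(q_t_2, q_t1_3, p1=0.3, p2=0.7)
```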
  • the transition effect of the multi-frame second image is smoother.
  • Taking the T-th frame image (that is, the first image) being the first image 602 shown in (a) in FIG. 6 as an example, the following describes the method in which the mobile phone executes S502-S503 (including S503A-S503B) to determine the first LUT.
  • the mobile phone may perform scene detection on the first image 602, and recognize the first scene (such as a character scene) corresponding to the first image 602. Then, the mobile phone can perform LUT selection (that is, LUT Select), and select the fourth LUT corresponding to the character scene from multiple third LUTs (such as the third LUT 1, the third LUT 2, and the third LUT 3). Finally, the mobile phone can perform a weighted sum (Blending) on the fourth LUT of the T-th frame image (that is, the first image) and the first LUT of the T-1-th frame image (that is, the fifth image) to obtain the first LUT of the T-th frame image.
  • the weighting coefficients of the fourth LUT of the T-th frame image (ie, the first image) and the first LUT of the T-1-th frame image (ie, the fifth image) may be set by the user.
  • the aforementioned preview interface (such as the preview interface 601, the preview interface 801, or the preview interface 1003) may further include a first preset control.
  • the first preset control is used to trigger the mobile phone to set the weights of the fourth LUT of the T-th frame image and the first LUT of the T-1-th frame image, that is, the above-mentioned first weighting coefficient and second weighting coefficient. For example, as shown in (a) in FIG. 7E, the preview interface 701 may include a first preset control 703, which is used to trigger the mobile phone to set the weights of the fourth LUT of the T-th frame image and the first LUT of the T-1-th frame image.
  • the preview interface 701 also includes a first image 702 .
  • the method in the embodiment of the present application may further include S503' and S503′′.
  • the mobile phone displays the first setting item and the second setting item in response to the user's click operation on the first preset control.
  • the first setting item is used to set the first weighting coefficient of the fourth LUT of the T-th frame image
  • the second setting item is used to set the second weighting coefficient of the first LUT of the T-1th frame image
  • the mobile phone may display the preview interface 704 shown in (b) in FIG. 7E .
  • the preview interface 704 includes a first preset control 705 , a first image 706 , a first setting item 707 and a second setting item 708 .
  • the first setting item 707 is used to set the first weighting coefficient of the fourth LUT of the T-th frame image.
  • the second setting item 708 is used to set the second weighting coefficient of the first LUT of the T-1th frame image.
  • the first preset control 705 is in a different state from the first preset control 703; for example, the first preset control 705 is in the on state, while the first preset control 703 is in the off state.
  • the aforementioned preview interface may include the aforementioned first preset control, or may not include the aforementioned first preset control.
  • the mobile phone may receive a first preset operation input by the user on the preview interface.
  • the above S503' may be replaced by: the mobile phone displays the first setting item and the second setting item on the preview interface in response to the user's first preset operation on the preview interface.
  • the first preset operation may be any preset gesture such as an L-shaped gesture, an S-shaped gesture, or a ⁇ -shaped gesture input by the user on a display screen (such as a touch screen) of the mobile phone.
  • the first preset operation may be the user's click operation on the first physical button of the mobile phone.
  • the first physical button may be a physical button in the mobile phone, or a combination of at least two physical buttons.
  • In response to the user's setting operation on the first setting item and/or the second setting item, the mobile phone uses the first weighting coefficient set by the user as the weighting coefficient of the fourth LUT of the T-th frame image, and uses the second weighting coefficient set by the user as the weighting coefficient of the first LUT of the T-1-th frame image.
  • the first weighting coefficient and the second weighting coefficient may be collectively referred to as time-domain smoothing weights.
  • the mobile phone adopts the first LUT of the T-th frame image obtained by the weighting coefficient set by the user.
  • Different display effects can be obtained by using the first LUTs of different T-th frame images to process the same first image.
  • the mobile phone may also display the display effect after the user adjusts the first weighting coefficient and the second weighting coefficient and uses the first LUT of the T-th frame image.
  • the first weighting coefficients shown by the first setting items in (b) in FIG. 7E, (a) in FIG. 7F, and (b) in FIG. 7F (such as the first setting item 713) are all different, and the second weighting coefficients shown by the corresponding second setting items (such as the second setting item 714) are also all different. Therefore, the display effects of the preview image 706 shown in (b) in FIG. 7E, the preview image 709 shown in (a) in FIG. 7F, and the preview image 712 shown in (b) in FIG. 7F are all different.
  • the user can set an appropriate weighting coefficient according to the adjusted display effect.
  • The image 715 shown in (c) in FIG. 7F is the image processed by the LUT determined using the weights (that is, weighting coefficients) shown in (b) in FIG. 7F.
  • the fourth LUT of the T-th frame image can be marked as Q (T, 2)
  • the first LUT of the T-1th frame image can be marked as Q (T-1, 3)
  • the first LUT of the T-th frame image is denoted as Q (T, 3)
  • the first LUT of the 0th frame image is the preset LUT. That is, Q (0, 3) is a preset value.
  • the mobile phone can use the following formula (2) to calculate the first LUT of the T-th frame image, that is, Q (T, 3).
  • the mobile phone processes the first image according to the first LUT to obtain a second image, and displays the second image.
  • the display effect of the second image corresponds to the first LUT of the first image.
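  • The patent does not spell out how the first LUT is applied to the first image in S504. The sketch below shows one common way to map an RGB image through a 3D LUT, using nearest-neighbor lookup for brevity; a real ISP pipeline would more likely use trilinear or tetrahedral interpolation. All names are illustrative, not taken from the patent.

```python
import numpy as np

def apply_3d_lut(image_rgb, lut):
    """Map each RGB pixel of `image_rgb` (uint8, HxWx3) through an NxNxNx3 LUT."""
    n = lut.shape[0]
    # Scale 0..255 pixel values to 0..n-1 lattice indices (nearest neighbor).
    idx = np.clip(np.rint(image_rgb.astype(np.float32) * (n - 1) / 255.0), 0, n - 1).astype(int)
    return lut[idx[..., 0], idx[..., 1], idx[..., 2]].astype(np.uint8)

# Example: an identity 17-point LUT leaves the image essentially unchanged.
n = 17
grid = np.linspace(0, 255, n)
r, g, b = np.meshgrid(grid, grid, grid, indexing="ij")
identity_lut = np.stack([r, g, b], axis=-1)
frame = np.random.randint(0, 256, (4, 4, 3), dtype=np.uint8)
processed = apply_3d_lut(frame, identity_lut)
assert np.allclose(processed, frame, atol=8)  # only quantization error remains
```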
  • the first image is the first image 602 shown in (a) in FIG. 6 .
  • the mobile phone executes S504 to obtain the second image 604 shown in (b) in FIG. 6 and display the preview interface 603 shown in (b) in FIG. 6 .
  • the preview interface 603 includes a second image 604 processed by using the first LUT of the T-th frame image.
  • the display effect of the image not processed by LUT is different from the display effect of the image processed by LUT.
  • the first image 602 shown in (a) in FIG. 6 has not been processed by a LUT, while the second image 604 shown in (b) in FIG. 6 has been processed by the first LUT; therefore, their display effects are different.
  • the photo preview interface 605 includes a preview image 606 .
  • the embodiment of the present application introduces S504 here with reference to FIG. 7D .
  • the mobile phone can execute S504, using the time-domain smoothing weights (including the first weighting coefficient and the second weighting coefficient) shown in FIG. 7D to calculate the weighted sum of the fourth LUT of the T-th frame image and the first LUT of the T-1-th frame image, so as to obtain the first LUT of the T-th frame shown in FIG. 7D.
  • the mobile phone may use the first LUT in the T-th frame shown in FIG. 7D to perform image processing on the preview image collected by the camera to obtain the second image 604 shown in FIG. 7D .
  • the first image is the first image 802 shown in (a) in FIG. 8 as an example.
  • the mobile phone executes S504 to obtain the second image 804 shown in (b) in FIG. 8 and display the preview interface 803 shown in (b) in FIG. 8 .
  • the preview interface 803 includes a second image 804 processed by using the first LUT of the T-th frame image.
  • the display effect of the second image 804 shown in (b) in FIG. 8 is different from the display effect of the first image 802 shown in (a) in FIG. 8 .
  • the viewfinder interface of the camera of the mobile phone may change greatly.
  • the user may move the mobile phone, so that the view content of the mobile phone changes.
  • the user may switch the front and rear cameras of the mobile phone, so that the viewing content of the mobile phone changes. If the framing content of the mobile phone changes greatly, the display effect/style of the mobile phone may change with the change of the framing content.
  • the mobile phone may collect a third image, where the third image is an image collected by the camera of the mobile phone and includes the second shooting object; the mobile phone determines a second scene corresponding to the third image, where the second scene is used to identify the scene corresponding to the second shooting object; the mobile phone determines the second LUT according to the second scene; and the mobile phone processes the third image according to the second LUT to obtain a fourth image, and displays the fourth image.
  • the display effect of the fourth image corresponds to the second LUT.
  • the preview image 804 shown in (b) in FIG. 8 is an image captured by a front camera.
  • the mobile phone can switch to the rear camera to capture images; for example, the mobile phone can display the viewfinder interface 901 of the video shown in (a) in FIG. 9.
  • the viewfinder interface 901 of the video includes a preview image (which can be used as a fourth image) 902 .
  • the preview image 902 serving as the fourth image may be obtained through processing according to the third image collected by the camera. The image content of the preview image 902 differs greatly from that of the preview image 804; therefore, the shooting scenes of the preview image 902 and the preview image 804 may also differ greatly.
  • the shooting scene of the preview image 804 is a character scene (ie, the first scene), and the shooting scene of the preview image 902 may be a food scene (ie, the second scene).
  • the phone can automatically adjust the LUT.
  • the mobile phone may display the viewfinder interface 903 shown in (b) of FIG. 9 .
  • the viewfinder interface 903 of the video includes a preview image (which can be used as a fourth image) 904 .
  • the shooting scene of the preview image 904 (which can be used as the fourth image) is different from that of the preview image 902 (which can be used as the second image), and the LUT used to process the preview image 904 is different from the LUT used to process the preview image 902; therefore, the display effect of the preview image 904 is different from the display effect of the preview image 902.
  • the first image is the first image 1004 in the preview interface 1003 shown in (b) in FIG. 10 .
  • the mobile phone executes S504 to obtain the second image 1006 shown in (c) in FIG. 10 and display the preview interface 1005 shown in (c) in FIG. 10.
  • the preview interface 1005 includes a second image 1006 processed by using the first LUT of the T-th frame image. The display effect of the second image 1006 is different from that of the first image 1004 .
  • the mobile phone may determine a scene corresponding to a frame of the first image captured by the camera (ie, the first scene). Then, the mobile phone may determine the first LUT corresponding to the first scene. Finally, the mobile phone may use the first LUT of this frame of image to perform image processing on the first image to obtain a second image, and display the second image. Wherein, the display effect of the second image is the same as the display effect corresponding to the first LUT.
  • the mobile phone can dynamically adjust the LUT according to each frame of image periodically acquired by the mobile phone during the process of taking pictures or recording videos.
  • display effects or styles corresponding to different LUTs can be presented, and the display effects obtained by taking pictures or video recordings can be enriched.
  • the mobile phone when determining the final LUT, not only refers to the current frame image, but also refers to the final LUT of the previous frame image.
  • the smooth transition of display effects or styles corresponding to different LUTs can be realized, the display effect of the multi-frame preview image presented by the mobile phone can be optimized, and the user's visual experience in the process of taking pictures or recordings can be improved.
  • the images collected by the camera may not only include images of one shooting scene, but may include images of multiple shooting scenes (called complex shooting scenes).
  • the preview image 902 includes images of people, images of food, and images of buildings.
  • If the mobile phone executes the method shown in S503, only the third LUT corresponding to the first scene of the first image is used as the first LUT; or, only the third LUT corresponding to the first scene of the first image is used as the fourth LUT to determine the first LUT. That is to say, the first LUT only refers to the one third LUT corresponding to the first scene of the first image, and does not refer to the third LUTs corresponding to the shooting scenes in the complex shooting scene other than the first scene. In this way, the display effect of the mobile phone may be affected.
  • the mobile phone can use the T-th frame image (that is, the first image) as the input of the preset AI model (such as the preset AI model a), and run the preset AI model to obtain the weights of the above-mentioned multiple third LUTs. Then, the mobile phone can calculate the weighted sum of the multiple third LUTs to obtain the first LUT.
  • the above S502-S503 can be replaced by S1101-S1102.
  • the mobile phone takes the T-th frame image (that is, the first image) as input, and runs the preset AI model a to obtain multiple third weighting coefficients of multiple third LUTs.
  • the sum of the multiple third weighting coefficients is 1, and the multiple third LUTs are in one-to-one correspondence with the multiple third weighting coefficients.
  • the aforementioned preset AI model a may be a neural network model used for LUT weight learning.
  • the preset AI model a may be any of the following neural network models: VGG-net, Resnet and Lenet.
  • the training process of the preset AI model a may include Sa and Sb.
  • the mobile phone acquires multiple sets of data pairs, each set of data pairs includes a sixth image and a seventh image, and the sixth image is an image that satisfies a preset condition obtained by processing the seventh image.
  • the preset condition may specifically be: the display effect of the processed image satisfies a preset standard display effect. That is to say, the above-mentioned sixth image is equivalent to a standard image, and the seventh image is an unprocessed original image. Wherein, the above-mentioned sixth image can be obtained by retouching (for example, with Photoshop, PS) the seventh image. It should be noted that the above multiple sets of data pairs may include multiple data pairs in different shooting scenarios.
  • the mobile phone uses the seventh image and the sixth image as input samples to train the preset AI model a, so that the preset AI model a has the ability to determine which weights should be used to weight and sum the multiple third LUTs so that the resulting LUT, when used to process the seventh image, can achieve the display effect of the sixth image.
  • the preset AI model a can repeatedly perform the following operation (1) and operation (2) until the eighth image obtained by processing the seventh image with the preset AI model a achieves the display effect of the sixth image, which means that the preset AI model a has the above capability.
  • Operation (1) The seventh image is used as input (Input), and the preset AI model a adopts the weights of multiple third LUTs to process the seventh image (Input) to obtain the eighth image (Output).
  • the weights adopted are default weights.
  • the default weight includes a plurality of default weighting coefficients.
  • the multiple default weighting coefficients are in one-to-one correspondence with the multiple third LUTs.
  • the multiple default weighting coefficients are preconfigured in the mobile phone.
  • Operation (2) The preset AI model a adopts the gradient descent method, compares the eighth image (Output) with the sixth image (ie, the standard image), and updates the weights in operation (1).
  • when starting to train the preset AI model a, the above-mentioned multiple default weighting coefficients may all be the same. As the training progresses, the preset AI model a will gradually adjust the weights of the multiple third LUTs, and learn to determine which weights should be used to weight the multiple third LUTs so that the resulting LUT, when used to process the seventh image, can achieve the display effect of the sixth image.
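  • The following is a highly simplified, assumption-laden sketch of the training loop described in operations (1)-(2): a small convolutional network stands in for the preset AI model a, the preset LUTs are toy callables, and an L1 loss plus stochastic gradient descent stands in for the gradient descent step. None of the module or variable names come from the patent.

```python
import torch
import torch.nn as nn

class LutWeightNet(nn.Module):
    """Tiny stand-in for "preset AI model a": predicts M LUT weights from an image."""
    def __init__(self, num_luts):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, num_luts),
        )

    def forward(self, x):
        # Softmax keeps the third weighting coefficients positive and summing to 1.
        return torch.softmax(self.features(x), dim=1)

def apply_weighted_luts(image, luts, weights):
    # luts: list of callables mapping an image tensor to a stylized image tensor.
    styled = torch.stack([lut(image) for lut in luts], dim=1)      # (B, M, C, H, W)
    return (weights[:, :, None, None, None] * styled).sum(dim=1)   # (B, C, H, W)

model = LutWeightNet(num_luts=3)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)           # gradient descent step
luts = [lambda x: x,
        lambda x: x * 0.8,
        lambda x: x.clamp(0, 1) ** 1.2]                            # toy stand-ins for preset LUTs
seventh = torch.rand(1, 3, 64, 64)   # unprocessed original image (Input)
sixth = torch.rand(1, 3, 64, 64)     # standard, retouched image

weights = model(seventh)                              # operation (1): predict LUT weights
eighth = apply_weighted_luts(seventh, luts, weights)  # operation (1): process Input -> Output
loss = nn.functional.l1_loss(eighth, sixth)           # operation (2): compare Output with the standard image
optimizer.zero_grad()
loss.backward()
optimizer.step()                                      # operation (2): update weights by gradient descent
```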
  • the mobile phone calculates a weighted sum of multiple third LUTs by using multiple third weighting coefficients to obtain the first LUT of the T-th frame image.
  • Taking the T-th frame image (that is, the first image) being the first image 902 shown in (a) in FIG. 9 as an example, the following describes the method in which the mobile phone executes S1101-S1102 to determine the first LUT of the T-th frame image, and the method in which the mobile phone executes S504 to obtain the second image.
  • the mobile phone can execute S1101, take the first image 902 as an input, and run the preset AI model a shown in FIG. 11B to obtain multiple third weighting coefficients shown in FIG. 11B .
  • the sum of the multiple third weighting coefficients is 1, and there is a one-to-one correspondence between the multiple third LUTs and the multiple third weighting coefficients.
  • the preset AI model a shown in FIG. 11B outputs M third weighting coefficients, M ⁇ 2, and M is an integer.
  • the third weighting coefficient corresponding to the third LUT 1 (that is, preset LUT 1) is K (T, 1), the third weighting coefficient corresponding to the third LUT 2 (that is, preset LUT 2) is K (T, 2), the third weighting coefficient corresponding to the third LUT 3 (that is, preset LUT 3) is K (T, 3), ..., and the third weighting coefficient corresponding to the third LUT M (that is, preset LUT M) is K (T, M).
  • the mobile phone may execute S1102 to calculate the weighted sum of M third LUTs according to the following formula (4) by using the above-mentioned multiple third weighting coefficients to obtain the first LUT of the T-th frame image.
  • the first LUT of the T-th frame image may be marked as Q (T, 3)
  • the third LUT m may be marked as Q (T, m, 1).
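  • Formula (4) is not reproduced in this text; one natural reading, given the description, is an element-wise weighted sum of the M preset LUTs using the coefficients K (T, m). A minimal NumPy sketch under that assumption, with all names hypothetical:

```python
import numpy as np

def combine_preset_luts(preset_luts, coefficients):
    """Weighted sum of M preset (third) LUTs, one interpretation of formula (4).

    preset_luts : array of shape (M, N, N, N, 3) holding Q(T, m, 1) for m = 1..M
    coefficients: array of shape (M,) holding K(T, m), e.g. the output of preset AI model a
    Returns the first LUT of the T-th frame image, Q(T, 3).
    """
    preset_luts = np.asarray(preset_luts, dtype=np.float32)
    coefficients = np.asarray(coefficients, dtype=np.float32)
    return np.tensordot(coefficients, preset_luts, axes=1)  # sum over m of K(T,m) * Q(T,m,1)

# Example with M = 3 preset LUTs of size 17x17x17.
luts = np.random.rand(3, 17, 17, 17, 3)
k = np.array([0.2, 0.5, 0.3])             # third weighting coefficients, summing to 1
first_lut = combine_preset_luts(luts, k)  # shape (17, 17, 17, 3)
```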
  • the mobile phone may execute S504 to perform image processing on the first image 902 by using the first LUT of the T-th frame image shown in FIG. 11B to obtain the second image 904 shown in FIG. 11B .
  • the mobile phone determines the first LUT of the T-th frame image by referring not only to the one third LUT corresponding to the first scene of the first image, but also to the other third LUTs among the multiple third LUTs.
  • the mobile phone, when determining the final LUT, not only refers to the current frame image (that is, the first image), but also refers to the final LUT of the previous frame image of the first image.
  • the smooth transition of display effects or styles corresponding to different LUTs can be realized, the display effect of the multi-frame preview image presented by the electronic device can be optimized, and the user's visual experience in the process of taking photos or recordings can be improved.
  • S1102 may include: the mobile phone adopts the plurality of third weighting coefficients to calculate the weighted sum of the plurality of third LUTs, so as to obtain the fourth LUT of the T-th frame image; and the mobile phone calculates the weighted sum of the fourth LUT of the T-th frame image and the first LUT of the T-1-th frame image (that is, the fifth image), so as to obtain the first LUT of the T-th frame image.
  • FIG. 11C shows the method for determining the first LUT of the T-th frame image by the mobile phone in S1101-S1102 in this embodiment; and the schematic diagram of the method for obtaining the second image by the mobile phone in S504.
  • the mobile phone can use both the T-th frame image (that is, the first image) and the scene detection result of the first image as the input of the AI model (such as the preset AI model b), and run the AI model to obtain the weights of the above-mentioned multiple third LUTs. Then, the mobile phone can calculate the weighted sum of the multiple third LUTs to obtain the first LUT. Specifically, as shown in FIG. 12A, S503 can be replaced with S1201-S1202.
  • the mobile phone takes the indication information of the first scene and the first image (that is, the T-th frame image) as input, and runs the preset AI model b to obtain multiple third weighting coefficients of multiple third LUTs.
  • the sum of the multiple third weighting coefficients is 1, and the multiple third LUTs are in one-to-one correspondence with the multiple third weighting coefficients.
  • the aforementioned preset AI model b may be a neural network model used for LUT weight learning.
  • the preset AI model b may be any of the following neural network models: VGG-net, Resnet and Lenet.
  • the training process of the preset AI model b may include Si, Sii, and Siii.
  • the mobile phone acquires multiple sets of data pairs, each set of data pairs includes a sixth image and a seventh image, and the sixth image is an image that satisfies a preset condition obtained by processing the seventh image.
  • Si is the same as the above-mentioned Sa, and will not be described in detail here in the embodiment of the present application.
  • the mobile phone recognizes the seventh image, and determines the third scene corresponding to the seventh image.
  • the method for the mobile phone to identify the seventh image to determine the third scene corresponding to the seventh image can refer to the method for the mobile phone to identify the first scene corresponding to the first image, which will not be described in this embodiment of the present application.
  • the mobile phone uses the seventh image and the sixth image, together with the indication information for identifying the third scene, as input samples to train the preset AI model b, so that the preset AI model b has the ability to determine which weights should be used to weight and sum the multiple third LUTs so that the resulting LUT, when used to process the seventh image, can achieve the display effect of the sixth image.
  • the input samples of the preset AI model b additionally include the indication information of the third scene corresponding to the seventh image.
  • the training principle of the preset AI model b is the same as that of the aforementioned preset AI model a. The difference is that the indication information of the third scene corresponding to the seventh image may more clearly indicate the shooting scene corresponding to the seventh image.
  • the indication information of the third scene can guide the training of the preset AI model b, guiding the preset AI model b to train in a direction inclined to the third scene. In this way, the convergence of the preset AI model b can be accelerated, and the number of training times of the preset AI model b can be reduced.
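  • The patent does not specify how the indication information of the third scene is fed into the preset AI model b. One plausible, purely assumed encoding is a one-hot scene vector concatenated with pooled image features, as in the sketch below; all module and variable names are hypothetical.

```python
import torch
import torch.nn as nn

class LutWeightNetWithScene(nn.Module):
    """Stand-in for "preset AI model b": image + scene indication -> M LUT weights."""
    def __init__(self, num_luts, num_scenes):
        super().__init__()
        self.num_scenes = num_scenes
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Image features are concatenated with a one-hot scene vector.
        self.head = nn.Linear(16 + num_scenes, num_luts)

    def forward(self, image, scene_id):
        feats = self.backbone(image)
        scene = nn.functional.one_hot(scene_id, num_classes=self.num_scenes).float()
        return torch.softmax(self.head(torch.cat([feats, scene], dim=1)), dim=1)

model_b = LutWeightNetWithScene(num_luts=3, num_scenes=6)  # e.g. person/travel/food/landscape/pet/still life
weights = model_b(torch.rand(1, 3, 64, 64), torch.tensor([2]))  # scene id 2 = food (hypothetical mapping)
```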
  • the mobile phone calculates a weighted sum of multiple third LUTs using multiple third weighting coefficients to obtain the first LUT of the T-th frame image (ie, the first image).
  • Taking the T-th frame image (that is, the first image) being the first image 902 shown in (a) in FIG. 9 as an example, the following describes the method in which the mobile phone executes S1201-S1202 to determine the first LUT of the T-th frame image, and the method in which the mobile phone executes S504 to obtain the second image.
  • the mobile phone may execute S502 to perform scene detection on the T-th frame image (that is, the first image) 902 to obtain the first scene corresponding to the first image 902 shown in FIG. 12B.
  • the mobile phone can execute S1201, take the first image 902 and the indication information of the first scene as input, and run the preset AI model b shown in FIG. 12B to obtain multiple third weighting coefficients shown in FIG. 12B.
  • the sum of the multiple third weighting coefficients is 1, and the multiple third LUTs are in one-to-one correspondence with the multiple third weighting coefficients.
  • the preset AI model b shown in FIG. 12B outputs M third weighting coefficients, M ⁇ 2, and M is an integer.
  • the mobile phone may execute S1202 to calculate a weighted sum of M third LUTs by using multiple third weighting coefficients to obtain the first LUT of the T-th frame image.
  • the mobile phone may execute S504 to perform image processing on the first image 902 by using the first LUT of the T-th frame shown in FIG. 12B to obtain the second image 904 shown in FIG. 12B.
  • the mobile phone determines the first LUT of the T-th frame image by referring not only to the one third LUT corresponding to the first scene of the first image, but also to the other third LUTs among the multiple third LUTs. Moreover, when determining the multiple third weighting coefficients, the mobile phone also refers to the first image. In this way, the display effect of the mobile phone can be improved.
  • the mobile phone when determining the final LUT, may not only refer to the current frame image (that is, the first image), but also refer to the final LUT of the previous frame image of the first image. In this way, in the process of changing the LUT, the smooth transition of display effects or styles corresponding to different LUTs can be realized, the display effect of the multi-frame preview image presented by the electronic device can be optimized, and the user's visual experience in the process of taking photos or recordings can be improved.
  • S1202 may include: the mobile phone adopts the plurality of third weighting coefficients to calculate the weighted sum of the plurality of third LUTs, so as to obtain the fourth LUT of the T-th frame image; and the mobile phone calculates the weighted sum of the fourth LUT of the T-th frame image and the first LUT of the T-1-th frame image (that is, the fifth image), so as to obtain the first LUT of the T-th frame image.
  • FIG. 12C shows the method of determining the first LUT of the T-th frame image by the mobile phone in S1201-S1202 in this embodiment; and the schematic diagram of the method of obtaining the second image by the mobile phone in S504.
  • the user may adjust at least one third weighting coefficient among the plurality of third weighting coefficients output by the preset AI model a or the preset AI model b. That is to say, the mobile phone may receive the user's adjustment operation on the plurality of third weighting coefficients, and calculate the first LUT of the T-th frame image by using the plurality of third weighting coefficients adjusted by the user.
  • the method in this embodiment of the present application may further include S1301-S1302.
  • the above S1102 or S1202 may be replaced by S1303.
  • the mobile phone displays multiple third setting items in response to the user's click operation on the second preset control.
  • Each third setting item corresponds to a third LUT, and is used for setting a third weighting coefficient of the third LUT.
  • the above preview interface may further include a second preset control.
  • the second preset control is used to trigger the mobile phone to display multiple third setting items of the multiple third weighting coefficients, so that the user can set the weights of the multiple third LUTs through the multiple third setting items.
  • the preview interface 1401 includes a second preset control 1402 .
  • the method in the embodiment of the present application is introduced by taking the third setting item as the scroll bar shown in (a) in FIG. 14A as an example. It can be seen from the foregoing embodiments that each shooting style and shooting scene may correspond to a third LUT.
  • the mobile phone can set the weight (ie, the weighting coefficient) corresponding to the third LUT through the above third setting item.
  • In response to the user's click operation on the second preset control 1402, the display state of the second preset control 1402 changes; for example, the mobile phone can display the second preset control 1406 shown in (b) in FIG. 14A.
  • the display state corresponding to the second preset control 1402 (for example, the display state of black characters on a white background) is used to indicate that the second preset control is in an off state.
  • the display state corresponding to the second preset control 1406 (for example, the display state of white characters on a black background) is used to indicate that the second preset control is in an on state.
  • the preview interface 1403 also includes a second image 1404 .
  • the display effect of the second image 1404 is the display effect obtained by performing weighted sum calculation with the multiple third weighting coefficients shown in the multiple third setting items 1405 and then processing the first image with the resulting fourth LUT of the T-th frame.
  • the above-mentioned preview interface may include the above-mentioned second preset control, or may not include the above-mentioned second preset control.
  • the mobile phone may receive a second preset operation input by the user on the preview interface.
  • the above S1301 may be replaced by: the mobile phone displays multiple third setting items on the preview interface in response to the user's second preset operation on the preview interface.
  • the second preset operation may be any preset gesture such as an L-shaped gesture, an S-shaped gesture, or a ⁇ -shaped gesture input by the user on a display screen (such as a touch screen) of the mobile phone.
  • the preset gesture corresponding to the second preset operation is different from the preset gesture corresponding to the first preset operation.
  • the second preset operation may be the user's click operation on the second physical button of the mobile phone.
  • the second physical button may be a physical button in the mobile phone, or a combination of at least two physical buttons.
  • the second physical key is different from the above-mentioned first physical key.
  • the mobile phone updates a corresponding third weighting coefficient in response to the user's setting operation on one or more third setting items among the plurality of third setting items.
  • the mobile phone may receive a user's setting operation on multiple third setting items 1405 shown in (b) in FIG. 14A , and display a preview interface 1407 shown in (a) in FIG. 14B .
  • the preview interface 1407 includes a plurality of third setting items 1409 .
  • the multiple third weighting coefficients indicated by the multiple third setting items 1409 are different from the multiple third weighting coefficients indicated by the multiple third setting items 1405 . That is to say, in response to the user's setting operation on the multiple third setting items 1405, the mobile phone updates the multiple third weighting coefficients from the third weighting coefficients indicated by the multiple third setting items 1405 to the multiple third setting items The third weighting coefficient shown in 1409 .
  • the preview interface 1407 also includes a second image 1408 .
  • the display effect of the second image 1408 is: using multiple third weighting coefficients shown in multiple third setting items 1409 to carry out weighted sum calculation, and finally obtain the display effect obtained by processing the first image with the first LUT of the Tth frame. Comparing (a) in FIG. 14B with (b) in FIG. 14A , it can be seen that the display effect of the second image 1408 is different from the display effect of the second image 1404 .
  • the mobile phone may receive a user's setting operation on multiple third setting items 1409 shown in (a) in FIG. 14B , and display a preview interface 1410 shown in (b) in FIG. 14B .
  • the preview interface 1410 includes a plurality of third setting items 1412 .
  • the multiple third weighting coefficients indicated by the multiple third setting items 1412 are different from the multiple third weighting coefficients indicated by the multiple third setting items 1409 . That is to say, in response to the user's setting operation on the multiple third setting items 1409, the mobile phone updates the multiple third weighting coefficients from the third weighting coefficients indicated by the multiple third setting items 1409 to multiple third setting items The third weighting coefficient shown in 1412 .
  • the preview interface 1410 also includes a second image 1411 .
  • the display effect of the second image 1411 is: the display effect obtained by processing the first image with the first LUT of the T-th frame by using multiple third weighting coefficients shown in the multiple third setting items 1412 for weighted sum calculation. Comparing (b) in FIG. 14B with (a) in FIG. 14B , it can be seen that the display effect of the second image 1411 is different from the display effect of the second image 1408 .
  • the mobile phone may receive a user's setting operation on one or more third setting items among multiple third setting items.
  • after being updated by the mobile phone, the sum of the multiple third weighting coefficients is not necessarily 1.
  • the user can adjust the above-mentioned multiple third weighting coefficients in real time by adjusting any one of the above-mentioned third setting items. Moreover, the user can observe the display effect of the second image after adjusting the multiple third weighting coefficients, and set appropriate weighting coefficients for the multiple third LUTs.
  • the mobile phone may receive the user's click operation on the second preset control 1406 shown in (b) in FIG. 14B .
  • the mobile phone may hide the above-mentioned multiple third setting items, and display a preview interface 1413 shown in (c) in FIG. 14B.
  • the preview interface 1413 includes a second preset control 1402 and a second image 1414.
  • the mobile phone calculates a weighted sum of multiple third LUTs using the updated multiple third weighting coefficients to obtain the first LUT of the T-th frame image (ie, the first image).
  • With reference to FIG. 15A, this embodiment of the present application introduces the method in which the mobile phone executes S1301-S1303 to determine the first LUT of the T-th frame image, and the method in which the mobile phone executes S504 to obtain the second image.
  • multiple third weighting coefficients as shown in FIG. 15A can be obtained, such as the multiple third weighting coefficients output by the preset AI model a or the preset AI model b.
  • the mobile phone may execute S1301-S1302 to update the above multiple third weighting coefficients by using the user-defined third weighting coefficients to obtain updated multiple third weighting coefficients.
  • the mobile phone may execute S1303 to calculate the weighted sum of M third LUTs according to the following formula (5) by using the updated third weighting coefficients to obtain the first LUT of the T-th frame image.
  • the first LUT of the T-th frame image may be marked as Q (T, 3)
  • the third LUT m may be marked as Q (T, m, 1).
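  • Under the same weighted-sum reading as the formula (4) sketch above, formula (5) would simply substitute the user-updated coefficients for the model-predicted ones; as noted, the updated coefficients need not sum to 1. A brief sketch of that assumption, with hypothetical names:

```python
import numpy as np

# Same element-wise weighted sum as in the formula (4) sketch; the user-adjusted
# coefficients simply replace the model-predicted ones and need not sum to 1.
luts = np.random.rand(3, 17, 17, 17, 3)         # M = 3 preset (third) LUTs, Q(T, m, 1)
k_model = np.array([0.2, 0.5, 0.3])             # coefficients predicted by preset AI model a / b
k_user = k_model.copy()
k_user[1] = 0.8                                 # user drags one third setting item
first_lut = np.tensordot(k_user, luts, axes=1)  # one reading of formula (5), giving Q(T, 3)
```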
  • the mobile phone may execute S504 to perform image processing on the first image by using the first LUT of the T-th frame image shown in FIG. 15A to obtain the second image 1411 shown in FIG. 15A .
  • the mobile phone can not only determine the weighting coefficients of the multiple third LUTs through the preset AI model a or the preset AI model b, but can also provide the user with the option of adjusting the weighting coefficients of the multiple third LUTs. In this way, the mobile phone can calculate the fourth LUT of the T-th frame image according to the weighting coefficients adjusted by the user, and can take the photos or videos that the user wants according to the user's needs, which can improve the user's shooting experience.
  • the mobile phone, when determining the final LUT, not only refers to the current frame image (that is, the first image), but also refers to the final LUT of the previous frame image of the first image.
  • the smooth transition of display effects or styles corresponding to different LUTs can be realized, the display effect of the multi-frame preview image presented by the electronic device can be optimized, and the user's visual experience in the process of taking photos or recordings can be improved.
  • S1303 may include: the mobile phone adopts the plurality of third weighting coefficients to calculate the weighted sum of the plurality of third LUTs, so as to obtain the fourth LUT of the T-th frame image; and the mobile phone calculates the weighted sum of the fourth LUT of the T-th frame image (that is, the first image) and the first LUT of the T-1-th frame image (that is, the fifth image), so as to obtain the first LUT of the T-th frame image.
  • FIG. 15B shows the method for the mobile phone to execute S1301-S1303 to determine the first LUT of the T-th frame image in this embodiment; and the schematic diagram of the method for the mobile phone to execute S504 to obtain the second image.
  • the user can add a LUT in the mobile phone.
  • M third LUTs are preset in the mobile phone.
  • the mobile phone may add an (M+1)-th third LUT, an (M+2)-th third LUT, and so on in the mobile phone.
  • the method in this embodiment of the application may further include S1601-S1603.
  • In response to a user's second preset operation, the mobile phone displays a third preset control.
  • the third preset control is used to trigger the mobile phone to add a new LUT (that is, a display effect corresponding to the LUT).
  • In response to the above-mentioned second preset operation, the mobile phone can not only display a plurality of third setting items, but also display a third preset control.
  • the mobile phone may display the preview interface 1601 shown in (a) in FIG. 16A .
  • the video preview interface 1601 includes a first image 1602 and a third preset control 1603 .
  • the third preset control 1603 is used to trigger the mobile phone to add a LUT, that is, to add a display effect corresponding to the LUT.
  • In response to the user's click operation on the third preset control, the mobile phone displays one or more fourth setting items. Each fourth setting item corresponds to a fifth LUT, each fifth LUT corresponds to a display effect in a shooting scene, and the display effect of the fifth LUT is different from that of the third LUT.
  • the mobile phone may display the preview interface 1604 shown in (b) in FIG. 16A .
  • the preview interface 1604 includes one or more fourth setting items, such as "%% style” setting item, "@@ style” setting item, "& ⁇ style” setting item and “ ⁇ style” setting item and so on.
  • Each fourth setting item corresponds to a fifth LUT.
  • In response to the user's selection operation on any fourth setting item, the mobile phone saves the fifth LUT corresponding to the fourth setting item selected by the user.
  • the mobile phone may save the fifth LUT corresponding to the "@@style” setting item. That is to say, the fifth LUT corresponding to the "@@style” setting item can be used as a third LUT for the mobile phone to execute S503 to determine the first LUT of the T-th frame image.
  • the mobile phone may display the preview interface 1605 shown in (c) in FIG. 16A .
  • the preview interface 1605 shown in (c) in FIG. 16A further includes a fourth setting item corresponding to "@@style".
  • each of the above fourth setting items further includes a preview image processed by using the corresponding fifth LUT, which is used to present the display effect corresponding to the fifth LUT.
  • the "%% style” setting item, "@@ style” setting item, "& ⁇ style” setting item and " ⁇ style” setting item all show the corresponding The preview image after the fifth LUT processing.
  • the above fifth LUT may be pre-stored in the mobile phone, but the fifth LUT is not applied to the camera application of the mobile phone.
  • the fifth LUT selected by the user can be applied to the camera application of the mobile phone.
  • the fifth LUT corresponding to the "@@style" setting item can be used as a third LUT for the mobile phone to execute S503 to determine the first LUT of the T-th frame image.
  • the mobile phone does not provide the above-mentioned multiple fifth LUTs for the user to choose, but the user sets the required LUTs by himself.
  • In response to the user's click operation on the third preset control, the mobile phone may display the fourth interface.
  • the fourth interface includes three adjustment options of RGB LUT parameters, and the three adjustment options are used to set a new LUT.
  • In response to the user's click operation on the third preset control 1603 shown in (a) in FIG. 16A, the mobile phone may display the fourth interface 16007 shown in (a) in FIG. 16B.
  • the fourth interface 16007 includes three adjustment options 1608 .
  • the mobile phone may receive the user's adjustment operations on the three adjustment options 1608, and store the newly added LUT set by the user in response to the user's adjustment operations.
  • the mobile phone may receive the user's adjustment operations on the three adjustment options 1608, and display the fourth interface 1609 shown in (b) in FIG. 16B.
  • the fourth interface 1609 includes three adjustment options 1610 .
  • the LUTs corresponding to the three adjustment options 1610 are different from the LUTs corresponding to the three adjustment options 1608 .
  • the mobile phone may save the LUTs corresponding to the three adjustment options 1610 (ie, newly added LUTs).
  • It should be noted that a LUT (also known as a 3D LUT) is a relatively complex three-dimensional lookup table.
  • Setting a LUT involves adjusting many parameters (such as brightness and color), and it is difficult for manual setting to fine-tune every individual parameter of a LUT. Therefore, in this embodiment of the present application, the function of adding a new LUT may be provided to the user in the form of a global adjustment. That is to say, the above three adjustment options 1608 and the three adjustment options 1610 of the RGB LUT parameters are LUT adjustment options that support global adjustment.
  • First, an initial LUT can be initialized.
  • The cube of this initial LUT is an identity mapping, that is, its output value is exactly the same as its input value.
  • Table 2 shows such an initial LUT; the output value of the initial LUT shown in Table 2 is exactly the same as the input value, both being (10, 20, 30).
  • the values of the progress bars of the three adjustment options of the LUT can be normalized. For example, "0" - "+100” can be normalized to [1.1, 10.0], and "-100" - "0” can be normalized to [0.0, 1.0].
  • the normalized value can be used as the color channel coefficient (such as represented by Rgain, Ggain, Bgain), and multiplied by the input value of the initial LUT to obtain the output value of the newly added LUT.
  • the newly added LUT shown in Table 3 can be obtained from the initial LUT shown in Table 2.
  • For example, suppose the RGB value of a pixel in the original image 1611 shown in (a) in FIG. 16B is (10, 20, 30), and the slider values set by the user normalize to the gains (5.0, 3.7, 5.8), that is, Rgain = 5.0, Ggain = 3.7 and Bgain = 5.8. The mobile phone can calculate the product of the RGB value (10, 20, 30) and the corresponding gain in (5.0, 3.7, 5.8), and obtain the RGB output value (50, 74, 174) of the newly added LUT shown in Table 4, where 50 = 10 × 5.0, 74 = 20 × 3.7 and 174 = 30 × 5.8, as in the sketch below.
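The following Python sketch is illustrative only: NumPy, the 17-point cube size, and the piecewise-linear slider normalization are assumptions (the document does not specify the exact normalization curve, so slider_to_gain will not necessarily reproduce the example gains). It shows how an identity LUT cube can be scaled by per-channel gains to build the newly added LUT, reproducing the (10, 20, 30) → (50, 74, 174) mapping above when the gains (5.0, 3.7, 5.8) are applied directly.

```python
import numpy as np

def identity_lut(size=17):
    # Initial LUT: a cube whose output value equals its input value (Table 2).
    grid = np.linspace(0.0, 255.0, size)
    r, g, b = np.meshgrid(grid, grid, grid, indexing="ij")
    return np.stack([r, g, b], axis=-1)           # shape (size, size, size, 3)

def slider_to_gain(value):
    # Assumed piecewise-linear normalization: "-100".."0" -> [0.0, 1.0],
    # "0".."+100" -> [1.1, 10.0]; the exact curve is not given in the document.
    if value < 0:
        return (value + 100.0) / 100.0
    return 1.1 + (value / 100.0) * (10.0 - 1.1)

def global_adjust(lut, rgain, ggain, bgain):
    # Multiply every LUT entry by the colour-channel coefficients to obtain
    # the newly added LUT (global adjustment).
    gains = np.array([rgain, ggain, bgain])
    return np.clip(lut * gains, 0.0, 255.0)

# Worked example from the text: gains (5.0, 3.7, 5.8) map (10, 20, 30)
# to (50, 74, 174), as in Table 4.
new_lut = global_adjust(identity_lut(), 5.0, 3.7, 5.8)
print(np.array([10, 20, 30]) * np.array([5.0, 3.7, 5.8]))   # [ 50.  74. 174.]
```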
  • In other embodiments, the above fourth interface may further include more user setting items, such as a brightness coefficient slider, a dark-area/bright-area brightness coefficient slider, and per-channel gray-scale curve adjustment.
  • With reference to FIG. 15A, the mobile phone may also perform S1601-S1603 and, as shown in FIG. 17A or FIG. 17B, add a fifth LUT to the multiple third LUTs in response to the user's operation of adding a LUT.
  • the mobile phone may also add a new LUT in the mobile phone in response to the user's operation.
  • Generally speaking, the newly added LUT is set by the user according to his own needs, so it closely fits the user's shooting needs.
  • By using the newly added LUT to process the images captured by the camera, the mobile phone can take photos or videos that the user is more satisfied with, which improves the user's shooting experience.
  • the method of the embodiment of the present application may be applied to a scenario where the mobile phone performs image processing on photos or videos in the mobile phone gallery (or album) (abbreviated as: post-shooting image processing scenario).
  • the mobile phone may execute S501-S504 in response to the user's preset operation on any photo in the album to obtain and display the second image.
  • the mobile phone may display an album list interface 1801 shown in (a) in FIG. 18A , and the album list interface 1801 includes preview items of multiple photos.
  • Generally, in response to the user's click operation on the preview item 1802 of the "little girl" photo (equivalent to the first image) in the album list interface 1801, the mobile phone would directly display the "little girl" photo corresponding to the preview item 1802 (equivalent to the first image).
  • In this embodiment of the present application, however, the mobile phone can execute S501-S504 in response to the user's click operation on the preview item 1802 of the "little girl" photo (equivalent to the first image), and obtain and display the second image 1803 shown in (b) in FIG. 18A. The detail page of the photo shown in (b) in FIG. 18A includes not only the second image 1803 but also an edit button 1804.
  • the edit button 1804 is used to trigger the mobile phone to edit the second image 1803 .
  • the user can trigger the mobile phone to execute S501-S504 in the editing interface of a photo to obtain and display the second image.
  • the mobile phone may display the details page of the photo 1805 (ie the first image) shown in (a) in FIG. 18B .
  • In response to the user's click operation on the edit button 1806 shown in (a) in FIG. 18B, the mobile phone may display the editing interface 1807 shown in (b) in FIG. 18B.
  • The editing interface 1807 includes a "Smart AI" button 1808, a "Crop" button, a "Filter" button and an "Adjust" button.
  • The "Smart AI" button 1809 is used to trigger the mobile phone to adjust the LUT of the first image.
  • The "Crop" button is used to trigger the mobile phone to crop the first image.
  • The "Filter" button is used to trigger the mobile phone to add a filter effect to the first image.
  • The "Adjust" button is used to trigger the mobile phone to adjust parameters of the first image such as contrast, saturation and brightness.
  • In response to the user's click operation on the "Smart AI" button 1809, the mobile phone may execute S501-S504 to obtain and display the second image 1811 shown in (c) in FIG. 18B.
  • the editing interface shown in (c) in FIG. 18B includes not only the second image 1811 but also a save button 1810 .
  • the save button 1810 is used to trigger the mobile phone to save the second image 1811 .
  • In response to the user's click operation on the save button 1810, the mobile phone may save the second image 1811 and display the photo details page of the second image 1811 shown in FIG. 18C.
  • The method for the mobile phone to perform image processing on a video in the mobile phone gallery is similar to the method for performing image processing on photos in the gallery, and is not described again in this embodiment of the present application. The difference is that the mobile phone needs to process each frame of the video, as illustrated in the sketch below.
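As a rough illustration of that per-frame loop (not the actual implementation: the scene detector, the LUT dictionary, the weighting coefficients p1/p2 and the nearest-neighbour lookup are all assumptions of this sketch), each video frame can be processed by detecting its scene, picking the matching preset LUT, blending it with the previous frame's final LUT, and applying the result:

```python
import numpy as np

def apply_lut(image, lut):
    # Nearest-neighbour 3D LUT lookup; trilinear interpolation is omitted for brevity.
    size = lut.shape[0]
    idx = np.round(image.astype(np.float32) / 255.0 * (size - 1)).astype(int)
    idx = np.clip(idx, 0, size - 1)
    return lut[idx[..., 0], idx[..., 1], idx[..., 2]].astype(np.uint8)

def process_gallery_video(frames, detect_scene, scene_luts, default_lut, p1=0.3, p2=0.7):
    # Per-frame pipeline: scene detection -> LUT selection -> temporal blending
    # (p1 + p2 == 1; the values here are placeholders) -> LUT application.
    prev_final_lut = default_lut                      # stands in for the preset LUT of frame 0
    for frame in frames:
        scene = detect_scene(frame)                   # e.g. "portrait", "food", ...
        fourth_lut = scene_luts.get(scene, default_lut)
        final_lut = p1 * fourth_lut + p2 * prev_final_lut
        yield apply_lut(frame, final_lut)
        prev_final_lut = final_lut
```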
  • An embodiment of the present application provides an electronic device, and the electronic device may include: a display screen (such as a touch screen), a camera, a memory, and one or more processors.
  • the display screen, camera, memory and processor are coupled.
  • the memory is used to store computer program code comprising computer instructions.
  • When the processor executes the computer instructions, the electronic device can execute the functions or steps performed by the mobile phone in the foregoing method embodiments.
  • For the structure of the electronic device, reference may be made to the structure of the electronic device 400 shown in FIG. 4.
  • the embodiment of the present application also provides a chip system, as shown in FIG. 19 , the chip system 1900 includes at least one processor 1901 and at least one interface circuit 1902 .
  • the above-mentioned processor 1901 and the interface circuit 1902 may be interconnected through lines.
  • interface circuit 1902 may be used to receive signals from other devices, such as memory of an electronic device.
  • the interface circuit 1902 may be used to send signals to other devices (such as the processor 1901).
  • the interface circuit 1902 can read instructions stored in the memory, and send the instructions to the processor 1901 .
  • When the instructions are executed by the processor 1901, the electronic device may be made to execute the steps executed by the mobile phone in the foregoing embodiments.
  • the chip system may also include other discrete devices, which is not specifically limited in this embodiment of the present application.
  • the embodiment of the present application also provides a computer storage medium, the computer storage medium includes computer instructions, and when the computer instructions are run on the electronic device, the electronic device is made to perform various functions or steps performed by the mobile phone in the above method embodiments.
  • the embodiment of the present application also provides a computer program product, which, when the computer program product is run on a computer, causes the computer to execute each function or step performed by the mobile phone in the method embodiment above.
  • the disclosed devices and methods may be implemented in other ways.
  • the device embodiments described above are only illustrative.
  • the division of the modules or units is only a logical function division. In actual implementation, there may be other division methods.
  • Multiple units or components may be combined or integrated into another device, or some features may be omitted or not implemented.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be through some interfaces, and the indirect coupling or communication connection of devices or units may be in electrical, mechanical or other forms.
  • A unit described as a separate component may or may not be physically separate, and a component displayed as a unit may be one physical unit or multiple physical units, that is, it may be located in one place or distributed to multiple different places. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, each unit may exist separately physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated units can be implemented in the form of hardware or in the form of software functional units.
  • If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a readable storage medium.
  • The technical solutions of the embodiments of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for enabling a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application.
  • The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)

Abstract

本申请公开了一种图像处理方法及电子设备,涉及拍照技术领域,可在拍照或录像过程中动态调整LUT,丰富拍照或录像的显示效果。电子设备获取第一图像,该第一图像为电子设备的摄像头采集的图像,第一图像包括第一拍摄对象;电子设备确定第一图像对应的第一场景,第一场景用于标识第一拍摄对象对应的场景;电子设备根据第一场景确定第一LUT;电子设备根据第一LUT对第一图像进行处理得到第二图像,并显示第二图像,第二图像的显示效果与第一LUT对应。

Description

一种图像处理方法及电子设备
本申请要求于2021年07月31日提交国家知识产权局、申请号为202110877402.X、发明名称为“一种图像处理方法及电子设备”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及拍照技术领域,尤其涉及一种图像处理方法及电子设备。
背景技术
现有的手机一般具有拍照和录像功能,越来越来的人使用手机拍摄照片和视频来记录生活的点点滴滴。目前,手机拍摄(如拍照和录像)时,只能采用拍摄前预先配置的颜色查找表(Look Up Table,LUT)、用户选择的LUT或者识别预览图像确定的LUT来处理预览图像。如此,手机只能拍摄得到上述预先配置或者选择的参数对应的风格或显示效果的照片或视频,手机拍摄的照片或视频的风格或显示效果单一。
发明内容
本申请提供一种图像处理方法及电子设备,可以在拍照或录像过程中动态调整LUT,丰富拍照或录像得到的显示效果。
第一方面,本申请提供一种图像处理方法。该方法中,电子设备可以获取第一图像。该第一图像是电子设备的摄像头采集的图像,该第一图像包括第一拍摄对象。之后,电子设备可以确定第一图像对应的第一场景,第一场景用于标识第一拍摄对象对应的场景。然后,电子设备可以根据第一场景确定第一LUT。最后,电子设备可以根据第一LUT对第一图像进行处理得到第二图像,并显示第二图像。该第二图像的显示效果与第一LUT对应。
采用本方案,电子设备在拍照或录像过程中,可以根据电子设备获取的每一帧图像动态调整LUT。这样,在拍照或录像过程中,便可以呈现出不同LUT对应的显示效果或风格,可以丰富拍照或录像得到的显示效果。
在第一方面的一种可能的设计方式中,在电子设备显示第二图像之后,电子设备可以采集第三图像,该第三图像为电子设备的摄像头采集的图像,第三图像包括第二拍摄对象。电子设备可以确定第二图像对应第二场景,第二场景用于标识第二拍摄对象对应的场景;电子设备根据第二场景确定第二LUT;电子设备根据第二LUT对第三图像进行处理得到第四图像,并显示第四图像,第四图像的显示效果与第二LUT对应。
也就是说,当电子设备的摄像头采集不同拍摄对象的图像时,通过本申请的方法,电子设备可以采用不同的LUT处理图像。这样,可以呈现出不同LUT对应的显示效果或风格,可以丰富拍照或录像得到的显示效果。
在第一方面的另一种可能的设计方式中,电子设备根据第一场景确定第一LUT,可以包括:电子设备将多个第三LUT中第一场景对应的第三LUT,确定为第一图像的第一LUT。
该设计方式中,电子设备可以识别第一图像对应的拍摄场景(即第一场景),根 据该拍摄场景确定第一LUT。其中,多个第三LUT预先配置在电子设备中,用于对电子设备的摄像头采集的图像进行处理得到不同显示效果的图像,每个第一LUT对应一种场景下的显示效果。
在第一方面的另一种可能的设计方式中,电子设备根据第一场景确定第一LUT,可以包括:电子设备将多个第三LUT中第一场景对应的第三LUT,确定为第一图像的第四LUT;电子设备计算第一图像的第四LUT和第五图像的第一LUT的加权和,得到第一LUT。其中,第五图像是第一图像的前一帧图像,电子设备在本次拍摄过程中采集的第1帧图像的前一帧图像的第三LUT是预设LUT。其中,多个第三LUT预先配置在电子设备中,用于对电子设备的摄像头采集的图像进行处理得到不同显示效果的图像,每个第三LUT对应一种场景下的显示效果。
在该设计方式中,电子设备在确定最终LUT时,不仅参考了当前一帧图像,还参考了上一帧图像的最终LUT。这样,可以在改变LUT的过程中,实现不同LUT对应的显示效果或风格的平滑过渡,可以优化电子设备呈现的多帧预览图像的显示效果,提升用户拍照或录像过程中的视觉体验。
在第一方面的另一种可能的设计方式中,电子设备计算第一图像的第四LUT和第五图像的第一LUT的加权和,得到第一LUT,可以包括:电子设备采用预先配置的第一加权系数和第二加权系数,计算第一图像的第四LUT和第五图像的第一LUT的加权和,得到第一LUT。其中,第一加权系数是第一图像的第四LUT的加权系数,第二加权系数是第五图像的第一LUT的加权系数,第一加权系数和第二加权系数之和等于1。
其中,第一加权系数越小,第二加权系数越大,多帧第二图像的过渡效果越平滑。在这种设计方式中,上述第一加权系数和第二加权系数,可以是预先配置在电子设备中的预设权重。
在第一方面的另一种可能的设计方式中,第一加权系数和第二加权系数,可以由用户在电子设备中设置。
具体的,在电子设备采用预先配置的第一加权系数和第二加权系数,计算第一图像的第四LUT和第五图像的第一LUT的加权和,得到第一LUT之前,电子设备可以响应于第一预设操作,显示第一设置项和第二设置项。该第一设置项用于设置第一加权系数,第二设置项用于设置第二加权系数。然后,电子设备响应于用户对第一设置项和/或第二设置项的设置操作,可以将用户设置的第一加权系数作为第一图像的第四LUT的加权系数,将用户设置的第二加权系数作为第五图像的第一LUT的加权系数。
其中,第一预设操作是对电子设备显示的第一预设控件的点击操作,第一预设控件用于触发电子设备设置第一图像的第四LUT和第五图像的第一LUT的权重;或者,第一预设操作是用户对电子设备的第一物理按键的点击操作。
在第一方面的另一种可能的设计方式中,电子设备中预先配置有预设人工智能(artificial intelligence,AI)模型(如预设AI模型b)。该预设AI模型b具备识别第一图像和第一图像的场景检测结果,输出多个第三LUT中每个第三LUT的权重的能力。电子设备可以通过该预设AI模型b得到每个第三LUT的权重;然后,根据得到的权重计算对多个第三LUT的加权和,得到第一LUT。
具体的,上述电子设备根据第一场景确定第一LUT,可以包括:电子设备将第一场景的指示信息和第一图像作为输入,运行预设AI模型,得到多个第三LUT的多个第三加权系数;电子设备采用多个第三加权系数,计算多个第三LUT的加权和,得到第一LUT。其中,多个第三加权系数之和为1,多个第三LUT与多个第三加权系数一一对应。
在该设计方式中,针对复杂的拍摄场景,电子设备确定第一图像的第一LUT,不仅参考了第一图像的第一场景对应的一个第三LUT,还参考了多个第三LUT中除第一场景之外的其他拍摄场景对应的第三LUT。这样,可以提升电子设备的显示效果。
在第一方面的另一种可能的设计方式中,电子设备根据第一场景确定第一LUT,可以包括:电子设备将第一场景的指示信息和第一图像作为输入,运行预设AI模型,得到多个第三LUT的多个第三加权系数;电子设备采用多个第三加权系数,计算多个第三LUT的加权和,得到第一图像的第四LUT;电子设备计算第一图像的第四LUT和第五图像的第一LUT的加权和,得到第一LUT。其中,第五图像是第一图像的前一帧图像,电子设备在本次拍摄过程中采集的第1帧图像的前一帧图像的第三LUT是预设LUT。其中,多个第三加权系数之和为1,多个第三LUT与多个第三加权系数一一对应。
在该设计方式中,电子设备在确定最终LUT时,不仅参考了当前一帧图像,还参考了上一帧图像的最终LUT。这样,可以在改变LUT的过程中,实现不同LUT对应的显示效果或风格的平滑过渡,可以优化电子设备呈现的多帧预览图像的显示效果,提升用户拍照或录像过程中的视觉体验。
在第一方面的另一种可能的设计方式中,在电子设备通过预设AI模型得到每个第三LUT的权重之前,电子设备可以先训练该预设AI模型b,使预设AI模型b具备识别第一图像和第一图像的场景检测结果,输出多个第三LUT中每个第三LUT的权重的能力。
具体的,电子设备可以获取多组数据对,每组数据对包括第六图像和第七图像,第六图像是处理第七图像得到的满足预设条件的图像。然后,电子设备可以识别第七图像,确定第七图像对应的第三场景。最后,电子设备可以将第七图像和第六图像,以及识别第三场景的指示信息作为输入样本,训练预设AI模型,使得预设AI模型具备确定采用何种权重对多个第三LUT求加权和得到的LUT处理第七图像能够得到第六图像的显示效果的能力。
需要说明的是,与上述预设AI模型a不同的是,预设AI模型b的输入样本增加了第七图像对应的第三场景的指示信息。该预设AI模型b的训练原理与上述预设AI模型的训练原理相同。不同的是,第七图像对应的第三场景的指示信息可以更加明确的指示第七图像对应的拍摄场景。
应理解,如果识别到第七图像的拍摄场景为第三场景,则表示该第七图像是第三场景的图像的可能性较高。那么,将拍摄对象对应的第三LUT的加权系数设置为较大值,有利于提升显示效果。由此可见,该第三场景的指示信息可以对预设AI模型b的训练起到引导的作用,引导预设AI模型b向倾向于该第三场景的方向训练。这样,可以加速预设AI模型b的收敛,减少第二预设AI模型的训练次数。
第二方面,本申请提供一种图像处理方法,该方法中,电子设备可获取第一图像,该第一图像为电子设备的摄像头采集的图像,第一图像包括第一拍摄对象。之后,电子设备可以将第一图像作为输入,运行预设AI模型(如预设AI模型a),得到多个第三LUT的多个第三加权系数。多个第三加权系数之和为1,多个第三LUT与多个第三加权系数一一对应。电子设备采用多个第三加权系数,计算多个第三LUT的加权和,得到第一LUT。电子设备根据第一LUT对第一图像进行处理得到第二图像,并显示第二图像,第二图像的显示效果与第一LUT对应。
采用本方案,电子设备在拍照或录像过程中,可以根据电子设备获取的每一帧图像动态调整LUT。这样,在拍照或录像过程中,便可以呈现出不同LUT对应的显示效果或风格,可以丰富拍照或录像得到的显示效果。
并且,电子设备确定第一图像的第一LUT,不仅参考了第一图像的第一场景对应的一个第三LUT,还参考了多个第三LUT中除第一场景之外的其他拍摄场景对应的第三LUT。这样,可以提升电子设备的显示效果。
在第二方面的一种可能的设计方式中,电子设备采用多个第三加权系数,计算多个第三LUT的加权和,得到第一LUT,包括:电子设备采用多个第三加权系数,计算多个第三LUT的加权和,得到第一图像的第四LUT;电子设备计算第一图像的第四LUT和第五图像的第一LUT的加权和,得到第一LUT;其中,第五图像是第一图像的前一帧图像,电子设备在本次拍摄过程中采集的第1帧图像的前一帧图像的第三LUT是预设LUT。
在该设计方式中,电子设备在确定最终LUT时,不仅参考了当前一帧图像,还参考了上一帧图像的最终LUT。这样,可以在改变LUT的过程中,实现不同LUT对应的显示效果或风格的平滑过渡,可以优化电子设备呈现的多帧预览图像的显示效果,提升用户拍照或录像过程中的视觉体验。
在第二方面的另一种可能的设计方式中,在电子设备将第一图像作为输入,运行预设AI模型,得到多个第三LUT的多个第三加权系数之前,电子设备可以训练预设AI模型a。其中,电子设备训练预设AI模型a的方法包括:电子设备获取多组数据对,每组数据对包括第六图像和第七图像,第六图像是处理第七图像得到的满足预设条件的图像;电子设备将第七图像和第六图像作为输入样本,训练预设AI模型,使得预设AI模型具备确定采用何种权重对多个第三LUT求加权和得到的LUT处理第七图像能够得到第六图像的显示效果的能力。
在第一方面或第二方面的另一种可能的设计方式中,用户可以调整上述预设AI模型a或预设AI模型b输出的权重。本申请的方法还可以包括:电子设备响应于用户的第二预设操作,显示多个第三设置项;其中,每个第三设置项对应一个第三LUT,用于设置第三LUT的第三加权系数;电子设备响应于用户对多个第三设置项中一个或多个第三设置项的设置操作,更新对应的第三加权系数。其中,电子设备采用更新后的多个第三加权系数计算多个第三LUT的加权和。
上述第二预设操作是用户对第二预设控件的点击操作,第二预设控件用于触发电子设备设置多个第三LUT的权重;或者,第二预设操作是用户对电子设备中第二物理按键的点击操作。
在该设计方式中,可以由用户调整预设AI模型a或预设AI模型b输出的权重。这样,电子设备可以按照用户的需求调整LUT,如此便可以拍摄到与用户满意度更高的图像。
在第一方面或第二方面的另一种可能的设计方式中,还可以由用户在电子设备中新增LUT。本申请的方法还包括:电子设备响应于用户的第三预设操作,显示一个或多个第四设置项;其中,第三预设操作用于触发电子设备新增显示效果,每个第四设置项对应一种第五LUT,每种第五LUT对应一种拍摄场景下的显示效果,第五LUT与第三LUT不同;响应于用户对预览界面中任一个第四设置项的选择操作,电子设备保存用户选择的第四设置项对应的第五LUT。
在第一方面或第二方面的另一种可能的设计方式中,上述第四设置项包括采用对应第五LUT处理后的预览图像,用于呈现第五LUT对应的显示效果。如此,用户便可以按照电子设备呈现出来的调整后的显示效果,确认是否得到满意的LUT。这样,可以提升用户设置新增LUT的效率。
在第一方面或第二方面的一种可能的设计方式中,电子设备获取第一图像,可以包括:电子设备在电子设备拍照的预览界面、电子设备录像前的预览界面或者电子设备正在录像的取景界面,采集第一图像。也就是说,该方法可以应用于电子设备的拍照场景、正在录像场景和录像模式下录像前的场景。
在第一方面或第二方面的一种可能的设计方式中,第一图像可以是电子设备的摄像头采集的图像。或者,第一图像可以是由电子设备的摄像头采集的图像得到的预览图像。
第三方面,本申请提供一种电子设备,该电子设备包括存储器、显示屏、一个或多个摄像头和一个或多个处理器。该存储器、显示屏、摄像头与处理器耦合。其中,摄像头用于采集图像,显示屏用于显示摄像头采集的图像或者处理器生成的图像,存储器中存储有计算机程序代码,计算机程序代码包括计算机指令,当计算机指令被处理器执行时,使得电子设备执行如第一方面或第二方面及其任一种可能的设计方式所述的方法。
第四方面,本申请提供一种电子设备,该电子设备包括存储器、显示屏、一个或多个摄像头和一个或多个处理器。存储器、显示屏、摄像头与处理器耦合。其中,存储器中存储有计算机程序代码,该计算机程序代码包括计算机指令,当该计算机指令被处理器执行时,使得电子设备执行如下步骤:获取第一图像,第一图像为电子设备的摄像头采集的图像,第一图像包括第一拍摄对象;确定第一图像对应的第一场景,其中,第一场景用于标识第一拍摄对象对应的场景;根据第一场景确定第一颜色查找表LUT;根据第一LUT对第一图像进行处理得到第二图像,并显示第二图像,第二图像的显示效果与第一LUT对应。
在第四方面的一种可能的设计方式中,当计算机指令被处理器执行时,使得电子设备还执行如下步骤:在显示第二图像之后,采集第三图像,第三图像为电子设备的摄像头采集的图像,第三图像包括第二拍摄对象;确定第二图像对应第二场景,其中,第二场景用于标识第二拍摄对象对应的场景;根据第二场景确定第二LUT;根据第二LUT对第三图像进行处理得到第四图像,并显示第四图像,第四图像的显示效果与第 二LUT对应。
在第四方面的另一种可能的设计方式中,当该计算机指令被处理器执行时,使得电子设备还执行如下步骤:在电子设备拍照的预览界面、电子设备录像前的预览界面或者电子设备正在录像的取景界面,采集第一图像。
在第四方面的另一种可能的设计方式中,第一图像是电子设备的摄像头采集的图像;或者,第一图像是由电子设备的摄像头采集的图像得到的预览图像。
在第四方面的另一种可能的设计方式中,当该计算机指令被处理器执行时,使得电子设备还执行如下步骤:将多个第三LUT中第一场景对应的第三LUT,确定为第一图像的第一LUT。其中,多个第三LUT预先配置在电子设备中,用于对电子设备的摄像头采集的图像进行处理得到不同显示效果的图像,每个第一LUT对应一种场景下的显示效果。
在第四方面的另一种可能的设计方式中,当该计算机指令被处理器执行时,使得电子设备还执行如下步骤:将多个第三LUT中第一场景对应的第三LUT,确定为第一图像的第四LUT;其中,多个第三LUT预先配置在电子设备中,用于对电子设备的摄像头采集的图像进行处理得到不同显示效果的图像,每个第三LUT对应一种场景下的显示效果;计算第一图像的第四LUT和第五图像的第一LUT的加权和,得到第一LUT;其中,第五图像是第一图像的前一帧图像,电子设备在本次拍摄过程中采集的第1帧图像的前一帧图像的第三LUT是预设LUT。
在第四方面的另一种可能的设计方式中,当该计算机指令被处理器执行时,使得电子设备还执行如下步骤:采用预先配置的第一加权系数和第二加权系数,计算第一图像的第四LUT和第五图像的第一LUT的加权和,得到第一LUT。其中,第一加权系数是第一图像的第四LUT的加权系数,第二加权系数是第五图像的第一LUT的加权系数,第一加权系数和第二加权系数之和等于1。其中,第一加权系数越小,第二加权系数越大,多帧第二图像的过渡效果越平滑。
在第四方面的另一种可能的设计方式中,当该计算机指令被处理器执行时,使得电子设备还执行如下步骤:在采用预先配置的第一加权系数和第二加权系数,计算第一图像的第四LUT和第五图像的第一LUT的加权和,得到第一LUT之前,响应于第一预设操作,显示第一设置项和第二设置项,第一设置项用于设置第一加权系数,第二设置项用于设置第二加权系数;响应于用户对第一设置项和/或第二设置项的设置操作,将用户设置的第一加权系数作为第一图像的第四LUT的加权系数,将用户设置的第二加权系数作为第五图像的第一LUT的加权系数。其中,第一预设操作是对电子设备显示的第一预设控件的点击操作,第一预设控件用于触发电子设备设置第一图像的第四LUT和第五图像的第一LUT的权重;或者,第一预设操作是用户对电子设备的第一物理按键的点击操作。
在第四方面的另一种可能的设计方式中,当该计算机指令被处理器执行时,使得电子设备还执行如下步骤:将第一场景的指示信息和第一图像作为输入,运行预设AI模型,得到多个第三LUT的多个第三加权系数;其中,多个第三加权系数之和为1,多个第三LUT与多个第三加权系数一一对应;采用多个第三加权系数,计算多个第三LUT的加权和,得到第一LUT。
在第四方面的另一种可能的设计方式中,当该计算机指令被处理器执行时,使得电子设备还执行如下步骤:将第一场景的指示信息和第一图像作为输入,运行预设AI模型,得到多个第三LUT的多个第三加权系数;其中,多个第三加权系数之和为1,多个第三LUT与多个第三加权系数一一对应;采用多个第三加权系数,计算多个第三LUT的加权和,得到第一图像的第四LUT;计算第一图像的第四LUT和第五图像的第一LUT的加权和,得到第一LUT;其中,第五图像是第一图像的前一帧图像,电子设备在本次拍摄过程中采集的第1帧图像的前一帧图像的第三LUT是预设LUT
在第四方面的另一种可能的设计方式中,当该计算机指令被处理器执行时,使得电子设备还执行如下步骤:在根据第一场景确定第一LUT之前,获取多组数据对,每组数据对包括第六图像和第七图像,第六图像是处理第七图像得到的满足预设条件的图像;识别第七图像,确定第七图像对应的第三场景;将第七图像和第六图像,以及识别第三场景的指示信息作为输入样本,训练预设AI模型,使得预设AI模型具备确定采用何种权重对多个第三LUT求加权和得到的LUT处理第七图像能够得到第六图像的显示效果的能力。
在第四方面的另一种可能的设计方式中,当该计算机指令被处理器执行时,使得电子设备还执行如下步骤:响应于第二预设操作,显示多个第三设置项;其中,每个第三设置项对应一个第三LUT,用于设置第三LUT的第三加权系数;响应于用户对多个第三设置项中一个或多个第三设置项的设置操作,更新对应的第三加权系数;其中,电子设备采用更新后的多个第三加权系数计算多个第三LUT的加权和。
其中,第二预设操作是用户对第二预设控件的点击操作,第二预设控件用于触发电子设备设置多个第三LUT的权重;或者,第二预设操作是用户对电子设备中第二物理按键的点击操作。
在第四方面的另一种可能的设计方式中,当该计算机指令被处理器执行时,使得电子设备还执行如下步骤:响应于第三预设操作,显示一个或多个第四设置项;其中,第三预设操作用于触发电子设备新增显示效果,每个第四设置项对应一种第五LUT,每种第五LUT对应一种拍摄场景下的显示效果,第五LUT与第三LUT不同;响应于用户对任一个第四设置项的选择操作,保存用户选择的第四设置项对应的第五LUT。
在第四方面的另一种可能的设计方式中,上述第四设置项包括采用对应第五LUT处理后的预览图像,用于呈现第五LUT对应的显示效果。
第五方面,本申请提供一种电子设备,该电子设备包括存储器、显示屏、一个或多个摄像头和一个或多个处理器。存储器、显示屏、摄像头与处理器耦合。其中,存储器中存储有计算机程序代码,该计算机程序代码包括计算机指令,当该计算机指令被处理器执行时,使得电子设备执行如下步骤:获取第一图像,第一图像为电子设备的摄像头采集的图像,第一图像包括第一拍摄对象;将第一图像作为输入,运行预设人工智能AI模型,得到多个第二颜色查找表LUT的多个第三加权系数;其中,多个第三加权系数之和为1,多个第三LUT与多个第三加权系数一一对应;采用多个第三加权系数,计算多个第三LUT的加权和,得到第一LUT;根据第一LUT对第一图像进行处理得到第二图像,并显示第二图像,第二图像的显示效果与第一LUT对应。
在第五方面的另一种可能的设计方式中,当该计算机指令被处理器执行时,使得 电子设备还执行如下步骤:采用多个第三加权系数,计算多个第三LUT的加权和,得到第一图像的第四LUT;计算第一图像的第四LUT和第五图像的第一LUT的加权和,得到第一LUT;其中,第五图像是第一图像的前一帧图像,电子设备在本次拍摄过程中采集的第1帧图像的前一帧图像的第三LUT是预设LUT。
在第五方面的另一种可能的设计方式中,当该计算机指令被处理器执行时,使得电子设备还执行如下步骤:在将第一图像作为输入,运行预设AI模型,得到多个第三LUT的多个第三加权系数之前,获取多组数据对,每组数据对包括第六图像和第七图像,第六图像是处理第七图像得到的满足预设条件的图像;将第七图像和第六图像作为输入样本,训练预设AI模型,使得预设AI模型具备确定采用何种权重对多个第三LUT求加权和得到的LUT处理第七图像能够得到第六图像的显示效果的能力。
第六方面,本申请提供一种计算机可读存储介质,该计算机可读存储介质包括计算机指令,当计算机指令在电子设备上运行时,使得电子设备执行如第一方面或第二方面及其任一种可能的设计方式所述的方法。
第七方面,本申请提供一种计算机程序产品,当该计算机程序产品在计算机上运行时,使得该计算机执行如第一方面或第二方面及任一种可能的设计方式所述的方法。该计算机可以是上述电子设备。
可以理解地,上述提供的第二方面、第三方面及其任一种可能的设计方式所述的电子设备,第四方面所述的计算机存储介质,第五方面所述的计算机程序产品所能达到的有益效果,可参考第一方面及其任一种可能的设计方式中的有益效果,此处不再赘述。
附图说明
图1为多种LUT对应的显示效果或风格的示意图;
图2为一种手机的拍照的取景界面示意图;
图3为一种手机的录像的取景界面示意图;
图4为本申请实施例提供的一种电子设备的硬件结构示意图;
图5为本申请实施例提供的一种图像处理方法的流程图;
图6为本申请实施例提供的一种手机的拍照的取景界面示意图;
图7A为本申请实施例提供的另一种图像处理方法的流程图;
图7B为本申请实施例提供的一种确定第T帧图像的最终LUT(即第一LUT)的原理示意图;
图7C为本申请实施例提供的另一种图像处理方法的流程图;
图7D为本申请实施例提供的一种确定第T帧图像的最终LUT(即第一LUT)的原理示意图;
图7E为本申请实施例提供的另一种手机的拍照的取景界面示意图;
图7F为本申请实施例提供的另一种手机的拍照的取景界面示意图;
图8为本申请实施例提供的一种手机的录像的取景界面示意图;
图9为本申请实施例提供的另一种手机的录像的取景界面示意图;
图10为本申请实施例提供的另一种手机的录像的取景界面示意图;
图11A为本申请实施例提供的另一种图像处理方法的流程图;
图11B为本申请实施例提供的另一种确定第T帧图像的最终LUT(即第一LUT)的原理示意图;
图11C为本申请实施例提供的另一种确定第T帧图像的最终LUT(即第一LUT)的原理示意图;
图12A为本申请实施例提供的另一种图像处理方法的流程图;
图12B为本申请实施例提供的另一种确定第T帧图像的最终LUT(即第一LUT)的原理示意图;
图12C为本申请实施例提供的另一种确定第T帧图像的最终LUT(即第一LUT)的原理示意图;
图13为本申请实施例提供的另一种图像处理方法的流程图;
图14A为本申请实施例提供的另一种手机的录像的取景界面示意图;
图14B为本申请实施例提供的另一种手机的录像的取景界面示意图;
图15A为本申请实施例提供的另一种确定第T帧图像的最终LUT(即第一LUT)的原理示意图;
图15B为本申请实施例提供的另一种确定第T帧图像的最终LUT(即第一LUT)的原理示意图;
图16A为本申请实施例提供的另一种手机的录像的取景界面示意图;
图16B为本申请实施例提供的另一种手机的录像的取景界面示意图;
图17A为本申请实施例提供的另一种确定第T帧图像的最终LUT(即第四LUT)的原理示意图;
图17B为本申请实施例提供的另一种确定第T帧图像的最终LUT(即第四LUT)的原理示意图;
图18A为本申请实施例提供的另一种手机的录像的取景界面示意图;
图18B为本申请实施例提供的另一种手机的录像的取景界面示意图;
图18C为本申请实施例提供的另一种手机的录像的取景界面示意图;
图19为本申请实施例提供的一种芯片系统的结构示意图。
具体实施方式
以下,术语“第一”、“第二”仅用于描述目的,而不能理解为指示或暗示相对重要性或者隐含指明所指示的技术特征的数量。由此,限定有“第一”、“第二”的特征可以明示或者隐含地包括一个或者更多个该特征。在本实施例的描述中,除非另有说明,“多个”的含义是两个或两个以上。
为了便于理解,本申请实施例这里介绍本申请实施例涉及的术语:
(1)红绿蓝(Red Green Blue,RGB):三原色RGB包括红(Red)、绿(Green)、蓝(Blue)。将这三种颜色的光按照不同比例混合,就可以得到丰富多彩的色彩。
摄像头采集的图像是由一个个像素构成的,每个像素都是由红色子像素、绿色子像素和蓝色子像素构成的。假设R、G、B三者的取值范围为0-255,如RGB(255,0,0)表示纯红色,Green(0,255,0)表示纯绿色,Blue(0,0,255)表示纯蓝色。总之,这三种颜色按照不同比例混合,就可以得到丰富多彩的色彩。
(2)颜色查找表(LUT):也可以称为LUT文件或者LUT参数,是一种红绿蓝(Red  Green Blue,RGB)的映射表。
一张图像包括很多像素,每个像素由RGB值表示。电子设备的显示屏可以根据该图像中每个像素点的RGB值来显示该图像。也就是说,这些RGB值会告诉显示屏如何发光,以混合出各种各样的色彩呈现给用户。如果想要改变该图像的色彩(或者风格、效果),则可以调整这些RGB值即可。
LUT是一种RGB的映射表,用于表征调整前后的RGB值的对应关系。例如,请参考图1,其示出一种LUT的示例。
表1
Figure PCTCN2022090630-appb-000001
当原始RGB值为(14,22,24)时,经过表1所示的LUT的映射,输出RGB值为(6,9,4,)。当原始RGB值为(61,34,67)时,经过表1所示的LUT的映射,输出RGB值为(66,17,47)。当原始RGB值为(94,14,171)时,经过表1所示的LUT的映射,输出RGB值为(117,82,187)。当原始RGB值为(241,216,222)时,经过表1所示的LUT的映射,输出RGB值为(255,247,243)。
需要说明的是,针对同一张图像,未采用LUT处理过的图像的显示效果与采用LUT处理过的图像的显示效果不同;采用不同的LUT处理同一张图像,可以得到不同风格的显示效果。本申请实施例中所述的图像的“显示效果”是指图像被显示屏显示后,可以被人眼观察到的图像效果。
例如,图1所示的LUT 1、LUT 2和LUT 3是不同的LUT。采用LUT 1处理摄像头采集的原图100,可得到图1所示的图像101。采用LUT 2处理原图100,可得到图1所示的图像102。采用LUT 3处理原图100,可得到图1所示的图像103。对比图1所示的图像101、图像102和图像103可知:图像101、图像102和图像103的显示效果不同。
常规技术中,手机拍摄(如拍照和录像)时,只能采用拍摄前预先配置的LUT、用户选择的LUT或者识别预览图像确定的LUT来处理预览图像。
示例性的,在拍照场景下,手机响应于用户对相机应用的图标的点击操作,可以显示图2中的(a)所示的拍照的取景界面201。该拍照的取景界面201可以包括摄像头采集的预览图像202和AI拍摄开关203。该预览图像202是未经过LUT处理的图像。AI拍摄开关203用于触发手机识别预览图像202对应的拍摄场景。手机可接收用户对AI拍摄开关 203的点击操作。响应于用户对AI拍摄开关203的点击操作,手机可以识别预览图像202对应的拍摄场景(如人物场景)。
其中,手机中可以保存多个预置LUT,每个预置LUT对应一种拍摄场景。例如,手机中可以保存人物场景对应的预置LUT、美食场景对应的预置LUT、植物场景对应的预置LUT、动物场景对应的预置LUT,以及大海场景对应的预置LUT等。应注意,采用每个拍摄场景对应的LUT处理该拍摄场景的图像,可以提升该拍摄场景下的显示效果。
然后,手机可以采用识别到的拍摄场景对应的预置LUT处理该预览图像202。例如,手机采用上述摄场景对应的预置LUT处理该预览图像202,可以得到图2中的(b)所示的预览图像205。具体的,响应于用户对AI拍摄开关203的点击操作,手机可以显示图2中的(b)所示的拍照的取景界面204,该拍照的取景界面204包括预览图像205。
示例性的,在录像场景下,手机可显示图3中的(a)所示的录像的取景界面301。该录像的取景界面301可以包括摄像头采集的预览图像303和拍摄风格选项302。该预览图像303是未经过LUT处理的图像。
然后,手机可接收用户对拍摄风格选项302的点击操作。响应于用户对拍摄风格选项302的点击操作,手机可以显示图3中的(b)所示的风格选择界面304,该风格选择界面304用于提示用户选择录像的拍摄风格/效果。例如,风格选择界面304可以包括提示信息“请选择您需要的拍摄风格/效果”304。该风格选择界面304还可以包括多个风格的选项,如原图选项、**风格的选项、##风格的选项和&&风格的选项。每个风格的选项用于一种预置LUT,用于触发手机采用对应的预置LUT处理录像的预览图像。
举例来说,上述多个风格(如**风格、##风格和&&风格等)可以包括:自然风格、灰调风格、油画风格、黑白风格、旅行风格、美食风格、风景风格、人物风格、宠物风格或者静物风格等。
例如,以用户选择图3中的(b)所示的##风格的选项为例。手机响应于用户对图3中的(b)所示的##风格的选项的选择操作,可以采用##风格对应的预置LUT处理录像的预览图像306,如手机可显示图3中的(c)所示的录像的取景界面305。该录像的取景界面305可以包括预览图像306。
应注意,图3中的(b)所示的原图选项对应未采用LUT处理过的图像,**风格的选项对应采用**风格的LUT处理过的图像,##风格的选项对应采用##风格的LUT处理过的图像,&&风格的选项对应采用&&风格的LUT处理过的图像。图3中的(b)所示的四张图像的显示效果不同。
综上所述,采用常规技术的方案,只能采用拍摄前预先配置的LUT、用户选择的LUT或者识别预览图像确定的LUT来处理预览图像。如此,手机只能拍摄得到上述预先配置的LUT、用户选择的LUT或者识别预览图像确定的LUT对应的风格或显示效果的照片或视频。手机拍摄的照片或视频的风格或显示效果单一,无法满足当下用户多样化的拍摄需求。
本申请实施例提供一种图像处理方法,可以应用于包括摄像头的电子设备。该电子设备可以确定摄像头采集的一帧第一图像对应的场景(即第一场景)。然后,电子设备可以确定该第一场景对应的第一LUT。最后,电子设备可以采用这一帧图像的第一LUT,对该第一图像进行图像处理得到第二图像,并显示该第二图像。其中,第二图像的显示效果与第一LUT对应的显示效果相同。
采用本方案,电子设备在拍照或录像过程中,可以根据电子设备所获取的每一帧图像动态调整LUT。这样,在拍照或录像过程中,便可以呈现出不同LUT对应的显示效果或风格,可以丰富拍照或录像得到的显示效果。
示例性的,本申请实施例中的电子设备可以为便携式计算机(如手机)、平板电脑、笔记本电脑、个人计算机(personal computer,PC)、可穿戴电子设备(如智能手表)、增强现实(augmented reality,AR)\虚拟现实(virtual reality,VR)设备、车载电脑等,以下实施例对该电子设备的具体形式不做特殊限制。
以上述电子设备是手机为例。请参考图4,其示出本申请实施例提供的一种电子设备100的结构示意图。该电子设备100可以包括处理器110,外部存储器接口120,内部存储器121,通用串行总线(universal serial bus,USB)接口130,充电管理模块140,电源管理模块141,电池142,天线1,天线2,移动通信模块150,无线通信模块160,音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,传感器模块180,按键190,马达191,指示器192,摄像头193,显示屏194,以及用户标识模块(subscriber identification module,SIM)卡接口195等。
其中,上述传感器模块180可以包括压力传感器,陀螺仪传感器,气压传感器,磁传感器,加速度传感器,距离传感器,接近光传感器,指纹传感器180A,温度传感器,触摸传感器180B,环境光传感器,骨传导传感器等。
可以理解的是,本发明实施例示意的结构并不构成对电子设备100的具体限定。在本申请另一些实施例中,电子设备100可以包括比图示更多或更少的部件,或者组合某些部件,或者拆分某些部件,或者不同的部件布置。图示的部件可以以硬件,软件或软件和硬件的组合实现。
处理器110可以包括一个或多个处理单元,例如:处理器110可以包括应用处理器(application processor,AP),调制解调处理器,图形处理器(graphics processing unit,GPU),图像信号处理器(image signal processor,ISP),控制器,存储器,视频编解码器,数字信号处理器(digital signal processor,DSP),基带处理器神经网络处理器(neural-network processing unit,NPU),和/或微控制单元(micro controller unit,MCU)等。其中,不同的处理单元可以是独立的器件,也可以集成在一个或多个处理器中。
其中,控制器可以是电子设备100的神经中枢和指挥中心。控制器可以根据指令操作码和时序信号,产生操作控制信号,完成取指令和执行指令的控制。
处理器110中还可以设置存储器,用于存储指令和数据。在一些实施例中,处理器110中的存储器为高速缓冲存储器。该存储器可以保存处理器110刚用过或循环使用的指令或数据。如果处理器110需要再次使用该指令或数据,可从所述存储器中直接调用。避免了重复存取,减少了处理器110的等待时间,因而提高了系统的效率。
在一些实施例中,处理器110可以包括一个或多个接口。接口可以包括集成电路(inter-integrated circuit,I2C)接口,串行外设接口(serial peripheral interface,SPI),集成电路内置音频(inter-integrated circuit sound,I2S)接口,脉冲编码调制(pulse code modulation,PCM)接口,通用异步收发传输器(universal asynchronous receiver/transmitter,UART)接口,移动产业处理器接口(mobile industry processor interface,MIPI),通用输入输出(general-purpose input/output,GPIO)接口,用户标识模块(subscriber identity module,SIM) 接口,和/或通用串行总线(universal serial bus,USB)接口等。
可以理解的是,本发明实施例示意的各模块间的接口连接关系,只是示意性说明,并不构成对电子设备100的结构限定。在本申请另一些实施例中,电子设备100也可以采用上述实施例中不同的接口连接方式,或多种接口连接方式的组合。
充电管理模块140用于从充电器接收充电输入。电源管理模块141用于连接电池142,充电管理模块140与处理器110。电源管理模块141接收电池142和/或充电管理模块140的输入,为处理器110,内部存储器121,外部存储器,显示屏194,摄像头193,和无线通信模块160等供电。在另一些实施例中,电源管理模块141和充电管理模块140也可以设置于同一个器件中。
电子设备100的无线通信功能可以通过天线1,天线2,移动通信模块150,无线通信模块160,调制解调处理器以及基带处理器等实现。天线1和天线2用于发射和接收电磁波信号。电子设备100中的每个天线可用于覆盖单个或多个通信频带。不同的天线还可以复用,以提高天线的利用率。例如:可以将天线1复用为无线局域网的分集天线。在另外一些实施例中,天线可以和调谐开关结合使用。
移动通信模块150可以提供应用在电子设备100上的包括2G/3G/4G/5G等无线通信的解决方案。无线通信模块160可以提供应用在电子设备100上的包括无线局域网(wireless local area networks,WLAN)(如Wi-Fi网络),蓝牙(blue tooth,BT),全球导航卫星系统(global navigation satellite system,GNSS),调频(frequency modulation,FM),NFC,红外技术(infrared,IR)等无线通信的解决方案。
电子设备100通过GPU,显示屏194,以及应用处理器等实现显示功能。GPU为图像处理的微处理器,连接显示屏194和应用处理器。GPU用于执行数学和几何计算,用于图形渲染。处理器110可包括一个或多个GPU,其执行程序指令以生成或改变显示信息。
显示屏194用于显示图像,视频等。该显示屏是触摸屏。在一些实施例中,电子设备100可以包括1个或N个显示屏194,N为大于1的正整数。
电子设备100可以通过ISP,摄像头193,视频编解码器,GPU,显示屏194以及应用处理器等实现拍摄功能。ISP用于处理摄像头193反馈的数据。摄像头193用于捕获静态图像或视频。在一些实施例中,电子设备100可以包括1个或N个摄像头193,N为大于1的正整数。
NPU为神经网络(neural-network,NN)计算处理器,通过借鉴生物神经网络结构,例如借鉴人脑神经元之间传递模式,对输入信息快速处理,还可以不断的自学习。通过NPU可以实现电子设备100的智能认知等应用,例如:贴膜状态识别,图像修复、图像识别,人脸识别,语音识别,文本理解等。
外部存储器接口120可以用于连接外部存储卡,例如Micro SD卡,实现扩展电子设备100的存储能力。外部存储卡通过外部存储器接口120与处理器110通信,实现数据存储功能。例如将音乐,视频等文件保存在外部存储卡中。
内部存储器121可以用于存储计算机可执行程序代码,所述可执行程序代码包括指令。处理器110通过运行存储在内部存储器121的指令,从而执行电子设备100的各种功能应用以及数据处理。内部存储器121可以包括存储程序区和存储数据区。其中,存储程序区可存储操作系统,至少一个功能所需的应用程序(比如声音播放功能,图像播放功能等)等。 存储数据区可存储电子设备100使用过程中所创建的数据(比如音频数据,电话本等)等。此外,内部存储器121可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件,闪存器件,通用闪存存储器(universal flash storage,UFS)等。
电子设备100可以通过音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,以及应用处理器等实现音频功能。例如音乐播放,录音等。
指纹传感器180A用于采集指纹信息。电子设备100可以利用采集的指纹信息的指纹特性进行用户身份校验(即指纹识别),以实现指纹解锁,访问应用锁,指纹拍照,指纹接听来电等。
触摸传感器180B,也称“触控面板(TP)”。触摸传感器180B可以设置于显示屏194,由触摸传感器180B与显示屏194组成触摸屏,也称“触控屏”。触摸传感器180B用于检测作用于其上或附近的触摸操作。触摸传感器可以将检测到的触摸操作传递给应用处理器,以确定触摸事件类型。可以通过显示屏194提供与触摸操作相关的视觉输出。在另一些实施例中,触摸传感器180B也可以设置于电子设备100的表面,与显示屏194所处的位置不同。
按键190包括开机键,音量键等。马达191可以产生振动提示。指示器192可以是指示灯,可以用于指示充电状态,电量变化,也可以用于指示消息,未接来电,通知等。SIM卡接口195用于连接SIM卡。
本申请实施例提供一种图像处理方法,该方法可以应用于包括摄像头和显示屏(如触摸屏)电子设备。以上述电子设备是手机为例,如图5所示,该图像处理方法可以包括S501-S504。
S501、手机获取第一图像。该第一图像是手机的摄像头采集的图像,该第一图像包括第一拍摄对象。
在本申请实施例的应用场景(1)中,手机可以在手机拍照的预览界面采集第一图像。例如,手机可以显示图6中的(a)所示的预览界面601。该预览界面601包括手机的摄像头采集的第一图像602。该第一图像602是未采用LUT处理的图像。
在本申请实施例的应用场景(2)中,手机可以在手机录像前的预览界面采集第一图像。例如,手机可以显示图8中的(a)所示的预览界面801。该预览界面801包括手机的摄像头采集的第一图像802。该第一图像802是未采用LUT处理的图像。
在本申请实施例的应用场景(3)中,手机可以在手机正在录像的取景界面(也称为预览界面)采集第一图像。例如,图10中的(a)所示的录像的取景界面1001为还未开始录像的取景界面,取景界面1001包括预览图像1002。响应于用户在图10中的(a)所示取景界面1001的录像操作,手机可以显示图10中的(b)所示的预览界面1003。该预览界面1003包括手机的摄像头采集的第一图像1004。该第一图像1004是未采用LUT处理的图像。
需要说明的是,上述第一图像可以是手机的摄像头采集的图像。例如,该第一图像可以是手机的摄像头采集到的原始图像,该第一图像未经过ISP的图像处理。或者,第一图像可以是由手机的摄像头采集的图像得到的预览图像。例如,该第一图像可以是对手机的摄像头采集的原始图像,进行图像处理后的预览图像。
S502、手机确定第一图像对应的第一场景。其中,第一场景用于标识第一拍摄对象对 应的场景。
S503、手机根据第一场景确定第一LUT。
在本申请实施例中,手机中可以预先配置多个第三LUT。该多个第三LUT也可以称为多个预置LUT。该多个第三LUT用于对摄像头采集的预览图像进行处理得到不同显示效果的图像,每个第三LUT对应一种拍摄场景下的显示效果。例如,如图1所示,图像101是采用LUT 1(即第三LUT 1,也称为预置LUT 1)处理原图100得到的,图像102是采用LUT 2(即第三LUT 2,也称为预置LUT 2)处理原图100得到的,图像103是采用LUT 3(即第三LUT 3,也称为预置LUT 3)处理原图100得到的。对比图像101、图像102和图像103呈现出不同的显示效果。也就是说,预置LUT 1、预置LUT 2和预置LUT3可以对应不同的显示效果或风格。
本申请实施例中,不同的显示效果可以是不同拍摄场景下的显示效果。例如,该拍摄场景可以为:人物场景、旅行场景、美食场景、风景场景、宠物场景或者静物场景等。应注意,本申请实施例中所述的拍摄场景与显示效果或风格一一对应。在不同的拍摄场景下,可以采用对应的LUT处理预览图像得到相应的显示效果或风格。因此,手机可以识别第一图像,确定第一图像对应的拍摄场景(即第一场景)。然后,手机可以根据第一场景确定第一LUT。
由上述描述可知,该拍摄场景可以为人物场景、旅行场景、美食场景、风景场景、宠物场景或者静物场景等。不同的拍摄场景下采集的图像中的拍摄对象不同。例如,人物场景中采集的图像可以包括人物的图像,美食场景中采集的图像可以包括美食的图像。因此,本申请实施例中,手机可以识别第一图像中包括的拍摄对象,来确定该第一图像对应的拍摄场景。
其中,手机可以采用预先配置的图像拍摄场景检测算法,识别第一图像,以识别出该第一图像对应的拍摄场景(即第一拍景)。例如,以第一图像是图6中的(a)所示的第一图像602为例。手机识别第一图像602,可以识别出该第一图像602对应的拍摄场景(即第一场景)为人物场景。如此,手机则可以将人物场景对应的第三LUT确定为第一LUT。
需要说明的是,手机识别第一图像对应的第一场景的方法,可以参考常规技术中的相关方法,本申请实施例这里不予赘述。上述图像拍摄场景检测算法的具体示例可以参考常规技术中的相关算法,本申请实施例这里不予赘述。
在一些实施例中,如图7A所示,S503可以包括S503a。
S503a:手机将多个第三LUT中第一场景对应的第三LUT,确定为第T帧图像(即第一图像)的第一LUT。
本申请实施例这里以第T帧第一图像是图6中的(a)所示的第一图像602为例,结合图7B介绍手机执行S502-S503(包括S503a),确定第一LUT的方法。
如图7B所示,手机可以对第一图像602执行场景检测,识别出第一图像602对应的第一场景(如人物场景)。然后,手机可以执行LUT选择(即LUT Select),从多个第三LUT(如第三LUT 1、第三LUT 2和第三LUT 3等第三LUT)中选择出人物场景对应的第一LUT。
在另一些实施例中,手机在确定最终LUT时,不仅参考可以当前一帧图像(即第一图像),还参考了第一图像的前一帧图像的最终LUT。这样,可以在改变LUT的过程中,实 现不同LUT对应的显示效果或风格的平滑过渡,可以优化电子设备呈现的多帧预览图像的显示效果,提升用户拍照或录像过程中的视觉体验。
具体的,如图7C所示,S503可以包括S503A-S503B。
S503A:手机将多个第三LUT中第一场景对应的第三LUT,确定为第一图像的第四LUT。
S503B:手机计算第一图像的第四LUT和第五图像的第一LUT的加权和,得到第一LUT。其中,第五图像是第一图像的前一帧图像。手机在本次拍摄过程中采集的第1帧第一图像的前一帧图像的第三LUT是预设LUT。
其中,手机在拍照模式或录像模式下,手机的摄像头可以实时采集图像,并输出采集的每一帧图像。例如,若第一图像是手机采集的第2帧图像,则第五图像是手机采集的第1帧图像。若第一图像是手机采集的第T帧图像,则第五图像是手机采集的第T-1帧图像,T≥2,T为整数。
在一些实施例中,手机可以采用第一加权系数P 1和第二加权系数P 2,计算第T帧图像(即第一图像)的第四LUT和第T-1帧图像(即第五图像)的第一LUT的加权和,得到第T帧图像(即第一图像)的第一LUT。该第一加权系数P 1和第二加权系数P 2也可以统称为时域平滑权重。
其中,该第一加权系数P 1是第T帧图像的第四LUT的加权系数,第二加权系数P 2是第T-1帧图像的第一LUT的加权系数。上述第一加权系数P 1和第二加权系数P 2之和等于1,即P 1+P 2=1。上述第一加权系数P 1和第二加权系数P 2可以预置在手机中。
示例性的,本申请实施例中,可以将第T帧图像的第四LUT记为Q (T,2),可以将第T-1帧图像的第一LUT记为Q (T-1,3),可以将第T帧图像的第一LUT记为Q (T,3)。第0帧图像的第一LUT为预设LUT。也就是说,Q (0,3)是预先设定的值。如此,手机便可以采用以下公式(1),计算第T帧图像的第一LUT,如Q (T,3)
Q (T,3)=P 1×Q (T,2)+P 2×Q (T-1,3)    公式(1)。
例如,在T=1的情况下,Q (0,3)、第一加权系数P 1和第二加权系数P 2为已知量。因此,手机可以采用公式(1),如Q (1,3)=P 1×Q (1,2)+P 2×Q (0,3),计算第1帧图像的第一LUT,如Q (1,3)
又例如,在T=2的情况下,Q (1,3)、第一加权系数P 1和第二加权系数P 2为已知量。因此,手机可以采用公式(1),如Q (2,3)=P 1×Q (2,2)+P 2×Q (1,3),计算第2帧图像的第一LUT,如Q (2,3)
又例如,在T=3的情况下,Q (2,3)、第一加权系数P 1和第二加权系数P 2为已知量。因此,手机可以采用公式(1),如Q (3,3)=P 1×Q (3,2)+P 2×Q (2,3),计算第3帧图像的第一LUT,如Q (3,3)
又例如,在T=4的情况下,Q (4,3)、第一加权系数P 1和第二加权系数P 2为已知量。因此,手机可以采用公式(1),如Q (4,3)=P 1×Q (4,2)+P 2×Q (3,3),计算第4帧图像的第一LUT,如Q (4,3)
如此,在T=n的情况下,Q (n,3)、第一加权系数P 1和第二加权系数P 2为已知量。因此,手机可以采用公式(1),如Q (n,3)=P 1×Q (n,2)+P 2×Q (n-1,3),计算第n帧图像的第一LUT,如Q (n,3)
需要说明的是,上述第一加权系数P 1(即第T帧图像的第四LUT的加权系数)越小,第二加权系数P 2(即第T-1帧图像的第一LUT的加权系数)越大,多帧第二图像的过渡效果越平滑。
本申请实施例这里以第T帧第一图像是图6中的(a)所示的第一图像602为例,结合图7D介绍手机执行S502-S503(包括S503A-S503B),确定第一LUT的方法。
如图7D所示,手机可以对第一图像602执行场景检测,识别出第一图像602对应的第一场景(如人物场景)。然后,手机可以执行LUT选择(即LUT Select),从多个第三LUT(如第三LUT 1、第三LUT 2和第三LUT 3等第三LUT)中选择出人物场景对应的第四LUT。最后,手机可以对第T帧图像(即第一图像)的第四LUT和第T-1帧图像(即第五图像)的第一LUT进行加权和(Blending),便可以得到第T帧图像的第一LUT。
在另一些实施例中,可以由用户设置第T帧图像(即第一图像)的第四LUT和第T-1帧图像(即第五图像)的第一LUT的加权系数。具体的,上述预览界面(如预览界面601、预览界面801或预览界面1003)还可以包括第一预设控件。该第一预设控件用于触发手机设置第T帧图像的第四LUT和第T-1帧图像的第一LUT的权重,即上述第一加权系数和第二加权系数。例如,如图7E中的(a)所示,预览界面701可以包括第一预设控件703,该第一预设控件703用于触发手机设置第T帧图像的第四LUT和第T-1帧图像的第一LUT的权重。该预览界面701还包括第一图像702。具体的,在上述S503B之前,本申请实施例的方法还可以包括S503'和S503〃。
S503'、手机响应于用户对该第一预设控件的点击操作,显示第一设置项和第二设置项。
其中,该第一设置项用于设置第T帧图像的第四LUT的第一加权系数,第二设置项用于设置第T-1帧图像的第一LUT的第二加权系数。
例如,响应于用户对图7E中的(a)所示的第一预设控件703的点击操作,手机可显示图7E中的(b)所示的预览界面704。该预览界面704包括第一预设控件705、第一图像706、第一设置项707和第二设置项708。该第一设置项707用于设置第T帧图像的第四LUT的第一加权系数。该第二设置项708用于设置第T-1帧图像的第一LUT的第二加权系数。其中,第一预设控件705与第一预设控件703处于不同的状态。如第一预设控件705处于开启状态,第一预设控件703处于关闭状态。
在一些实施例中,上述预览界面(如预览界面601、预览界面801或预览界面1003)可以包括上述第一预设控件,也可以不包括上述第一预设控件。在该实施例中,手机可以接收用户在预览界面输入的第一预设操作。上述S504'可以替换为:手机响应于用户在预览界面的第一预设操作,在预览界面显示第一设置项和第二设置项。例如,该第一预设操作可以为用户在手机的显示屏(如触摸屏)输入的L形手势、S形手势或者√形手势等任一种预设手势。又例如,该第一预设操作可以是用户对手机的第一物理按键的点击操作。该第一物理按键可以是手机中的一个物理按键,或者至少两个物理按键的组合按键。
S503〃、手机响应于用户对第一设置项和/或第二设置项的设置操作,将用户设置的第一加权系数作为第T帧图像的第四LUT的加权系数,将用户设置的第二加权系数作为第T-1帧图像的第一LUT的加权系数。该第一加权系数和第二加权系数可以统称为时域平滑权重。
其中,用户设置的加权系数(包括第一加权系数和第二加权系数)不同,则手机采用 用户设置的加权系数得到的第T帧图像的第一LUT。采用不同第T帧图像的第一LUT处理同一第一图像,可以得到不同的显示效果。在一些实施例中,手机还可以显示用户调整第一加权系数和第二加权系数后,采用第T帧图像的第一LUT处理后的显示效果。
例如,图7E中的(b)所示的第一设置项707对应的第一加权系数、图7F中的(a)所示的第一设置项710对应的第一加权系数、图7F中的(b)所示的第一设置项713对应的第一加权系数均不同。并且,图7E中的(b)所示的第二设置项708对应的第二加权系数、图7F中的(a)所示的第二设置项711对应的第二加权系数、图7F中的(b)所示的第二设置项714对应的第二加权系数均不同。因此,图7E中的(b)所示的预览图像706、图7F中的(a)所示的预览图像709和图7F中的(b)所示的预览图像712的显示效果均不同。如此,用户便可以根据调整后的显示效果,设置合适的加权系数。图7F中的(c)所示的715为采用图7F中的(b)所示的权重(即加权系数)确定的LUT处理后的图像。
示例性的,假设用户设置的第一加权系数为P 1',第二加权系数为P 2'。在该实施例中,可以将第T帧图像的第四LUT记为Q (T,2),可以将第T-1帧图像的第一LUT记为Q (T-1,3),可以将第T帧图像的第一LUT记为Q (T,3)。第0帧图像的第一LUT为预设LUT。也就是说,Q (0,3)是预先设定的值。如此,手机便可以采用以下公式(2),计算第T帧图像的第一LUT,如Q (T,3)
Q (T,3)=P 1'×Q (T,2)+P 2'×Q (T-1,3)    公式(2)。
例如,在T=1的情况下,Q (0,3)、第一加权系数P 1'和第二加权系数P 2'为已知量。因此,手机可以采用公式(2),如Q (1,3)=P 1'×Q (1,2)+P 2'×Q (0,3),计算第1帧图像的第一LUT,如Q (1,3)
又例如,在T=2的情况下,Q (1,3)、第一加权系数P 1'和第二加权系数P 2'为已知量。因此,手机可以采用上述公式(2),如Q (2,3)=P 1×Q (2,2)+P 2×Q (1,3),计算第2帧图像的第一LUT,如Q (2,3)
需要说明的是,手机拍摄或录像的过程中,用户随时可以触发手机执行上述S504'和S504“,重新设置第一加权系数和第二加权系数。例如,假设T=2之后,T=3之前,将第一加权系数设置为P 1“,第二加权系数设置为P 2“。之后,手机可以采用公式(3),计算第T帧图像的第一LUT,如Q (3,3)
例如,在T=3的情况下,Q (2,3)、第一加权系数P 1“和第二加权系数P 2“为已知量。因此,手机可以采用上述公式(3),如Q (3,3)=P 1“×Q (3,2)+P 2“×Q (2,3),计算第3帧图像的第一LUT,如Q (3,3)
又例如,在T=4的情况下,Q (4,3)、第一加权系数P 1和第二加权系数P 2为已知量。因此,手机可以采用上述公式(3),如Q (4,3)=P 1“×Q (4,2)+P 2“×Q (3,3),计算第4帧图像的第一LUT,如Q (4,3)
需要说明的是,上述第一加权系数(即第T帧图像的第四LUT的加权系数)越小,第二加权系数(即第T-1帧图像的第一LUT的加权系数)越大,多帧第二图像的过渡效果越平滑。
S504、手机根据第一LUT对第一图像进行处理得到第二图像,并显示所述第二图像。该第二图像的显示效果与第一图像的第一LUT对应。
示例性的,在上述应用场景(1)中,以第一图像是图6中的(a)所示的第一图像602 为例。手机执行S504,可以得到图6中的(b)所示的第二图像604,并显示图6中的(b)所示的预览界面603。该预览界面603包括采用第T帧图像的第一LUT处理得到的第二图像604。针对同一张图像,未采用LUT处理过的图像的显示效果与采用LUT处理过的图像的显示效果不同。例如,图6中的(a)所示的第一图像602未采用LUT处理过,图6中的(b)所示的第二图像604是采用LUT处理过的图像;第一图像602的显示效果与第二图像604的显示效果不同。本申请实施例中所述的图像的“显示效果”是指图像被显示屏显示后,可以被人眼观察到的图像效果。响应于用户对图6中的(b)所示的“拍摄快门”的点击操作,手机可以保存该第二图像604,显示图6中的(c)所示的拍照的预览界面605。该拍照的预览界面605包括预览图像606。
例如,本申请实施例这里结合图7D介绍S504。手机可以执行S504,采用图7D所示的时域平滑权重(包括上述第一加权系数和第二加权系数),计算第T帧图像的第四LUT和第T-1帧图像的第一LUT的加权和,得到图7D所示的第T帧第一LUT。然后,手机可以采用图7D所示的第T帧第一LUT,对摄像头采集的预览图像进行图像处理得到图7D所示的第二图像604。
示例性的,在上述应用场景(2)中,以第一图像是图8中的(a)所示的第一图像802为例。手机执行S504,可以得到图8中的(b)所示的第二图像804,并显示图8中的(b)所示的预览界面803。该预览界面803包括采用第T帧图像的第一LUT处理得到的第二图像804。其中,图8中的(b)所示的第二图像804的显示效果与图8中的(a)所示的第一图像802的显示效果不同。
手机拍照过程中,手机的摄像头的取景界面可能会发生较大变化。例如,用户可能会移动手机,使手机的取景内容发生变化。又例如,用户可能会切换手机的前后置摄像头,使手机的取景内容发生变化。如果手机的取景内容发生较大变化,执行本方案,手机的显示效果/风格可能会随着取景内容的变化而发生变化。
具体的,在S304之后,手机可以采集第三图像,该第三图像为手机的摄像头采集的图像,该第三图像包括第二拍摄对象;手机确定第二图像对应第二场景,该第二场景用于标识第二拍摄对象对应的场景;手机根据第二场景确定第二LUT;手机根据第二LUT对第三图像进行处理得到第四图像,并显示第四图像。该第四图像的显示效果与第二LUT对应。
例如,假设图8中的(b)所示预览图像804是前置摄像头采集的图像。手机响应于用户对图8中的(b)所示的摄像头切换选项的点击操作,可以切换使用后置摄像头采集图像,如手机可显示图9中的(a)所示的录像的取景界面901。该录像的取景界面901包括预览图像(可作为第四图像)902。作为第四图像的预览图像902可以是根据摄像头采集的第三图像进行处理得到的。由于预览图像902与预览图像804的图像内容发生了较大变化;因此,预览图像902与预览图像804的拍摄场景也可能发生了较大变化。例如,预览图像804的拍摄场景为人物场景(即第一场景),预览图像902的拍摄场景可能为美食场景(即第二场景)。如此,手机则可以自动调整LUT。例如,手机可以显示图9中的(b)所示的录像的取景界面903。该录像的取景界面903包括预览图像(可作为第四图像)904。其中,预览图像(可作为第四图像)904与预览图像(可作为第二图像)902的拍摄场景不同,预览图像904处理时所采用的LUT与预览图像902处理时所采用的LUT不同;因 此,预览图像904的显示效果与预览图像902的显示效果不同。
示例性的,在上述应用场景(3)中,以第一图像是图10中的(b)所示预览界面1003中的第一图像1004为例。手机执行S504,可以得到图10中的(c)所示的第二图像1006,并显示图10中的(b)所示的预览界面1005。该预览界面1005包括采用第T帧图像的第一LUT处理得到的第二图像1006。第二图像1006的显示效果与第一图像1004的显示效果不同。
本申请实施例提供的图像处理方法中,手机可以确定摄像头采集的一帧第一图像对应的场景(即第一场景)。然后,手机可以确定该第一场景对应的第一LUT。最后,手机可以采用这一帧图像的第一LUT,对该第一图像进行图像处理得到第二图像,并显示该第二图像。其中,第二图像的显示效果与第一LUT对应的显示效果相同。
采用本方案,采用本方案,手机在拍照或录像过程中,可以根据手机周期性获取的每一帧图像动态调整LUT。这样,在拍照或录像过程中,便可以呈现出不同LUT对应的显示效果或风格,可以丰富拍照或录像得到的显示效果。
并且,手机在确定最终LUT时,不仅参考了当前一帧图像,还参考了前一帧图像的最终LUT。这样,可以在改变LUT的过程中,实现不同LUT对应的显示效果或风格的平滑过渡,可以优化手机呈现的多帧预览图像的显示效果,提升用户拍照或录像过程中的视觉体验。
需要说明的是,摄像头采集的图像可能不只包括一种拍摄场景的图像,可能包括多种拍摄场景(称为复杂的拍摄场景)的图像。例如,如图9中的(a)所示,预览图像902中包括人物的图像、美食的图像和建筑的图像。在这种情况下,如果手机执行S503所示的方法,则只能将第一图像的第一场景对应的一个第三LUT作为第一LUT;或者,只能将第一图像的第一场景对应的一个第三LUT作为第四LUT来确定第一LUT。也就是说,在上述复杂的拍摄场景中,采用S503所示的方法,第一LUT只参考了第一图像的第一场景对应的一个第三LUT,而没有参考复杂的拍摄场景中除第一场景之外的其他拍摄场景对应的第三LUT。这样,可能会影响手机的显示效果。
基于此,在另一些实施例中,手机可以将第T帧图像(即第一图像)作为预设AI模型(如预设AI模型a)的输入,运行预设AI模型得到上述多个第三LUT的权重。然后,手机可以计算该多个第三LUT的加权和,便可以得到第一LUT。具体的,如图11A所示,上述S502-S503可以替换为S1101-S1102。
S1101、手机将第T帧图像(即第一图像)作为输入,运行预设AI模型a,得到多个第三LUT的多个第三加权系数。该多个第三加权系数之和为1,多个第三LUT与多个第三加权系数一一对应。
其中,上述预设AI模型a可以是用于进行LUT权重学习的神经网络模型。例如,该预设AI模型a可以是以下任一种神经网络模型:VGG-net、Resnet和Lenet。本申请实施例中,预设AI模型a的训练过程可以包括Sa和Sb。
Sa、手机获取多组数据对,每组数据对包括第六图像和第七图像,第六图像是处理第七图像得到的满足预设条件的图像。
其中,该预设条件具体可以为:处理后的显示效果(也称为显示效果)满足预先设定的标准显示效果。也就是说,上述第六图像相当于标准图,第七图像是未处理的原图。其 中,上述第六图像可以是对第七图像进行(photo shop,PS)得到的。应注意,上述多个多组数据对可以包括多个不同拍摄场景下的数据对。
Sb、手机将第七图像和第六图像作为输入样本,训练预设AI模型a,使得预设AI模型a具备确定采用何种权重对多个第三LUT求加权和得到的LUT处理该第七图像能够得到第六图像的显示效果的能力。
示例性的,手机将第七图像和第六图像作为输入样本输入预设AI模型a后,预设AI模型a可以重复执行以下操作(1)-操作(2),直至预设AI模型a处理第七图像得到的第八图像达到第六图像的显示效果,则表示预设AI模型a具备了上述能力。
操作(1):第七图像作为输入(Input),预设AI模型a采用多个第三LUT的权重,对第七图像(Input)进行处理得到第八图像(Output)。预设AI模型a第一次对第七图像(Input)进行处理得到第八图像(Output)时,所采用的权重是默认权重。该默认权重包括多个默认加权系数。多个默认加权系数与多个第三LUT一一对应。该多个默认加权系数预先配置在手机中。
操作(2):预设AI模型a采用梯度下降法,对比第八图像(Output)与第六图像(即标准图),更新操作(1)中的权重。
需要说明的是,开始训练预设AI模型a时候,上述多个默认加权系数可能都是相同的。随着训练的进行,预设AI模型a会逐渐调整多个第三LUT的权重,学习到确定采用何种权重对多个第三LUT求加权和得到的LUT处理该第二图像能够得到第一图像的显示效果的能力。
S1102、手机采用多个第三加权系数,计算多个第三LUT的加权和,得到第T帧图像的第一LUT。
示例性的,本申请实施例这里以第T帧图像(即第一图像)是图9中的(a)所示的第一图像902为例,结合图11B介绍手机执行S1101-S1102,确定第T帧图像的第一LUT的方法。以及,手机执行S504,得到第二图像的方法。
首先,手机可以执行S1101,将第一图像902作为输入,运行图11B所示的预设AI模型a,便可以得到图11B所示的多个第三加权系数,该多个第三加权系数之和为1,多个第三LUT与多个第三加权系数一一对应。例如,假设图11B所示的预设AI模型a输出M个第三加权系数,M≥2,M是整数。假设M个第三加权系数中,第三LUT 1(即预置LUT 1)对应的第三加权系数为K (T,1),第三LUT 2(即预置LUT 2)对应的第三加权系数为K (T,2),第三LUT 3(即预置LUT 3)对应的第三加权系数为K (T,3),第三LUT M(即预置LUT M)对应的第三加权系数为K (T,M)
然后,手机可以执行S1102,采用上述多个第三加权系数,按照以下公式(4)计算M个第三LUT的加权和,得到第T帧图像的第一LUT。本申请实施例中,可以将第T帧图像的第一LUT记为Q (T,m,3),可以将第三LUT m记为Q (T,m,1)。
Figure PCTCN2022090630-appb-000002
之后,手机可以执行S504,采用图11B所示的第T帧图像的第一LUT,对第一图像902进行图像处理得到图11B所示的第二图像904。
在该实施例中,针对复杂的拍摄场景,手机确定第T帧图像的第一LUT,不仅参考了 第一图像的第一场景对应的一个第三LUT,还参考了多个第三LUT中除第一场景之外的其他拍摄场景对应的第三LUT。这样,可以提升手机的显示效果。
在另一些实施例中,手机在确定最终LUT时,不仅参考可以当前一帧图像(即第一图像),还参考了第一图像的前一帧图像的最终LUT。这样,可以在改变LUT的过程中,实现不同LUT对应的显示效果或风格的平滑过渡,可以优化电子设备呈现的多帧预览图像的显示效果,提升用户拍照或录像过程中的视觉体验。
具体的,S1102可以包括:手机采用多个第三加权系数,计算多个第三LUT的加权和,得到第T帧图像的第四LUT;手机计算第T帧图像(即第一图像)的第四LUT与第T-1帧图像(即第五图像)的第一LUT的加权和,得到第T帧图像的第一LUT。请参考图11C,其示出本实施例中手机执行S1101-S1102确定第T帧图像的第一LUT的方法;以及手机执行S504得到第二图像的方法原理示意图。
在另一些实施例中,手机可以将第T帧图像(即第一图像)和第一图像的场景检测结果均作为AI模型(如预设AI模型b)的输入,运行AI模型得到上述多个第三LUT的权重。然后,手机可以计算该多个第三LUT的加权和,便可以得到第一LUT。具体的,如图12A所示,S503可以替换为S1201-S1202。
S1201、手机将第一场景的指示信息和第一图像(即第T帧图像)作为输入,运行预设AI模型b,得到多个第三LUT的多个第三加权系数。该多个第三加权系数之和为1,该多个第三LUT与多个第三加权系数一一对应。
其中,上述预设AI模型b可以是用于进行LUT权重学习的神经网络模型。例如,该预设AI模型b可以是以下任一种神经网络模型:VGG-net、Resnet和Lenet。本申请实施例中,预设AI模型b的训练过程可以包括Si、Sii和Siii。
Si、手机获取多组数据对,每组数据对包括第六图像和第七图像,第六图像是处理第七图像得到的满足预设条件的图像。
其中,Si与上述Sa相同,本申请实施例这里不予赘述。
Sii、手机识别第七图像,确定第二图像对应的第三场景。
其中,手机识别第七图像确定第七图像对应的第三场景的方法,可以参考手机识别第一图像对应的第一场景的方法,本申请实施例这里不予赘述。
Siii、手机将第七图像和第六图像,以及识别第三场景的指示信息作为输入样本,训练预设AI模型b,使得预设AI模型b具备确定采用何种权重对多个第三LUT求加权和得到的LUT处理第七图像能够得到第六图像的显示效果。
需要说明的是,与上述预设AI模型a不同的是,预设AI模型b的输入样本增加了第第图像对应的第三场景的指示信息。该预设AI模型b的训练原理与上述预设AI模型a的训练原理相同。不同的是,第七图像对应的第三场景的指示信息可以更加明确的指示第七图像对应的拍摄场景。
应理解,如果识别到第七图像的拍摄场景为第三场景,则表示该第七图像是第三场景的图像的可能性较高。那么,将第二拍摄对象对应的第三LUT的加权系数设置为较大值,有利于提升显示效果。由此可见,该第三场景的指示信息可以对预设AI模型b的训练起到引导的作用,引导预设AI模型b向倾向于该第三场景的方向训练。这样,可以加速预设AI模型b的收敛,减少预设AI模型b的训练次数。
S1202、手机采用多个第三加权系数,计算多个第三LUT的加权和,得到第T帧图像(即第一图像)的第一LUT。
示例性的,本申请实施例这里以第T帧图像(即第一图像)是图9中的(a)所示的第一图像902为例,结合图12B介绍手机执行S1201-S1202,确定第T帧图像的第一LUT的方法。以及,手机执行S504,得到第二图像的方法。
首先,手机可以执行S502,对第T帧图像(即第一图像)902进行场景检测结果,得到图12B所示的第一图像902对应的第一场景。
然后,手机可以执行S1201,将第一图像902和第一场景的指示信息作为输入,运行图12B所示的预设AI模型b,便可以得到图12B所示的多个第三加权系数。该多个第三加权系数之和为1,多个第三LUT与多个第三加权系数一一对应。例如,假设图12B所示的预设AI模型b输出M个第三加权系数,M≥2,M是整数。手机可以执行S1202,采用多个第三加权系数,计算M个第三LUT的加权和,得到第T帧图像的第一LUT。之后,手机可以执行S505,采用图12B所示的第T帧第一LUT,对第一图像902进行图像处理得到图12B所示的第二图像904。
在该实施例中,针对复杂的拍摄场景,手机确定第T帧图像的第一LUT,不仅参考了第一图像的第一场景对应的一个第三LUT,还参考了多个第三LUT中除第一场景之外的其他拍摄场景对应的第三LUT。并且,手机确定多个第三加权系数时,还参考了第一图像。这样,可以提升手机的显示效果。
在另一些实施例中,手机在确定最终LUT时,不仅可以参考当前一帧图像(即第一图像),还参考了第一图像的前一帧图像的最终LUT。这样,可以在改变LUT的过程中,实现不同LUT对应的显示效果或风格的平滑过渡,可以优化电子设备呈现的多帧预览图像的显示效果,提升用户拍照或录像过程中的视觉体验。
具体的,S1203可以包括:手机采用多个第三加权系数,计算多个第三LUT的加权和,得到第T帧图像的第四LUT;手机计算第T帧图像(即第一图像)的第四LUT与第T-1帧图像(即第五图像)的第一LUT的加权和,得到第T帧图像的第一LUT。请参考图12C,其示出本实施例中手机执行S1201-S1202确定第T帧图像的第一LUT的方法;以及手机执行S504得到第二图像的方法原理示意图。
在另一些实施例中,用户可以调整上述预设AI模型a或预设AI模型b输出的多个第三加权系数中的至少一个第三加权系数。也就是说,手机可以接收用户对上述多个第三加权系数的调整操作,采用用户调整后的多个第三加权系数,计算上述第T帧图像的第一LUT。具体的,在上述S1102或S1202之前,本申请实施例的方法还可以包括S1301-S1302。相应的,上述S1102或S1202可以替换为S1303。例如,如图13所示,在S1202之前,本申请实施例的方法还可以包括S1301-S1302。相应的,S1202可以替换为S1303。
S1301、手机响应于用户对第二预设控件的点击操作,显示多个第三设置项。每个第三设置项对应一个第三LUT,用于设置第三LUT的第三加权系数。
具体的,上述预览界面还可以包括第二预设控件。该第二预设控件用于触发手机显示所述多个第三加权系数的多个第三设置项,以便于用户可以通过该多个第三设置项设置上述多个第三LUT的权重。
示例性的,如图14A中的(a)所示,预览界面1401包括第二预设控件1402。响应于 用户对该第二预设控件1402的点击操作,如图14A中的(b)所示,手机可以在预览界面1403显示多个第三设置项1405,如“##风格(如人物场景)”设置项、“**风格(如美食场景)”设置项和“&&风格(如建筑场景)”设置项等。本申请实施例中,以第三设置项是图14A中的(a)所示的滚动条为例,介绍本申请实施例的方法。由上述实施例可知:每种拍摄风格和拍摄场景可以对应一种第三LUT。手机可以通过上述第三设置项设置对应第三LUT的权重(即加权系数)。
响应于用户对第二预设控件1402的点击操作,该第二预设控件1402的显示状态发生变化,如手机可显示图14A中的(b)所示的第二预设控件1406。第二预设控件1402对应的显示状态(如白底黑字的显示状态)用于指示第二预设控件处于关闭状态。第二预设控件1406对应的显示状态(如黑底白字的显示状态)用于指示第二预设控件处于开启状态。预览界面1403还包括第二图像1404。第二图像1404的显示效果为:采用多个第三设置项1405所示的多个第三加权系数进行加权和计算,最终得到的第T帧第四LUT处理第一图像得到的显示效果。
在一些实施例中,上述预览界面可以包括上述第二预设控件,也可以不包括上述第二预设控件。在该实施例中,手机可以接收用户在预览界面输入的第二预设操作。上述S1301可以替换为:手机响应于用户在预览界面的第二预设操作,在预览界面显示多个第三设置项。例如,该第二预设操作可以为用户在手机的显示屏(如触摸屏)输入的L形手势、S形手势或者√形手势等任一种预设手势。该第二预设操作对应的预设手势与第一预设操作对应的预设手势不同。又例如,该第二预设操作可以是用户对手机的第二物理按键的点击操作。该第一物理按键可以是手机中的一个物理按键,或者至少两个物理按键的组合按键。该第二物理按键与上述第一物理按键不同。
S1302、手机响应于用户对多个第三设置项中一个或多个第三设置项的设置操作,更新对应的第三加权系数。
例如,手机可以接收用户对图14A中的(b)所示的多个第三设置项1405的设置操作,显示图14B中的(a)所示的预览界面1407。该预览界面1407包括多个第三设置项1409。该多个第三设置项1409所示的多个第三加权系数与多个第三设置项1405所示的多个第三加权系数不同。也就是说,手机响应于用户对多个第三设置项1405的设置操作,将多个第三加权系数由多个第三设置项1405所示的第三加权系数更新为多个第三设置项1409所示的第三加权系数。
其中,预览界面1407还包括第二图像1408。第二图像1408的显示效果为:采用多个第三设置项1409所示的多个第三加权系数进行加权和计算,最终得到的第T帧第一LUT处理第一图像得到的显示效果。对比图14B中的(a)和图14A中的(b)可知:第二图像1408的显示效果与第二图像1404的显示效果不同。
又例如,手机可以接收用户对图14B中的(a)所示的多个第三设置项1409的设置操作,显示图14B中的(b)所示的预览界面1410。该预览界面1410包括多个第三设置项1412。该多个第三设置项1412所示的多个第三加权系数与多个第三设置项1409所示的多个第三加权系数不同。也就是说,手机响应于用户对多个第三设置项1409的设置操作,将多个第三加权系数由多个第三设置项1409所示的第三加权系数更新为多个第三设置项1412所示的第三加权系数。
其中,预览界面1410还包括第二图像1411。第二图像1411的显示效果为:采用多个第三设置项1412所示的多个第三加权系数进行加权和计算,最终得到的第T帧第一LUT处理第一图像得到的显示效果。对比图14B中的(b)和图14B中的(a)可知:第二图像1411的显示效果与第二图像1408的显示效果不同。
需要说明的是,手机执行S1302之后,手机可能会接收到用户对多个第三设置项中一个或多个第三设置项的设置操作。手机更新后的多个第三加权系数之和不一定为1。
其中,用户可以通过调整上述任一个第三设置项,实时调整上述多个第三加权系数。并且,用户可以观察调整多个第三加权系数后第二图像的显示效果,设置为多个第三LUT设置合适的加权系数。
在一些实施例中,手机可以接收用户对图14B中的(b)所示的第二预设控件1406的点击操作。响应于用户对第二预设控件1406的点击操作,手机可以隐藏上述多个第三设置项,显示图14B中的(c)所示的预览界面1413。该预览界面1413包括第二预览控件1402和第二图像1414。
S1303、手机采用更新后的多个第三加权系数,计算多个第三LUT的加权和,得到第T帧图像(即第一图像)的第一LUT。
示例性的,本申请实施例这里图15A介绍手机执行S1301-S1303,确定第T帧图像的第一LUT的方法。以及,手机执行S504,得到第二图像的方法。
手机将摄像头采集的第一图像作为输入执行S1101或S1202之后,便可以得到图15A所示的多个第三加权系数,如预设AI模型a或第二预先AI模型输出的多个第三加权系数。手机可以执行S1301-S1302,采用用户自定义的第三加权系数更新上述多个第三加权系数,得到更新的多个第三加权系数。然后,手机可以执行S1303,采用更新的多个第三加权系数,按照以下公式(5)计算M个第三LUT的加权和,得到第T帧图像的第一LUT。本申请实施例中,可以将第T帧图像的第一LUT记为Q (T,3),可以将第一LUT m记为Q (T,m,1)。
Figure PCTCN2022090630-appb-000003
其中,
Figure PCTCN2022090630-appb-000004
是第三LUT m(即预置LUT m)更新后的第三加权系数。
之后,手机可以执行S504,采用图15A所示的第T帧图像的第一LUT,对第一图像进行图像处理得到图15A所示的第二图像1411。
在该实施例中,针对复杂的拍摄场景,手机不仅可以通过预设AI模型a或预设AI模型b确定多个第三LUT的加权系数,还可以为用户提供调整该多个第三LUT的加权系数的服务。如此,手机便可以根据按照用户调整后的加权系数计算第T帧图像的第四LUT。这样,手机可以按照用户的需求拍摄出用户想要的照片或者视频,可以提升用户的拍摄体验。
在另一些实施例中,手机在确定最终LUT时,不仅参考可以当前一帧图像(即第一图像),还参考了第一图像的前一帧图像的最终LUT。这样,可以在改变LUT的过程中,实现不同LUT对应的显示效果或风格的平滑过渡,可以优化电子设备呈现的多帧预览图像的显示效果,提升用户拍照或录像过程中的视觉体验。
具体的,S1303可以包括:手机采用多个第三加权系数,计算多个第三LUT的加权和, 得到第T帧图像的第四LUT;手机计算第T帧图像(即第一图像)的第四LUT与第T-1帧图像(即第五图像)的第一LUT的加权和,得到第T帧图像的第一LUT。请参考图15B,其示出本实施例中手机执行S1301-S1303确定第T帧图像的第一LUT的方法;以及手机执行S504得到第二图像的方法原理示意图。
在另一些实施例中,用户可以在手机中新增LUT。例如,假设手机中预置了M个第三LUT。那么,手机可以响应于用户新增LUT的操作,在手机中增设第M+1个第三LUT、第+2个第三LUT等。具体的,本申请实施例的方法还可以包括S1601-S1603。
S1601、响应于用户的第二预设操作,手机显示第三预设控件。该第三预设控件用于触发手机新增LUT(即LUT对应的显示效果)。
其中,响应于上述第二预设操作,手机不仅可以显示多个第三设置项,还可以显示第三预设控件。例如,响应于第二预设操作,手机可显示图16A中的(a)所示的预览界面1601。该录像的预览界面1601包括第一图像1602和第三预设控件1603。该第三预设控件1603用于触发手机新增LUT,即新增LUT对应的显示效果。
S1602、响应于用户对第三预设控件的点击操作,手机显示一个或多个第四设置项,每个第四设置项对应一种第五LUT,每种第五LUT对应一种拍摄场景下的显示效果,该第五LUT与第三LUT不同。
例如,响应于用户对图16A中的(a)所示的第三预设控件1603的点击操作,手机可显示图16A中的(b)所示的预览界面1604。该预览界面1604包括一个或多个第四设置项,如“%%风格”设置项、“@@风格”设置项、“&^风格”设置项和“^^风格”设置项等。每个第四设置项对应一个第五LUT。
S1603、响应于用户对任一个第四设置项的选择操作,手机保存用户选择的第四设置项对应的第五LUT。
示例性的,响应于用户对图16A中的(b)所示的“@@风格”设置项的选择操作,手机可以保存该“@@风格”设置项对应的第五LUT。也就是说,该“@@风格”设置项对应的第五LUT可以作为一个第三LUT,用于手机执行S503确定第T帧图像的第一LUT。
例如,响应于用户对图16A中的(b)所示的“确定”按钮的点击操作,手机可以显示图16A中的(c)所示的预览界面1605。相比于图16A中的(a)所示的预览界面1601,图16A中的(c)所示的预览界面1605还包括“@@风格”对应的第四设置项。
在一些实施例中,上述每个第四设置项还包括采用对应第五LUT处理后的预览图像,用于呈现该第五LUT对应的显示效果。例如,如图16A中的(b)所示,“%%风格”设置项、“@@风格”设置项、“&^风格”设置项和“^^风格”设置项中均展示了采用对应第五LUT处理后的预览图像。
需要说明的是,上述第五LUT可以预先保存在手机中,但是该第五LUT并未应用于手机的照相应用。手机执行S1601-S1602之后,用户选择的第五LUT便可以应用于手机的照相应用。例如,“@@风格”设置项对应的第五LUT可以作为一个第三LUT,用于手机执行S503确定第T帧图像的第一LUT。
在另一些实施例中,手机不会提供上述多个第五LUT供用户选择,而是由用户自行设置需要的LUT。在该实施例中,响应于用户对第三预设控件的点击操作,手机可以显示第四界面。该第四界面包括RGB的LUT参数的三个调节选项,该三个调节选项用于设置新 增LUT。例如,响应于用户对图16A中的(a)所示的第三预设控件1603的点击操作,手机可显示图16B中的(a)所示的第四界面16007。该第四界面16007包括三个调节选项1608。
手机可以接收用户对三个调节选项1608的调整操作,响应于用户的调整操作,保存用户设置的新增LUT。例如,手机可以接收用户对三个调节选项1608的调整操作,显示图16B中的(b)所示的第四界面1609。第四界面1609中包括三个调节选项1610。三个调节选项1610对应的LUT与三个调节选项1608对应的LUT不同。响应于用户对图16B中的(b)所示的“确定”按钮的点击操作,手机可以保存三个调节选项1610对应的LUT(即新增LUT)。
需要说明的是,LUT(也称为3D LUT)是一个比较复杂的三维查找表。LUT的设置会涉及到很多参数(如亮度和颜色等)的调整。人工设置很难细化到LUT的每一个参数的调整。因此,本申请实施例中,可以使用全局调整的方式为用户提供LUT的新增功能。也就是说,上述RGB的LUT参数的三个调节选项1608和RGB的LUT参数的三个调节选项1610是一种支持全局调整的LUT三个调节选项。
本申请实施例这里介绍上述支持全局调整的LUT三个调节选项。首先,可以初始化一个初始LUT。该初始LUT的cube(输出值与输入值完全相同)。例如,表2示出一种初始LUT,表2所示的初始LUT的输出值与输入值完全相同,均为(10,20,30)。然后,可以对LUT三个调节选项的进度条的值进行归一化。例如,可以将“0”-“+100”可以归一化到[1.1,10.0],可以将“-100”-“0”可以归一化到[0.0,1.0]。最后,可以将归一化后的值作为颜色通道系数(如采用Rgain、Ggain、Bgain表示),乘在初始LUT的输入值上,便可以得到新增LUT的输出值。如此,便可以由表2所示的初始LUT,得到表3所示的新增LUT。
表2
Figure PCTCN2022090630-appb-000005
表3
Figure PCTCN2022090630-appb-000006
例如,假设图16B中的(a)所示的原图1611中一个像素点的RGB值为(10,20,30)。假设用户设置的图16B中的(b)所示的三个调节选项1608对应的进度条的值为(45,30,65)。手机可以将(45,30,65)中的每个值由“0”-“+100”归一化到[1.1,10.0],得到(5.0,3.7,5.8)。即Rgain=5.0,Ggain=3.7,Bgain=5.8。然后,手机可以采用Rgain、Ggain、Bgain分别乘以初始LUT的输入值,便可以得到新增LUT的输出值。例如,手机可以计算RGB值(10,20,30)与(5.0,3.7,5.8)中对应gain值的乘积,得到表4所示的新增LUT的GRB 的输出值(50,74,174)。其中,50=10*Rgain=10*5.0,74=20*Ggain=20*3.7=74,174=30*Bgain=30*5.8。
表4
Figure PCTCN2022090630-appb-000007
在另一些实施例中,上述第四界面还可以包括亮度系数滑动条、暗区亮度系数/亮区亮度系数滑动条、各通道灰阶曲线调整等更多用户设置项。本申请实施例这里不予赘述。
示例性的,结合图15A,手机还可以执行S1601-S1603,如图17A或图17B所示响应于用户新增LUT的操作,在多个第三LUT中新增第五LUT。
本申请实施例中,手机还可以响应于用户的操作,在手机中新增LUT。一般而言,新增LUT是用户按照自己的需求设置的,该新增LUT与用户的拍摄需求的契合度较高。如此,手机采用该新增LUT处理摄像头采集的图像,可以拍摄出用户满意度较高的照片或者视频,可以提升用户的拍摄体验。
在另一些实施例中,本申请实施例的方法可以应用于手机对手机图库(或者相册)中的照片或视频进行图像处理的场景(简称为:拍摄后的图像处理场景)中。
在拍摄后的图像处理场景中,手机响应于用户对相册中任一张照片预设操作,可以执行S501-S504,得到并显示第二图像。
例如,手机可以显示图18A中的(a)所示的相册列表界面1801,该相册列表界面1801包括多张照片的预览项。一般而言,手机可以响应于用户对相册列表界面1801中“小女孩”照片(相当于第一图像)的预览项1802的点击操作,可以直接显示该预览项1802对应的“小女孩”照片(相当于第一图像)。本申请实施例中,手机可以响应于用户对“小女孩”照片(相当于第一图像)的预览项1802的点击操作,可以执行S501-S504,得到并显示图18A中的(b)所示的第二图像1803。图18A中的(b)所示的照片的详情页不仅包括第二图像1803,还包括编辑按钮1804。该编辑按钮1804用于触发手机编辑第二图像1803。
或者,在拍摄后的图像处理场景中,用户可以在一张照片的编辑界面中触发手机执行S501-S504,得到并显示第二图像。
例如,手机可以显示图18B中的(a)所示的照片1805(即第一图像)的详情页。手机响应于用户对图18B中的(a)所示的编辑按钮1806的点击操作,可显示图18B中的(b)所示的编辑界面1807。该编辑界面1807包括“智能AI”按钮1808、“裁剪”按钮、“滤镜”按钮和“调节”按钮。“智能AI”按钮1809用于触发手机调整第一图像的LUT。“裁剪”按钮用于触发手机裁剪第一图像。“滤镜”按钮用于触发手机为第一图像添加滤镜效果。“调节”按钮用于触发手机调整第一图像的对比度、饱和度和亮度等参数。
响应于用户对“智能AI”按钮1809的点击操作,手机可执行S501-S504,得到并显示图18B中的(c)所示的第二图像1811。图18B中的(c)所示的编辑界面不仅包括第二图像1811,还包括保存按钮1810。该保存按钮1810用于触发手机保存第二图像1811。响应于用户对保存按钮1810的点击操作,手机可以保存第二图像907并显示图18C所示的第二 图像1811的照片详情页。
需要说明的是,手机对手机图库(或者相册)中的视频进行图像处理的方法,与手机对手机图库中的照片进行图像处理的方法类似,本申请实施例这里不予赘述。不同的是,手机需要处理视频中每一帧图像。
本申请实施例提供了一种电子设备,该电子设备可以包括:显示屏(如触摸屏)、摄像头、存储器和一个或多个处理器。该显示屏、摄像头、存储器和处理器耦合。该存储器用于存储计算机程序代码,该计算机程序代码包括计算机指令。当处理器执行计算机指令时,电子设备可执行上述方法实施例中手机执行的各个功能或者步骤。该电子设备的结构可以参考图4所示的电子设备400的结构。
本申请实施例还提供一种芯片系统,如图19所示,该芯片系统1900包括至少一个处理器1901和至少一个接口电路1902。
上述处理器1901和接口电路1902可通过线路互联。例如,接口电路1902可用于从其它装置(例如电子设备的存储器)接收信号。又例如,接口电路1902可用于向其它装置(例如处理器1901)发送信号。示例性的,接口电路1902可读取存储器中存储的指令,并将该指令发送给处理器1901。当所述指令被处理器1901执行时,可使得电子设备执行上述实施例中手机190执行的各个步骤。当然,该芯片系统还可以包含其他分立器件,本申请实施例对此不作具体限定。
本申请实施例还提供一种计算机存储介质,该计算机存储介质包括计算机指令,当所述计算机指令在电子设备上运行时,使得该电子设备执行上述方法实施例中手机执行的各个功能或者步骤。
本申请实施例还提供一种计算机程序产品,当所述计算机程序产品在计算机上运行时,使得所述计算机执行上述方法实施例中手机执行的各个功能或者步骤。
通过以上实施方式的描述,所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,仅以上述各功能模块的划分进行举例说明,实际应用中,可以根据需要而将上述功能分配由不同的功能模块完成,即将装置的内部结构划分成不同的功能模块,以完成以上描述的全部或者部分功能。
在本申请所提供的几个实施例中,应该理解到,所揭露的装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,所述模块或单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个装置,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是一个物理单元或多个物理单元,即可以位于一个地方,或者也可以分布到多个不同地方。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。
所述集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可 以存储在一个可读取存储介质中。基于这样的理解,本申请实施例的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,该软件产品存储在一个存储介质中,包括若干指令用以使得一个设备(可以是单片机,芯片等)或处理器(processor)执行本申请各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(read only memory,ROM)、随机存取存储器(random access memory,RAM)、磁碟或者光盘等各种可以存储程序代码的介质。
以上内容,仅为本申请的具体实施方式,但本申请的保护范围并不局限于此,任何在本申请揭露的技术范围内的变化或替换,都应涵盖在本申请的保护范围之内。因此,本申请的保护范围应以所述权利要求的保护范围为准。

Claims (19)

  1. 一种图像处理方法,其特征在于,所述方法包括:
    电子设备获取第一图像,所述第一图像为所述电子设备的摄像头采集的图像,所述第一图像包括第一拍摄对象;
    所述电子设备确定所述第一图像对应的第一场景,其中,所述第一场景用于标识所述第一拍摄对象对应的场景;
    所述电子设备根据所述第一场景确定第一LUT;
    所述电子设备根据所述第一LUT对所述第一图像进行处理得到第二图像,并显示所述第二图像,所述第二图像的显示效果与所述第一LUT对应。
  2. 根据权利要求1所述的方法,其特征在于,在显示所述第二图像之后,所述方法还包括:
    所述电子设备采集第三图像,所述第三图像为所述电子设备的摄像头采集的图像,所述第三图像包括第二拍摄对象;
    所述电子设备确定所述第二图像对应第二场景,其中,所述第二场景用于标识所述第二拍摄对象对应的场景;
    所述电子设备根据所述第二场景确定第二LUT,所述第二LUT与所述第一LUT不同;
    所述电子设备根据所述新的第二LUT对所述第三图像进行处理得到第四图像,并显示所述第四图像,所述第四图像的显示效果与所述第二LUT对应。
  3. 根据权利要求1或2所述的方法,其特征在于,所述电子设备获取第一图像,包括:
    所述电子设备在所述电子设备拍照的预览界面、所述电子设备录像前的预览界面或者所述电子设备正在录像的取景界面,采集所述第一图像。
  4. 根据权利要求1-3中任一项所述的方法,其特征在于,所述第一图像是所述电子设备的摄像头采集的图像;或者,所述第一图像是由所述电子设备的摄像头采集的图像得到的预览图像。
  5. 根据权利要求1-4中任一项所述的方法,其特征在于,所述电子设备根据所述第一场景确定第一LUT,包括:
    所述电子设备将多个第三LUT中所述第一场景对应的第三LUT,确定为所述第一图像的第一LUT;
    其中,所述多个第三LUT预先配置在所述电子设备中,用于对所述电子设备的摄像头采集的图像进行处理得到不同显示效果的图像,每个第一LUT对应一种场景下的显示效果。
  6. 根据权利要求1-4中任一项所述的方法,其特征在于,所述电子设备根据所述第一场景确定第一LUT,包括:
    所述电子设备将所述多个第三LUT中所述第一场景对应的第三LUT,确定为所述第一图像的第四LUT;其中,所述多个第三LUT预先配置在所述电子设备中,用于对所述电子设备的摄像头采集的图像进行处理得到不同显示效果的图像,每个第三LUT对应一种场景下的显示效果;
    所述电子设备计算所述第一图像的第四LUT和第五图像的第一LUT的加权和,得到所述第一LUT;其中,所述第五图像是所述第一图像的前一帧图像,所述电子设备在本次拍摄过程中采集的第1帧图像的前一帧图像的第三LUT是预设LUT。
  7. 根据权利要求6所述的方法,其特征在于,所述电子设备计算所述第一图像的第四LUT和第五图像的第一LUT的加权和,得到所述第一LUT,包括:
    所述电子设备采用预先配置的第一加权系数和第二加权系数,计算所述第一图像的第四LUT和所述第五图像的第一LUT的加权和,得到所述第一LUT;
    其中,所述第一加权系数是所述第一图像的第四LUT的加权系数,所述第二加权系数是所述第五图像的第一LUT的加权系数,所述第一加权系数和所述第二加权系数之和等于1;
    其中,所述第一加权系数越小,所述第二加权系数越大,多帧所述第二图像的过渡效果越平滑。
  8. 根据权利要求7所述的方法,其特征在于,在所述电子设备采用预先配置的第一加权系数和第二加权系数,计算所述第一图像的第四LUT和第五图像的第一LUT的加权和,得到所述第一LUT之前,所述方法还包括:
    所述电子设备响应于第一预设操作,显示第一设置项和第二设置项,所述第一设置项用于设置所述第一加权系数,所述第二设置项用于设置所述第二加权系数;
    所述电子设备响应于用户对所述第一设置项和/或所述第二设置项的设置操作,将用户设置的第一加权系数作为所述第一图像的第四LUT的加权系数,将用户设置的第二加权系数作为所述第五图像的第一LUT的加权系数;
    其中,所述第一预设操作是对所述电子设备显示的第一预设控件的点击操作,所述第一预设控件用于触发所述电子设备设置所述第一图像的第四LUT和所述第五图像的第一LUT的权重;或者,所述第一预设操作是用户对所述电子设备的第一物理按键的点击操作。
  9. 根据权利要求1-4中任一项所述的方法,其特征在于,所述电子设备根据所述第一场景确定第一LUT,包括:
    所述电子设备将所述第一场景的指示信息和所述第一图像作为输入,运行预设AI模型,得到多个第三LUT的多个第三加权系数;其中,所述多个第三加权系数之和为1,所述多个第三LUT与所述多个第三加权系数一一对应;
    所述电子设备采用所述多个第三加权系数,计算所述多个第三LUT的加权和,得到所述第一LUT。
  10. 根据权利要求1-4中任一项所述的方法,其特征在于,所述电子设备根据所述第一场景确定第一LUT,包括:
    所述电子设备将所述第一场景的指示信息和所述第一图像作为输入,运行预设AI模型,得到多个第三LUT的多个第三加权系数;其中,所述多个第三加权系数之和为1,所述多个第三LUT与所述多个第三加权系数一一对应;
    所述电子设备采用所述多个第三加权系数,计算所述多个第三LUT的加权和,得到所述第一图像的第四LUT;
    所述电子设备计算所述第一图像的第四LUT和第五图像的第一LUT的加权和, 得到所述第一LUT;其中,所述第五图像是所述第一图像的前一帧图像,所述电子设备在本次拍摄过程中采集的第1帧图像的前一帧图像的第三LUT是预设LUT。
  11. 根据权利要求10所述的方法,其特征在于,在所述电子设备根据所述第一场景确定第一LUT之前,所述方法还包括:
    所述电子设备获取多组数据对,每组数据对包括第六图像和第七图像,所述第六图像是处理所述第七图像得到的满足预设条件的图像;
    所述电子设备识别所述第七图像,确定所述第七图像对应的第三场景;
    所述电子设备将所述第七图像和所述第六图像,以及识别所述第三场景的指示信息作为输入样本,训练所述预设AI模型,使得所述预设AI模型具备确定采用何种权重对所述多个第三LUT求加权和得到的LUT处理所述第七图像能够得到所述第六图像的显示效果的能力。
  12. 根据权利要求9-11中任一项所述的方法,其特征在于,所述方法还包括:
    所述电子设备响应于第二预设操作,显示多个第三设置项;其中,每个第三设置项对应一个第三LUT,用于设置所述第三LUT的第三加权系数;
    所述电子设备响应于用户对所述多个第三设置项中一个或多个第三设置项的设置操作,更新对应的第三加权系数;其中,所述电子设备采用更新后的多个第三加权系数计算所述多个第三LUT的加权和;
    其中,所述第二预设操作是用户对第二预设控件的点击操作,所述第二预设控件用于触发所述电子设备设置所述多个第三LUT的权重;或者,所述第二预设操作是用户对所述电子设备中第二物理按键的点击操作。
  13. 根据权利要求1-12中任一项所述的方法,其特征在于,所述方法还包括:
    所述电子设备响应于第三预设操作,显示一个或多个第四设置项;其中,所述第三预设操作用于触发所述电子设备新增显示效果,每个第四设置项对应一种第五LUT,每种第五LUT对应一种拍摄场景下的显示效果,所述第五LUT与所述第三LUT不同;
    响应于用户对任一个第四设置项的选择操作,所述电子设备保存用户选择的第四设置项对应的第五LUT。
  14. 根据权利要求13所述的方法,其特征在于,所述第四设置项包括采用对应第五LUT处理后的预览图像,用于呈现所述第五LUT对应的显示效果。
  15. 一种图像处理方法,其特征在于,所述方法包括:
    电子设备获取第一图像,所述第一图像为所述电子设备的摄像头采集的图像,所述第一图像包括第一拍摄对象;
    所述电子设备将所述第一图像作为输入,运行预设人工智能AI模型,得到多个第二颜色查找表LUT的多个第三加权系数;其中,所述多个第三加权系数之和为1,所述多个第三LUT与所述多个第三加权系数一一对应;
    所述电子设备采用所述多个第三加权系数,计算所述多个第三LUT的加权和,得到所述第一LUT;
    所述电子设备根据所述第一LUT对所述第一图像进行处理得到第二图像,并显示所述第二图像,所述第二图像的显示效果与所述第一LUT对应。
  16. 根据权利要求15所述的方法,其特征在于,所述电子设备采用所述多个第三 加权系数,计算所述多个第三LUT的加权和,得到所述第一LUT,包括:
    所述电子设备采用所述多个第三加权系数,计算所述多个第三LUT的加权和,得到所述第一图像的第四LUT;
    所述电子设备计算所述第一图像的第四LUT和第五图像的第一LUT的加权和,得到所述第一LUT;其中,所述第五图像是所述第一图像的前一帧图像,所述电子设备在本次拍摄过程中采集的第1帧图像的前一帧图像的第三LUT是预设LUT。
  17. 根据权利要求15或16所述的方法,其特征在于,在所述电子设备将所述第一图像作为输入,运行预设AI模型,得到多个第三LUT的多个第三加权系数之前,所述方法还包括:
    所述电子设备获取多组数据对,每组数据对包括第六图像和第七图像,所述第六图像是处理所述第七图像得到的满足预设条件的图像;
    所述电子设备将所述第七图像和所述第六图像作为输入样本,训练所述预设AI模型,使得所述预设AI模型具备确定采用何种权重对所述多个第三LUT求加权和得到的LUT处理所述第七图像能够得到所述第六图像的显示效果的能力。
  18. 一种电子设备,其特征在于,所述电子设备包括存储器、显示屏、一个或多个摄像头和一个或多个处理器;所述存储器、所述显示屏、所述摄像头与所述处理器耦合;其中,所述摄像头用于采集图像,所述显示屏用于显示所述摄像头采集的图像或者所述处理器生成的图像,所述存储器中存储有计算机程序代码,所述计算机程序代码包括计算机指令,当所述计算机指令被所述处理器执行时,使得所述电子设备执行如权利要求1-17任一项所述的方法。
  19. 一种计算机可读存储介质,其特征在于,包括计算机指令,当所述计算机指令在电子设备上运行时,使得所述电子设备执行如权利要求1-17中任一项所述的方法。
PCT/CN2022/090630 2021-07-31 2022-04-29 一种图像处理方法及电子设备 WO2023010912A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP22797244.5A EP4152741A4 (en) 2021-07-31 2022-04-29 IMAGE PROCESSING METHOD AND ELECTRONIC DEVICE

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110877402.XA CN115633250A (zh) 2021-07-31 2021-07-31 一种图像处理方法及电子设备
CN202110877402.X 2021-07-31

Publications (2)

Publication Number Publication Date
WO2023010912A1 true WO2023010912A1 (zh) 2023-02-09
WO2023010912A9 WO2023010912A9 (zh) 2023-11-16

Family

ID=84901175

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/090630 WO2023010912A1 (zh) 2021-07-31 2022-04-29 一种图像处理方法及电子设备

Country Status (3)

Country Link
EP (1) EP4152741A4 (zh)
CN (1) CN115633250A (zh)
WO (1) WO2023010912A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117560552B (zh) * 2024-01-10 2024-05-31 荣耀终端有限公司 拍摄控制方法、电子设备及可读存储介质

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105323456A (zh) * 2014-12-16 2016-02-10 维沃移动通信有限公司 用于拍摄装置的图像预览方法、图像拍摄装置
CN107820020A (zh) * 2017-12-06 2018-03-20 广东欧珀移动通信有限公司 拍摄参数的调整方法、装置、存储介质及移动终端
CN109068056A (zh) * 2018-08-17 2018-12-21 Oppo广东移动通信有限公司 一种电子设备及其拍摄图像的滤镜处理方法、存储介质
US20200128173A1 (en) * 2017-03-23 2020-04-23 Samsung Electronics Co., Ltd Electronic device, and method for processing image according to camera photographing environment and scene by using same
WO2021052292A1 (zh) * 2019-09-18 2021-03-25 华为技术有限公司 视频采集方法和电子设备
CN112948048A (zh) * 2021-03-25 2021-06-11 维沃移动通信(深圳)有限公司 信息处理方法、装置、电子设备及存储介质
WO2021136091A1 (zh) * 2019-12-30 2021-07-08 维沃移动通信有限公司 闪光灯的补光方法和电子设备

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4304623B2 (ja) * 2005-06-01 2009-07-29 ソニー株式会社 撮像装置及び撮像装置における撮像結果の処理方法
US10847073B2 (en) * 2016-10-17 2020-11-24 Huawei Technologies Co., Ltd. Image display optimization method and apparatus
US20190205929A1 (en) * 2017-12-28 2019-07-04 Facebook, Inc. Systems and methods for providing media effect advertisements in a social networking system
WO2019160194A1 (ko) * 2018-02-14 2019-08-22 엘지전자 주식회사 이동 단말기 및 그 제어방법
CN110611776B (zh) * 2018-05-28 2022-05-24 腾讯科技(深圳)有限公司 特效处理方法、计算机设备和计算机存储介质
CN109741288B (zh) * 2019-01-04 2021-07-13 Oppo广东移动通信有限公司 图像处理方法、装置、存储介质及电子设备
CN111163350B (zh) * 2019-12-06 2022-03-01 Oppo广东移动通信有限公司 一种图像处理方法、终端及计算机存储介质
CN111416950B (zh) * 2020-03-26 2023-11-28 腾讯科技(深圳)有限公司 视频处理方法、装置、存储介质及电子设备

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105323456A (zh) * 2014-12-16 2016-02-10 维沃移动通信有限公司 用于拍摄装置的图像预览方法、图像拍摄装置
US20200128173A1 (en) * 2017-03-23 2020-04-23 Samsung Electronics Co., Ltd Electronic device, and method for processing image according to camera photographing environment and scene by using same
CN107820020A (zh) * 2017-12-06 2018-03-20 广东欧珀移动通信有限公司 拍摄参数的调整方法、装置、存储介质及移动终端
CN109068056A (zh) * 2018-08-17 2018-12-21 Oppo广东移动通信有限公司 一种电子设备及其拍摄图像的滤镜处理方法、存储介质
WO2021052292A1 (zh) * 2019-09-18 2021-03-25 华为技术有限公司 视频采集方法和电子设备
WO2021136091A1 (zh) * 2019-12-30 2021-07-08 维沃移动通信有限公司 闪光灯的补光方法和电子设备
CN112948048A (zh) * 2021-03-25 2021-06-11 维沃移动通信(深圳)有限公司 信息处理方法、装置、电子设备及存储介质

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP4152741A4

Also Published As

Publication number Publication date
CN115633250A (zh) 2023-01-20
EP4152741A4 (en) 2023-12-06
EP4152741A1 (en) 2023-03-22
WO2023010912A9 (zh) 2023-11-16

Similar Documents

Publication Publication Date Title
US20230396886A1 (en) Multi-channel video recording method and device
US11759143B2 (en) Skin detection method and electronic device
CN113810602B (zh) 一种拍摄方法及电子设备
EP4124019A1 (en) Video capturing method and electronic device
US20220319077A1 (en) Image-text fusion method and apparatus, and electronic device
WO2023020006A1 (zh) 基于可折叠屏的拍摄控制方法及电子设备
CN113965694B (zh) 录像方法、电子设备及计算机可读存储介质
WO2022242213A1 (zh) 一种刷新率调整方法和电子设备
CN112889027A (zh) 自动分屏的方法、图形用户界面及电子设备
US20240179397A1 (en) Video processing method and electronic device
WO2023241209A9 (zh) 桌面壁纸配置方法、装置、电子设备及可读存储介质
WO2023010912A1 (zh) 一种图像处理方法及电子设备
WO2022267861A1 (zh) 一种拍摄方法及设备
CN113965693B (zh) 一种视频拍摄方法、设备和存储介质
CN112269554B (zh) 显示系统及显示方法
CN113850709A (zh) 图像变换方法和装置
WO2023010913A1 (zh) 一种图像处理方法及电子设备
CN111885768A (zh) 调节光源的方法、电子设备和系统
CN114915722B (zh) 处理视频的方法和装置
WO2023051320A1 (zh) 更换电子设备屏幕壁纸的方法、装置和电子设备
CN115908596B (zh) 一种图像处理方法及电子设备
WO2024114257A1 (zh) 转场动效生成方法和电子设备
WO2022170918A1 (zh) 合拍方法和电子设备
CN114640743A (zh) 一种界面动效的显示方法及设备
CN117880410A (zh) 用于投屏显示的方法及电子设备

Legal Events

Date Code Title Description
ENP Entry into the national phase

Ref document number: 2022797244

Country of ref document: EP

Effective date: 20221108

NENP Non-entry into the national phase

Ref country code: DE