WO2023010912A9 - Image processing method and electronic device - Google Patents

Image processing method and electronic device

Info

Publication number
WO2023010912A9
Authority
WO
WIPO (PCT)
Prior art keywords
image
lut
electronic device
preset
scene
Prior art date
Application number
PCT/CN2022/090630
Other languages
English (en)
French (fr)
Other versions
WO2023010912A1 (zh)
Inventor
肖斌
崔瀚涛
王宇
朱聪超
邵涛
胡树红
Original Assignee
Honor Device Co., Ltd. (荣耀终端有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Honor Device Co., Ltd. (荣耀终端有限公司)
Priority to EP22797244.5A (published as EP4152741A4)
Publication of WO2023010912A1
Publication of WO2023010912A9

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60: Control of cameras or camera modules
    • H04N 23/61: Control of cameras or camera modules based on recognised objects
    • H04N 23/611: Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
    • H04N 23/617: Upgrading or updating of programs or applications for camera control
    • H04N 23/63: Control of cameras or camera modules by using electronic viewfinders
    • H04N 23/631: Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H04N 23/632: Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters, for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters
    • H04N 23/633: Control of cameras or camera modules by using electronic viewfinders for displaying additional information relating to control or operation of the camera

Definitions

  • the present application relates to the field of photography technology, and in particular, to an image processing method and electronic equipment.
  • Existing mobile phones generally have camera and video functions, and more and more people use mobile phones to take photos and videos to record every detail of their lives.
  • the preview image can only be processed using a color look-up table (LUT) pre-configured before shooting, a LUT selected by the user, or a LUT determined by identifying the preview image.
  • the mobile phone can only take photos or videos with the style or display effect corresponding to the above-mentioned preconfigured or selected parameters.
  • the photos or videos taken by the mobile phone have a single style or display effect.
  • This application provides an image processing method and electronic device, which can dynamically adjust the LUT during the process of taking pictures or video recordings, and enrich the display effects obtained by taking pictures or video recordings.
  • this application provides an image processing method.
  • the electronic device can acquire the first image.
  • the first image is an image collected by a camera of the electronic device, and the first image includes a first photographed object.
  • the electronic device can determine the first scene corresponding to the first image, and the first scene is used to identify the scene corresponding to the first photographed object.
  • the electronic device may determine the first LUT according to the first scene.
  • the electronic device can process the first image according to the first LUT to obtain the second image, and display the second image. The display effect of the second image corresponds to the first LUT.
  • the electronic device can dynamically adjust the LUT according to each frame of image acquired by the electronic device.
  • the display effects or styles corresponding to different LUTs can be presented, which can enrich the display effects obtained by taking pictures or recording videos.
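The patent does not fix how a LUT is applied to an image; a common reading is a 3D color lattice indexed by each pixel's RGB value. A minimal sketch under that assumption, using nearest-neighbor lookup instead of the trilinear interpolation a real camera pipeline would use (all names and sizes are illustrative):

```python
def identity_lut(n):
    """Build an n*n*n identity 3D LUT: lut[r][g][b] == (r/(n-1), g/(n-1), b/(n-1))."""
    s = n - 1
    return [[[(r / s, g / s, b / s) for b in range(n)]
             for g in range(n)] for r in range(n)]

def apply_lut(image, lut):
    """Map each 8-bit RGB pixel through a 3D LUT.

    image: list of rows of (r, g, b) tuples with channel values 0..255.
    Nearest-neighbor lattice lookup keeps the sketch short; a real ISP
    would interpolate (e.g. trilinearly) between lattice points.
    """
    n = len(lut) - 1

    def q(v):  # quantize a 0..255 channel value to a lattice index
        return round(v / 255 * n)

    return [[tuple(round(c * 255) for c in lut[q(r)][q(g)][q(b)])
             for (r, g, b) in row] for row in image]

# An identity LUT leaves lattice-aligned colors unchanged.
img = [[(0, 255, 0), (255, 255, 255)]]
assert apply_lut(img, identity_lut(17)) == img
```

Swapping in a non-identity LUT (e.g. one that lifts shadows or warms skin tones) changes the display effect without touching the capture pipeline, which is what lets the LUT be exchanged per frame.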
  • the electronic device may collect a third image.
  • the third image is an image collected by a camera of the electronic device, and the third image includes the second photographed object.
  • the electronic device can determine that the second image corresponds to the second scene, and the second scene is used to identify the scene corresponding to the second photographed object; the electronic device determines the second LUT according to the second scene; the electronic device processes the third image according to the second LUT to obtain the fourth image, and displays the fourth image; the display effect of the fourth image corresponds to the second LUT.
  • the electronic device can use different LUTs to process the images.
  • display effects or styles corresponding to different LUTs can be presented, and the display effects obtained by taking photos or videos can be enriched.
  • the electronic device determines the first LUT according to the first scene, which may include: the electronic device determines the third LUT corresponding to the first scene among the plurality of third LUTs as the first LUT of the first image.
  • the electronic device can identify the shooting scene corresponding to the first image (ie, the first scene), and determine the first LUT based on the shooting scene.
  • a plurality of third LUTs are pre-configured in the electronic device and are used to process images collected by the camera of the electronic device to obtain images with different display effects.
  • Each third LUT corresponds to a display effect in a scene.
  • the electronic device determines the first LUT according to the first scene, which may include: the electronic device determines the third LUT corresponding to the first scene among the plurality of third LUTs as the fourth LUT of the first image; the electronic device calculates the weighted sum of the fourth LUT of the first image and the first LUT of the fifth image to obtain the first LUT.
  • the fifth image is the previous frame image of the first image
  • for the first frame image collected by the electronic device during this shooting process, the third LUT of the previous frame image is a preset LUT.
  • a plurality of third LUTs are pre-configured in the electronic device and are used to process images collected by the camera of the electronic device to obtain images with different display effects. Each third LUT corresponds to a display effect in a scene.
  • when determining the final LUT, the electronic device refers not only to the current frame image but also to the final LUT of the previous frame image.
  • the display effect of multi-frame preview images presented by electronic devices can be optimized, and the user's visual experience during taking pictures or recording videos can be improved.
  • the electronic device calculates the weighted sum of the fourth LUT of the first image and the first LUT of the fifth image to obtain the first LUT, which may include: the electronic device uses the preconfigured first weighting coefficient and second weighting coefficient to calculate the weighted sum of the fourth LUT of the first image and the first LUT of the fifth image, obtaining the first LUT.
  • the first weighting coefficient is the weighting coefficient of the fourth LUT of the first image
  • the second weighting coefficient is the weighting coefficient of the first LUT of the fifth image
  • the sum of the first weighting coefficient and the second weighting coefficient is equal to 1.
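The blend of the current frame's fourth LUT with the previous frame's first LUT, with coefficients summing to 1, amounts to an exponential moving average over per-frame LUTs. A sketch with a LUT reduced to a flat list of entries; the 0.3/0.7 split is illustrative, not from the patent:

```python
def blend_luts(lut_a, lut_b, w_a, w_b):
    """Entry-wise weighted sum of two same-sized LUTs; w_a + w_b must equal 1."""
    assert abs(w_a + w_b - 1.0) < 1e-9
    return [w_a * a + w_b * b for a, b in zip(lut_a, lut_b)]

# First weighting coefficient (current frame's fourth LUT) and second
# weighting coefficient (previous frame's first LUT), summing to 1.
W1, W2 = 0.3, 0.7

preset_lut = [0.0, 0.25, 0.5, 0.75, 1.0]   # stand-in for the preset LUT
prev_first_lut = preset_lut                # first frame: previous LUT is the preset
fourth_lut = [0.1, 0.3, 0.6, 0.8, 1.0]     # scene LUT chosen for the current frame

first_lut = blend_luts(fourth_lut, prev_first_lut, W1, W2)
print(first_lut)
```

Feeding each frame's `first_lut` back in as the next frame's `prev_first_lut` is what damps frame-to-frame LUT jumps when the detected scene flips.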
  • the first weighting coefficient and the second weighting coefficient may be preset weights preconfigured in the electronic device.
  • the first weighting coefficient and the second weighting coefficient can be set by the user in the electronic device.
  • in response to the user's first preset operation, the electronic device may display the first setting item and the second setting item.
  • the first setting item is used to set the first weighting coefficient
  • the second setting item is used to set the second weighting coefficient.
  • the electronic device may use the first weighting coefficient set by the user as the weighting coefficient of the fourth LUT of the first image, and use the second weighting coefficient set by the user as the weighting coefficient of the first LUT of the fifth image.
  • the first preset operation is a click operation on a first preset control displayed by the electronic device, and the first preset control is used to trigger the electronic device to set the weights of the fourth LUT of the first image and the first LUT of the fifth image;
  • or, the first preset operation is the user's click operation on the first physical button of the electronic device.
  • the electronic device is pre-configured with a preset artificial intelligence (artificial intelligence, AI) model (such as the preset AI model b).
  • the preset AI model b has the ability to take the first image and the scene detection result of the first image as input, and output the weight of each third LUT among the plurality of third LUTs.
  • the electronic device can obtain the weight of each third LUT through the preset AI model b; then, calculate the weighted sum of multiple third LUTs based on the obtained weights to obtain the first LUT.
  • the above-mentioned electronic device determines the first LUT according to the first scene, which may include: the electronic device takes the indication information of the first scene and the first image as input, runs a preset AI model, and obtains a plurality of third weighting coefficients of the plurality of third LUTs; the electronic device uses the plurality of third weighting coefficients to calculate the weighted sum of the plurality of third LUTs to obtain the first LUT.
  • the sum of the plurality of third weighting coefficients is 1, and the plurality of third LUTs correspond to the plurality of third weighting coefficients on a one-to-one basis.
  • when determining the first LUT of the first image, the electronic device refers not only to the third LUT corresponding to the first scene of the first image, but also to the third LUTs, among the plurality of third LUTs, that correspond to shooting scenes other than the first scene. In this way, the display effect of the electronic device can be improved.
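With per-scene weights from the preset AI model (the model itself is unspecified here; the scene names and weights below are invented for illustration), the first LUT is a weighted sum over all pre-configured third LUTs rather than the single LUT matching the detected scene. A sketch:

```python
def weighted_sum_luts(luts, weights):
    """Weighted sum over several same-sized LUTs; weights must sum to 1."""
    assert abs(sum(weights) - 1.0) < 1e-9
    size = len(luts[0])
    return [sum(w * lut[i] for w, lut in zip(weights, luts)) for i in range(size)]

# Three pre-configured "third LUTs" (hypothetical scenes), reduced to
# tiny 1-D curves for illustration.
third_luts = {
    "portrait":  [0.0, 0.30, 0.60, 1.0],
    "landscape": [0.0, 0.20, 0.55, 1.0],
    "night":     [0.0, 0.40, 0.70, 1.0],
}
# Hypothetical model output: the detected scene dominates, but the
# LUTs of the other scenes still contribute.
weights = {"portrait": 0.7, "landscape": 0.2, "night": 0.1}

first_lut = weighted_sum_luts(
    [third_luts[k] for k in third_luts],
    [weights[k] for k in third_luts],
)
print(first_lut)
```

Because every scene LUT contributes, an image that is "mostly portrait with some night ambience" gets a mixed effect instead of snapping to a single preset.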
  • the electronic device determines the first LUT according to the first scene, which may include: the electronic device takes the indication information of the first scene and the first image as input, and runs a preset AI model to obtain a plurality of third weighting coefficients of the plurality of third LUTs; the electronic device uses the plurality of third weighting coefficients to calculate the weighted sum of the plurality of third LUTs to obtain the fourth LUT of the first image; the electronic device calculates the weighted sum of the fourth LUT of the first image and the first LUT of the fifth image to obtain the first LUT.
  • the fifth image is the previous frame image of the first image
  • for the first frame image collected by the electronic device during this shooting process, the third LUT of the previous frame image is a preset LUT.
  • the sum of the plurality of third weighting coefficients is 1, and the plurality of third LUTs correspond to the plurality of third weighting coefficients on a one-to-one basis.
  • when determining the final LUT, the electronic device refers not only to the current frame image but also to the final LUT of the previous frame image.
  • the display effect of multi-frame preview images presented by electronic devices can be optimized, and the user's visual experience during taking pictures or recording videos can be improved.
  • before the electronic device obtains the weight of each third LUT through the preset AI model, the electronic device can first train the preset AI model b, so that the preset AI model b has the ability to take the first image and the scene detection result of the first image as input, and output the weight of each third LUT among the plurality of third LUTs.
  • the electronic device can acquire multiple sets of data pairs, where each set of data pairs includes a sixth image and a seventh image, and the sixth image is an image, obtained by processing the seventh image, that satisfies the preset condition. Then, the electronic device can recognize the seventh image and determine the third scene corresponding to the seventh image. Finally, the electronic device can use the seventh image, the sixth image, and the indication information identifying the third scene as input samples to train the preset AI model, so that the preset AI model has the ability to determine the weights with which the weighted sum of the plurality of third LUTs should be calculated, such that the LUT obtained by the weighted sum can process the seventh image to obtain the display effect of the sixth image.
  • the input samples of the preset AI model b additionally include the indication information of the third scene corresponding to the seventh image.
  • the training principle of the preset AI model b is the same as that of the above-mentioned preset AI model; the difference is that the indication information of the third scene corresponding to the seventh image can more clearly indicate the shooting scene corresponding to the seventh image.
  • if the shooting scene of the seventh image is the third scene, the possibility that the seventh image is an image of the third scene is relatively high. Then, setting the weighting coefficient of the third LUT corresponding to the third scene to a larger value helps improve the display effect. It can be seen that the indication information of the third scene can play a guiding role in the training of the preset AI model b, guiding it to train in a direction tending toward the third scene. In this way, the convergence of the preset AI model b can be accelerated and its number of training iterations reduced.
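The training target can be read as: find the weights whose weight-summed LUT maps each seventh image onto its paired sixth image. A gradient-free sketch of that objective, searching a coarse grid over two weights (real training would use a neural model and gradient descent; all values here are illustrative):

```python
def apply_curve(lut, pixels):
    # Nearest-entry lookup: pixels in [0, 1], lut is a sampled 1-D curve.
    n = len(lut) - 1
    return [lut[round(p * n)] for p in pixels]

def loss(weights, luts, seventh, sixth):
    # Blend the LUTs with the candidate weights, apply the blended LUT
    # to the "seventh image", and compare against the "sixth image".
    blended = [sum(w * lut[i] for w, lut in zip(weights, luts))
               for i in range(len(luts[0]))]
    out = apply_curve(blended, seventh)
    return sum((o - t) ** 2 for o, t in zip(out, sixth))

luts = [[0.0, 0.5, 1.0], [0.0, 0.8, 1.0]]   # two "third LUTs"
seventh = [0.0, 0.5, 1.0]                    # input sample (flattened pixels)
sixth = [0.0, 0.65, 1.0]                     # target display effect

# Grid-search the weight of the first LUT; the second gets 1 - w.
best = min(
    ((w, loss([w, 1 - w], luts, seventh, sixth)) for w in [i / 10 for i in range(11)]),
    key=lambda t: t[1],
)
print(best)
```

Here the target mid-tone 0.65 sits exactly halfway between the two LUTs' mid entries (0.5 and 0.8), so the search lands on an equal split, which is the behavior the weighted-sum formulation is meant to learn.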
  • this application provides an image processing method.
  • an electronic device can acquire a first image.
  • the first image is an image collected by a camera of the electronic device, and the first image includes a first shooting object.
  • the electronic device can take the first image as input, run a preset AI model (such as preset AI model a), and obtain a plurality of third weighting coefficients of a plurality of third LUTs.
  • the sum of the plurality of third weighting coefficients is 1, and the plurality of third LUTs correspond to the plurality of third weighting coefficients one-to-one.
  • the electronic device uses multiple third weighting coefficients to calculate the weighted sum of multiple third LUTs to obtain the first LUT.
  • the electronic device processes the first image according to the first LUT to obtain a second image, and displays the second image. The display effect of the second image corresponds to the first LUT.
  • the electronic device can dynamically adjust the LUT according to each frame of image acquired by the electronic device.
  • the display effects or styles corresponding to different LUTs can be presented, which can enrich the display effects obtained by taking pictures or recording videos.
  • when determining the first LUT of the first image, the electronic device refers not only to the third LUT corresponding to the first scene of the first image, but also to the third LUTs, among the plurality of third LUTs, that correspond to shooting scenes other than the first scene. In this way, the display effect of the electronic device can be improved.
  • the electronic device uses the plurality of third weighting coefficients to calculate the weighted sum of the plurality of third LUTs and obtain the first LUT, including: the electronic device uses the plurality of third weighting coefficients to calculate the weighted sum of the plurality of third LUTs, obtaining the fourth LUT of the first image; the electronic device calculates the weighted sum of the fourth LUT of the first image and the first LUT of the fifth image to obtain the first LUT; wherein the fifth image is the previous frame image of the first image, and for the first frame image collected by the electronic device during this shooting process, the third LUT of the previous frame image is the preset LUT.
  • when determining the final LUT, the electronic device refers not only to the current frame image but also to the final LUT of the previous frame image.
  • the display effect of multi-frame preview images presented by electronic devices can be optimized, and the user's visual experience during taking pictures or recording videos can be improved.
  • before the electronic device takes the first image as input, runs the preset AI model, and obtains the plurality of third weighting coefficients of the plurality of third LUTs, the electronic device can train the preset AI model a.
  • the method for the electronic device to train the preset AI model a includes: the electronic device obtains multiple sets of data pairs, where each set of data pairs includes a sixth image and a seventh image, and the sixth image is an image, obtained by processing the seventh image, that satisfies the preset condition.
  • the electronic device takes the seventh image and the sixth image as input samples and trains the preset AI model, so that the preset AI model has the ability to determine the weights with which the weighted sum of the plurality of third LUTs should be calculated, such that the resulting LUT can process the seventh image to obtain the display effect of the sixth image.
  • the user can adjust the weight of the output of the above-mentioned preset AI model a or preset AI model b.
  • the method of the present application may also include: the electronic device responds to the user's second preset operation and displays a plurality of third setting items, wherein each third setting item corresponds to one third LUT and is used to set the third weighting coefficient of that third LUT; the electronic device updates the corresponding third weighting coefficient in response to the user's setting operation on one or more of the plurality of third setting items; the electronic device then uses the updated third weighting coefficients to calculate the weighted sum of the plurality of third LUTs.
  • the above-mentioned second preset operation is the user's click operation on the second preset control, and the second preset control is used to trigger the electronic device to set the weights of the plurality of third LUTs; or, the second preset operation is the user's click operation on the second physical button of the electronic device.
  • the user can adjust the weight of the output of the preset AI model a or the preset AI model b.
  • the electronic device can adjust the LUT according to the user's needs, so that it can capture images with higher user satisfaction.
  • the user can also add a LUT in the electronic device.
  • the method of this application also includes: the electronic device displays one or more fourth setting items in response to the user's third preset operation, wherein the third preset operation is used to trigger the electronic device to add a new display effect, each fourth setting item corresponds to a fifth LUT, and each fifth LUT corresponds to a display effect in a shooting scene; the fifth LUT is different from the third LUTs; in response to the user's selection operation on any fourth setting item in the preview interface, the electronic device saves the fifth LUT corresponding to the fourth setting item selected by the user.
  • the fourth setting item includes a preview image processed using a corresponding fifth LUT, for presenting a display effect corresponding to the fifth LUT.
  • the user can confirm whether a satisfactory LUT is obtained according to the adjusted display effect presented by the electronic device. In this way, the efficiency of users setting new LUTs can be improved.
  • the electronic device obtains the first image, which may include: the electronic device collects the first image in the preview interface when taking photos, the preview interface before video recording, or the viewfinder interface during video recording. That is to say, this method can be applied to the photo-taking scene of the electronic device, the recording scene, and the scene before recording in the recording mode.
  • the first image may be an image collected by a camera of the electronic device.
  • the first image may be a preview image obtained from an image collected by a camera of the electronic device.
  • the present application provides an electronic device, which includes a memory, a display screen, one or more cameras, and one or more processors.
  • the memory, display screen, camera and processor are coupled.
  • the camera is used to collect images
  • the display screen is used to display the images collected by the camera or the images generated by the processor.
  • computer program code is stored in the memory; the computer program code includes computer instructions. When the computer instructions are executed by the processor, the electronic device performs the method described in the first aspect or the second aspect and any possible design manner thereof.
  • the present application provides an electronic device, which includes a memory, a display screen, one or more cameras, and one or more processors; the memory, display screen, camera, and processor are coupled. Computer program code is stored in the memory; the computer program code includes computer instructions. When the computer instructions are executed by the processor, the electronic device performs the following steps: acquiring a first image, where the first image is an image collected by a camera of the electronic device and includes a first photographed object; determining a first scene corresponding to the first image, where the first scene is used to identify the scene corresponding to the first photographed object; determining a first color look-up table LUT according to the first scene; and processing the first image according to the first LUT to obtain a second image, and displaying the second image, where the display effect of the second image corresponds to the first LUT.
  • when the computer instructions are executed by the processor, the electronic device also performs the following steps: after displaying the second image, collecting a third image, where the third image is an image collected by the camera of the electronic device and includes the second photographed object; determining that the second image corresponds to the second scene, where the second scene is used to identify the scene corresponding to the second photographed object; determining the second LUT according to the second scene; and processing the third image according to the second LUT to obtain a fourth image, and displaying the fourth image, where the display effect of the fourth image corresponds to the second LUT.
  • when the computer instruction is executed by the processor, the electronic device also performs the following step: collecting the first image in the preview interface when the electronic device takes pictures, the preview interface before the electronic device records video, or the viewfinder interface while the electronic device is recording.
  • the first image is an image collected by a camera of the electronic device; or, the first image is a preview image obtained from an image collected by the camera of the electronic device.
  • when the computer instruction is executed by the processor, the electronic device also performs the following step: determining the third LUT corresponding to the first scene among the plurality of third LUTs as the first LUT of the first image.
  • a plurality of third LUTs are pre-configured in the electronic device and are used to process images collected by the camera of the electronic device to obtain images with different display effects.
  • Each third LUT corresponds to a display effect in a scene.
  • when the computer instruction is executed by the processor, the electronic device also performs the following steps: determining the third LUT corresponding to the first scene among the plurality of third LUTs as the fourth LUT of the first image, where the plurality of third LUTs are pre-configured in the electronic device for processing images collected by the camera to obtain images with different display effects, and each third LUT corresponds to a display effect in a scene; and calculating the weighted sum of the fourth LUT of the first image and the first LUT of the fifth image to obtain the first LUT, where the fifth image is the previous frame image of the first image, and for the first frame image collected during this shooting process, the third LUT of the previous frame image is the preset LUT.
  • when the computer instruction is executed by the processor, the electronic device also performs the following step: using the preconfigured first weighting coefficient and second weighting coefficient to calculate the weighted sum of the fourth LUT of the first image and the first LUT of the fifth image, obtaining the first LUT.
  • the first weighting coefficient is the weighting coefficient of the fourth LUT of the first image
  • the second weighting coefficient is the weighting coefficient of the first LUT of the fifth image
  • the sum of the first weighting coefficient and the second weighting coefficient is equal to 1.
  • the smaller the first weighting coefficient is, the larger the second weighting coefficient is, and the smoother the transition effect of the multi-frame second image is.
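The smoothness claim can be checked numerically: iterating the blend toward a new scene LUT, a smaller first weighting coefficient moves the result in smaller per-frame steps. A sketch with each LUT reduced to a single scalar entry (values illustrative):

```python
def step(prev, target, w1):
    # One frame of blending: w1 weights the new scene LUT (the "fourth LUT"),
    # (1 - w1) weights the previous frame's result (the "first LUT").
    return w1 * target + (1 - w1) * prev

def trajectory(w1, frames=5, start=0.0, target=1.0):
    """Blended LUT entry over successive frames after a scene change."""
    vals, cur = [], start
    for _ in range(frames):
        cur = step(cur, target, w1)
        vals.append(round(cur, 4))
    return vals

print(trajectory(0.8))  # large first coefficient: converges to the new LUT quickly
print(trajectory(0.2))  # small first coefficient: smoother, more gradual transition
```

The second trajectory approaches the target in smaller increments, matching the stated trade-off: lowering the first weighting coefficient (and so raising the second) smooths the transition of the multi-frame second image at the cost of slower adaptation.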
  • when the computer instructions are executed by the processor, the electronic device also performs the following steps: before using the preconfigured first weighting coefficient and second weighting coefficient to calculate the weighted sum of the fourth LUT of the first image and the first LUT of the fifth image to obtain the first LUT, displaying the first setting item and the second setting item in response to the first preset operation, where the first setting item is used to set the first weighting coefficient and the second setting item is used to set the second weighting coefficient; and, in response to the user's setting operation on the first setting item and/or the second setting item, using the first weighting coefficient set by the user as the weighting coefficient of the fourth LUT of the first image, and using the second weighting coefficient set by the user as the weighting coefficient of the first LUT of the fifth image.
  • the first preset operation is a click operation on a first preset control displayed by the electronic device, and the first preset control is used to trigger the electronic device to set the weight of the fourth LUT of the first image and the first LUT of the fifth image.
  • the first preset operation is the user's click operation on the first physical button of the electronic device.
  • when the computer instruction is executed by the processor, the electronic device also performs the following steps: taking the indication information of the first scene and the first image as input, running the preset AI model to obtain a plurality of third weighting coefficients of the plurality of third LUTs, where the sum of the plurality of third weighting coefficients is 1 and the plurality of third LUTs correspond to the plurality of third weighting coefficients one-to-one; and using the plurality of third weighting coefficients to calculate the weighted sum of the plurality of third LUTs, obtaining the first LUT.
  • when the computer instruction is executed by the processor, the electronic device also performs the following steps: taking the indication information of the first scene and the first image as input, running the preset AI model to obtain a plurality of third weighting coefficients of the plurality of third LUTs, where the sum of the plurality of third weighting coefficients is 1 and the plurality of third LUTs correspond to the plurality of third weighting coefficients one-to-one; using the plurality of third weighting coefficients to calculate the weighted sum of the plurality of third LUTs, obtaining the fourth LUT of the first image; and calculating the weighted sum of the fourth LUT of the first image and the first LUT of the fifth image to obtain the first LUT, where the fifth image is the previous frame image of the first image, and for the first frame image collected by the electronic device during this shooting process, the third LUT of the previous frame image is the preset LUT.
  • when the computer instructions are executed by the processor, the electronic device also performs the following steps: before determining the first LUT according to the first scene, obtaining multiple sets of data pairs, where each set of data pairs includes a sixth image and a seventh image, and the sixth image is an image, obtained by processing the seventh image, that meets the preset conditions; identifying the seventh image and determining the third scene corresponding to the seventh image; and using the seventh image, the sixth image, and the indication information identifying the third scene as input samples to train the preset AI model, so that the preset AI model has the ability to determine the weights with which the weighted sum of the plurality of third LUTs should be calculated, such that the resulting LUT can process the seventh image to obtain the display effect of the sixth image.
  • when the computer instructions are executed by the processor, the electronic device also performs the following steps: in response to the second preset operation, displaying a plurality of third setting items, where each third setting item corresponds to one third LUT and is used to set the third weighting coefficient of that third LUT; and, in response to the user's setting operation on one or more of the plurality of third setting items, updating the corresponding third weighting coefficient, where the electronic device uses the updated plurality of third weighting coefficients to calculate the weighted sum of the plurality of third LUTs.
  • The second preset operation is the user's click operation on the second preset control, where the second preset control is used to trigger the electronic device to set the weights of the multiple third LUTs; or, the second preset operation is the user's click operation on a second physical button of the electronic device.
  • When the computer instructions are executed by the processor, the electronic device also performs the following steps: in response to the third preset operation, displaying one or more fourth setting items, where the third preset operation is used to trigger the electronic device to add a new display effect, each fourth setting item corresponds to a fifth LUT, each fifth LUT corresponds to a display effect in a shooting scene, and the fifth LUTs are different from the third LUTs; and in response to the user's selection operation on any fourth setting item, saving the fifth LUT corresponding to the fourth setting item selected by the user.
  • The above-mentioned fourth setting item includes a preview image processed with the corresponding fifth LUT, so as to present the display effect corresponding to that fifth LUT.
  • The present application provides an electronic device, which includes a memory, a display screen, one or more cameras, and one or more processors. The memory, the display screen, the camera, and the processor are coupled. Computer program code is stored in the memory; the computer program code includes computer instructions. When the computer instructions are executed by the processor, the electronic device performs the following steps: acquiring a first image, where the first image is collected by a camera of the electronic device.
  • The first image includes a first photographic object; taking the first image as input, running a preset artificial intelligence (AI) model to obtain multiple third weighting coefficients of multiple third color lookup tables (LUTs), where the sum of the multiple third weighting coefficients is 1 and the multiple third LUTs correspond to the multiple third weighting coefficients one-to-one; using the multiple third weighting coefficients, calculating the weighted sum of the multiple third LUTs to obtain the first LUT; and processing the first image according to the first LUT to obtain a second image and displaying the second image, where the display effect of the second image corresponds to the first LUT.
  • When the computer instructions are executed by the processor, the electronic device also performs the following steps: using the multiple third weighting coefficients, calculating the weighted sum of the multiple third LUTs to obtain the fourth LUT of the first image; and calculating the weighted sum of the fourth LUT of the first image and the first LUT of the fifth image to obtain the first LUT, where the fifth image is the previous frame image of the first image, and the first LUT of the frame preceding the first frame image captured by the electronic device during this shooting process is the preset LUT.
  • When the computer instructions are executed by the processor, the electronic device also performs the following steps: before taking the first image as input and running the preset AI model to obtain the multiple third weighting coefficients of the multiple third LUTs, obtaining multiple sets of data pairs, where each set of data pairs includes a sixth image and a seventh image, and the sixth image is an image satisfying preset conditions obtained by processing the seventh image; and using the seventh image and the sixth image as input samples to train the preset AI model, so that the preset AI model has the ability to determine which weights to use in the weighted sum of the multiple third LUTs such that processing the seventh image with the resulting LUT yields the display effect of the sixth image.
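The training objective described above can be illustrated with a toy sketch. This is not the actual AI model: it assumes the per-LUT outputs of the seventh image are already computed, collapses images to flat pixel lists, and replaces network training with a grid search over two weights, purely to show what "find the weights whose blended result matches the sixth image" means. All function names are hypothetical.

```python
def blend_outputs(weights, lut_outputs):
    """Pixel-wise weighted sum of the seventh image processed by each LUT."""
    return [sum(w * out[i] for w, out in zip(weights, lut_outputs))
            for i in range(len(lut_outputs[0]))]

def loss(weights, lut_outputs, target):
    """Squared error between the blended result and the sixth (target) image."""
    blended = blend_outputs(weights, lut_outputs)
    return sum((b - t) ** 2 for b, t in zip(blended, target))

def fit_two_weights(lut_outputs, target, steps=101):
    """Grid-search the pair (w, 1 - w) minimizing the loss; a real model
    would instead learn to predict the weights from the image itself."""
    best_w = min((i / (steps - 1) for i in range(steps)),
                 key=lambda w: loss((w, 1.0 - w), lut_outputs, target))
    return best_w, 1.0 - best_w
```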
  • The present application provides a computer-readable storage medium. The computer-readable storage medium includes computer instructions. When the computer instructions are run on an electronic device, the electronic device is caused to perform the method described in the first aspect or the second aspect and any possible design manner thereof.
  • The present application provides a computer program product. When the computer program product is run on a computer, the computer is caused to execute the method described in the first aspect or the second aspect and any possible design manner thereof.
  • the computer may be the electronic device described above.
  • Figure 1 is a schematic diagram of the display effects or styles corresponding to various LUTs
  • Figure 2 is a schematic viewfinder interface for taking pictures on a mobile phone
  • Figure 3 is a schematic viewfinder interface for video recording on a mobile phone
  • Figure 4 is a schematic diagram of the hardware structure of an electronic device provided by an embodiment of the present application.
  • Figure 5 is a flow chart of an image processing method provided by an embodiment of the present application.
  • Figure 6 is a schematic viewfinder interface for taking pictures of a mobile phone provided by an embodiment of the present application.
  • Figure 7A is a flow chart of another image processing method provided by an embodiment of the present application.
  • FIG. 7B is a schematic diagram of the principle of determining the final LUT (i.e., the first LUT) of the T-th frame image provided by an embodiment of the present application;
  • Figure 7C is a flow chart of another image processing method provided by an embodiment of the present application.
  • Figure 7D is a schematic diagram of the principle of determining the final LUT (i.e., the first LUT) of the T-th frame image provided by an embodiment of the present application;
  • Figure 7E is a schematic viewfinder interface for taking photos of another mobile phone provided by an embodiment of the present application.
  • Figure 7F is a schematic viewfinder interface for taking photos of another mobile phone provided by an embodiment of the present application.
  • Figure 8 is a schematic diagram of a viewing interface for video recording on a mobile phone provided by an embodiment of the present application.
  • Figure 9 is a schematic diagram of a viewfinding interface for video recording on another mobile phone provided by an embodiment of the present application.
  • Figure 10 is a schematic diagram of a viewfinding interface for video recording on another mobile phone provided by an embodiment of the present application.
  • Figure 11A is a flow chart of another image processing method provided by an embodiment of the present application.
  • Figure 11B is another schematic diagram of the principle of determining the final LUT (i.e., the first LUT) of the T-th frame image provided by the embodiment of the present application;
  • Figure 11C is another schematic diagram of the principle of determining the final LUT (i.e., the first LUT) of the T-th frame image provided by the embodiment of the present application;
  • Figure 12A is a flow chart of another image processing method provided by an embodiment of the present application.
  • Figure 12B is another schematic diagram of the principle of determining the final LUT (i.e., the first LUT) of the T-th frame image provided by the embodiment of the present application;
  • Figure 12C is another schematic diagram of the principle of determining the final LUT (i.e., the first LUT) of the T-th frame image provided by the embodiment of the present application;
  • Figure 13 is a flow chart of another image processing method provided by an embodiment of the present application.
  • Figure 14A is a schematic viewfinder interface of another mobile phone video recording provided by an embodiment of the present application.
  • Figure 14B is a schematic viewfinder interface of another mobile phone video recording provided by an embodiment of the present application.
  • Figure 15A is another schematic diagram of the principle of determining the final LUT (i.e., the first LUT) of the T-th frame image provided by the embodiment of the present application;
  • Figure 15B is another schematic diagram of the principle of determining the final LUT (i.e., the first LUT) of the T-th frame image provided by the embodiment of the present application;
  • Figure 16A is a schematic viewfinder interface of another mobile phone video recording provided by an embodiment of the present application.
  • Figure 16B is a schematic viewfinder interface of another mobile phone video recording provided by an embodiment of the present application.
  • Figure 17A is another schematic diagram of the principle of determining the final LUT (i.e., the fourth LUT) of the T-th frame image provided by an embodiment of the present application;
  • Figure 17B is another schematic diagram of the principle of determining the final LUT (i.e., the fourth LUT) of the T-th frame image provided by the embodiment of the present application;
  • Figure 18A is a schematic viewfinder interface of another mobile phone video recording provided by an embodiment of the present application.
  • Figure 18B is a schematic viewfinder interface of another mobile phone video recording provided by an embodiment of the present application.
  • Figure 18C is a schematic viewfinder interface of another mobile phone video recording provided by an embodiment of the present application.
  • Figure 19 is a schematic structural diagram of a chip system provided by an embodiment of the present application.
  • The terms "first" and "second" are used for descriptive purposes only and cannot be understood as indicating or implying relative importance or implicitly indicating the quantity of the indicated technical features. Therefore, features defined as "first" and "second" may explicitly or implicitly include one or more of these features. In the description of the embodiments, unless otherwise specified, "plurality" means two or more.
  • RGB (Red Green Blue): the image collected by the camera is composed of pixels, and each pixel is composed of a red sub-pixel, a green sub-pixel and a blue sub-pixel. The value range of each of R, G, and B is 0-255. For example, RGB(255,0,0) represents pure red, RGB(0,255,0) represents pure green, and RGB(0,0,255) represents pure blue. In short, by mixing these three colors in different proportions, a wide variety of colors can be obtained.
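The RGB representation described above can be sketched as follows; `rgb` is a hypothetical helper, not part of any camera API.

```python
def rgb(r, g, b):
    """Build an RGB pixel, clamping each channel to the valid 0-255 range."""
    return tuple(max(0, min(255, int(c))) for c in (r, g, b))

pure_red = rgb(255, 0, 0)
pure_green = rgb(0, 255, 0)
pure_blue = rgb(0, 0, 255)
# Mixing the three primaries in different proportions yields other colors,
# e.g. equal red and green appear as yellow on the display.
yellow = rgb(255, 255, 0)
```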
  • Color lookup table (LUT): it can also be called a LUT file or LUT parameters. It is an RGB mapping table, used to represent the correspondence between RGB values before and after adjustment. An image consists of many pixels, each represented by an RGB value. The display screen of the electronic device can display the image based on the RGB value of each pixel in the image; in other words, these RGB values tell the display how to emit light so as to mix a variety of colors for the user. To change the color (or style, or effect) of the image, these RGB values can be adjusted.
  • Figure 1 shows an example of a LUT. After mapping by the LUT shown in Table 1, different input RGB values are mapped to output RGB values such as (6,9,4), (66,17,47), and (255,247,243); for example, if the input RGB value is (94,14,171), the output RGB value after mapping by the LUT shown in Table 1 is (117,82,187).
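The mapping described above can be sketched as follows. Real camera LUTs are dense 3D grids sampled with interpolation; here a sparse dictionary with an identity fallback stands in for the LUT of Table 1, and only the one complete input/output pair given in the text is included.

```python
# Sparse stand-in for the LUT of Table 1: unlisted inputs map to themselves.
# Real LUTs are dense 3D grids sampled with (tri)linear interpolation.
lut = {
    (94, 14, 171): (117, 82, 187),  # the one complete pair given in the text
}

def apply_lut(pixel, lut):
    """Map one RGB triple through the lookup table."""
    return lut.get(pixel, pixel)

def process_image(pixels, lut):
    """Map every pixel of an image through the same LUT."""
    return [apply_lut(p, lut) for p in pixels]
```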
  • the display effect of the image without LUT processing is different from the display effect of the image processed by LUT; using different LUTs to process the same image can obtain different styles of display effects.
  • the "display effect" of an image described in the embodiments of this application refers to the image effect that can be observed by human eyes after the image is displayed on the display screen.
  • For example, LUT 1, LUT 2, and LUT 3 shown in Figure 1 are different LUTs. Using LUT 1 to process the original image 100 collected by the camera, the image 101 shown in Figure 1 can be obtained. Using LUT 2 to process the original image 100, the image 102 shown in Figure 1 can be obtained. Using LUT 3 to process the original image 100, the image 103 shown in Figure 1 can be obtained. Comparing the image 101, the image 102 and the image 103 shown in Figure 1 shows that their display effects are different.
  • In conventional solutions, the preview image can only be processed using a LUT preconfigured before shooting, a LUT selected by the user, or a LUT determined by identifying the preview image.
  • the mobile phone may display the photo-taking viewfinder interface 201 shown in (a) in Figure 2 in response to the user's click operation on the icon of the camera application.
  • the viewfinder interface 201 for taking pictures may include a preview image 202 collected by the camera and an AI shooting switch 203.
  • the preview image 202 is an image that has not undergone LUT processing.
  • the AI shooting switch 203 is used to trigger the mobile phone to recognize the shooting scene corresponding to the preview image 202.
  • the mobile phone can receive the user's click operation on the AI shooting switch 203.
  • the mobile phone can identify the shooting scene (such as a character scene) corresponding to the preview image 202.
  • multiple preset LUTs can be saved in the mobile phone, and each preset LUT corresponds to a shooting scene.
  • the mobile phone can save preset LUTs corresponding to character scenes, preset LUTs corresponding to food scenes, preset LUTs corresponding to plant scenes, preset LUTs corresponding to animal scenes, and preset LUTs corresponding to sea scenes, etc. It should be noted that using the LUT corresponding to each shooting scene to process the image of the shooting scene can improve the display effect of the shooting scene.
  • the mobile phone can process the preview image 202 using the preset LUT corresponding to the recognized shooting scene.
  • the mobile phone uses the preset LUT corresponding to the above-mentioned shooting scene to process the preview image 202, and can obtain the preview image 205 shown in (b) in Figure 2.
  • In response to the user's click operation on the AI shooting switch 203, the mobile phone can display the viewfinding interface 204 for taking pictures shown in (b) in Figure 2, and the viewfinding interface 204 includes the preview image 205.
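The scene-to-preset-LUT selection described above can be sketched as follows; the scene names and LUT identifiers are hypothetical placeholders.

```python
# Hypothetical mapping from a recognized shooting scene to a preset LUT
# identifier; the preset LUTs themselves are opaque here.
PRESET_LUTS = {
    "person": "preset_lut_person",
    "food": "preset_lut_food",
    "plant": "preset_lut_plant",
    "animal": "preset_lut_animal",
    "sea": "preset_lut_sea",
}

def select_preset_lut(scene):
    """Return the preset LUT for a recognized shooting scene, or None if
    the scene is not recognized (the phone then shows the unprocessed image)."""
    return PRESET_LUTS.get(scene)
```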
  • the mobile phone may display the video viewing viewfinder interface 301 shown in (a) of Figure 3 .
  • the video framing interface 301 may include a preview image 303 collected by the camera and a shooting style option 302. This preview image 303 is an image that has not undergone LUT processing.
  • the mobile phone may then receive the user's click operation on the shooting style option 302.
  • the mobile phone may display the style selection interface 304 shown in (b) of Figure 3.
  • the style selection interface 304 is used to prompt the user to select a shooting style/effect for recording.
  • the style selection interface 304 may include prompt information "Please select the shooting style/effect you need" 304.
  • The style selection interface 304 may also include multiple style options, such as an original image option, a ** style option, a ## style option and a && style option. Each style option corresponds to a preset LUT and is used to trigger the phone to use the corresponding preset LUT to process the preview image of the video.
  • the above multiple styles can include: natural style, gray tone style, oil painting style, black and white style, travel style, food style, landscape style, character style, Pet style or still life style, etc.
  • In response to the user's selection of the ## style option, the mobile phone can use the preset LUT corresponding to the ## style to process the preview image 306 of the video recording. For example, the mobile phone can display the viewfinder interface 305 of the video recording shown in (c) of Figure 3.
  • the video framing interface 305 may include a preview image 306 .
  • The original image option shown in (b) in Figure 3 corresponds to the image that has not been processed with a LUT, the ** style option corresponds to the image processed with the ** style LUT, the ## style option corresponds to the image processed with the ## style LUT, and the && style option corresponds to the image processed with the && style LUT. The four images shown in (b) in Figure 3 therefore have different display effects.
  • solutions using conventional technology can only use LUTs preconfigured before shooting, LUTs selected by the user, or LUTs determined by identifying the preview image to process the preview image.
  • the mobile phone can only take photos or videos of the style or display effect corresponding to the above-mentioned pre-configured LUT, the LUT selected by the user, or the LUT determined by recognizing the preview image.
  • the photos or videos taken by mobile phones have a single style or display effect, which cannot meet the diverse shooting needs of current users.
  • Embodiments of the present application provide an image processing method, which can be applied to electronic devices including cameras.
  • Specifically, the electronic device can determine the scene (i.e., the first scene) corresponding to each frame of image collected by the camera, such as the first image. Then, the electronic device can determine the first LUT corresponding to the first scene. Finally, the electronic device can use the first LUT of this frame of image to perform image processing on the first image to obtain a second image, and display the second image.
  • the display effect of the second image is the same as the display effect corresponding to the first LUT.
  • the electronic device can dynamically adjust the LUT according to each frame of image acquired by the electronic device.
  • the display effects or styles corresponding to different LUTs can be presented, which can enrich the display effects obtained by taking pictures or recording videos.
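The per-frame dynamic adjustment described above can be sketched as follows. This is an illustrative sketch under stated assumptions: `predict_weights` stands in for the preset AI model, LUTs are flat lists of numbers, and the equal 1/2 weighting between the current blend and the previous frame's LUT is assumed rather than taken from the text.

```python
def process_stream(frames, third_luts, preset_lut, predict_weights):
    """Per-frame dynamic LUT selection: each frame's final (first) LUT is
    the average of this frame's weighted LUT blend and the previous frame's
    LUT, so the display effect adapts smoothly as the captured scene changes."""
    prev = preset_lut  # the frame before the first frame uses the preset LUT
    for frame in frames:
        weights = predict_weights(frame)  # AI model output, weights sum to 1
        blended = [sum(w * lut[i] for w, lut in zip(weights, third_luts))
                   for i in range(len(preset_lut))]
        current = [(b + p) / 2.0 for b, p in zip(blended, prev)]
        prev = current
        yield current  # the LUT used to render this frame's preview image
```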
  • For example, the electronic device in the embodiments of the present application may be a portable computer (such as a mobile phone), a tablet computer, a notebook computer, a personal computer (PC), a wearable electronic device (such as a smart watch), an augmented reality (AR)/virtual reality (VR) device, a vehicle-mounted computer, etc.
  • FIG. 4 shows a schematic structural diagram of an electronic device 100 provided by an embodiment of the present application.
  • The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, a headphone interface 170D, a sensor module 180, a button 190, a motor 191, an indicator 192, a camera 193, a display screen 194, and a subscriber identification module (SIM) card interface 195, etc.
  • The above-mentioned sensor module 180 may include a pressure sensor, a gyroscope sensor, an air pressure sensor, a magnetic sensor, an acceleration sensor, a distance sensor, a proximity light sensor, a fingerprint sensor 180A, a temperature sensor, a touch sensor 180B, an ambient light sensor, a bone conduction sensor, etc.
  • the structure illustrated in the embodiment of the present invention does not constitute a specific limitation on the electronic device 100 .
  • the electronic device 100 may include more or fewer components than shown in the figures, or some components may be combined, some components may be separated, or some components may be arranged differently.
  • the components illustrated may be implemented in hardware, software, or a combination of software and hardware.
  • the processor 110 may include one or more processing units.
  • The processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, a neural-network processing unit (NPU), and/or a micro controller unit (MCU), etc.
  • different processing units can be independent devices or integrated in one or more processors.
  • the controller may be the nerve center and command center of the electronic device 100 .
  • the controller can generate operation control signals based on the instruction operation code and timing signals to complete the control of fetching and executing instructions.
  • the processor 110 may also be provided with a memory for storing instructions and data.
  • The memory in the processor 110 is a cache memory. This memory may hold instructions or data that have just been used or recycled by the processor 110. If the processor 110 needs to use the instructions or data again, it can call them directly from this memory. This avoids repeated access, reduces the waiting time of the processor 110, and thus improves the efficiency of the system.
  • processor 110 may include one or more interfaces.
  • Interfaces can include an inter-integrated circuit (I2C) interface, a serial peripheral interface (SPI), an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, etc.
  • the interface connection relationships between the modules illustrated in the embodiment of the present invention are only schematic illustrations and do not constitute a structural limitation of the electronic device 100 .
  • the electronic device 100 may also adopt different interface connection methods in the above embodiments, or a combination of multiple interface connection methods.
  • the charging management module 140 is used to receive charging input from the charger.
  • the power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110.
  • the power management module 141 receives input from the battery 142 and/or the charging management module 140, and supplies power to the processor 110, internal memory 121, external memory, display screen 194, camera 193, wireless communication module 160, etc.
  • the power management module 141 and the charging management module 140 may also be provided in the same device.
  • the wireless communication function of the electronic device 100 can be implemented through the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor and the baseband processor.
  • Antenna 1 and Antenna 2 are used to transmit and receive electromagnetic wave signals.
  • Each antenna in electronic device 100 may be used to cover a single or multiple communication frequency bands. Different antennas can also be reused to improve antenna utilization. For example: Antenna 1 can be reused as a diversity antenna for a wireless LAN. In other embodiments, antennas may be used in conjunction with tuning switches.
  • the mobile communication module 150 can provide solutions for wireless communication including 2G/3G/4G/5G applied on the electronic device 100 .
  • The wireless communication module 160 can provide wireless communication solutions applied on the electronic device 100, including wireless local area network (WLAN) (such as a Wi-Fi network), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), etc.
  • the electronic device 100 implements display functions through a GPU, a display screen 194, an application processor, and the like.
  • the GPU is an image processing microprocessor and is connected to the display screen 194 and the application processor. GPUs are used to perform mathematical and geometric calculations for graphics rendering.
  • Processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
  • the display screen 194 is used to display images, videos, etc.
  • the display is a touch screen.
  • the electronic device 100 may include 1 or N display screens 194, where N is a positive integer greater than 1.
  • the electronic device 100 can implement the shooting function through an ISP, a camera 193, a video codec, a GPU, a display screen 194, an application processor, and the like.
  • the ISP is used to process the data fed back by the camera 193.
  • Camera 193 is used to capture still images or video.
  • the electronic device 100 may include 1 or N cameras 193, where N is a positive integer greater than 1.
  • NPU is a neural network (NN) computing processor.
  • the NPU can realize intelligent cognitive applications of the electronic device 100, such as: film status recognition, image repair, image recognition, face recognition, speech recognition, text understanding, etc.
  • the external memory interface 120 can be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the electronic device 100.
  • the external memory card communicates with the processor 110 through the external memory interface 120 to implement the data storage function. Such as saving music, videos, etc. files in external memory card.
  • Internal memory 121 may be used to store computer executable program code, which includes instructions.
  • the processor 110 executes instructions stored in the internal memory 121 to execute various functional applications and data processing of the electronic device 100 .
  • the internal memory 121 may include a program storage area and a data storage area. Among them, the stored program area can store an operating system, at least one application program required for a function (such as a sound playback function, an image playback function, etc.).
  • the storage data area may store data created during use of the electronic device 100 (such as audio data, phone book, etc.).
  • the internal memory 121 may include high-speed random access memory, and may also include non-volatile memory, such as at least one disk storage device, flash memory device, universal flash storage (UFS), etc.
  • the electronic device 100 can implement audio functions through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the headphone interface 170D, and the application processor. Such as music playback, recording, etc.
  • the fingerprint sensor 180A is used to collect fingerprint information.
  • the electronic device 100 can use the fingerprint characteristics of the collected fingerprint information to perform user identity verification (ie, fingerprint recognition) to achieve fingerprint unlocking, access to application locks, fingerprint photography, fingerprint answering of incoming calls, etc.
  • Touch sensor 180B is also called “touch panel (TP)".
  • the touch sensor 180B can be disposed on the display screen 194.
  • the touch sensor 180B and the display screen 194 form a touch screen, which is also called a "touch screen”.
  • the touch sensor 180B is used to detect a touch operation on or near the touch sensor 180B.
  • the touch sensor can pass the detected touch operation to the application processor to determine the touch event type.
  • Visual output related to the touch operation may be provided through display screen 194 .
  • the touch sensor 180B may also be disposed on the surface of the electronic device 100 at a location different from that of the display screen 194 .
  • the buttons 190 include a power button, a volume button, etc.
  • the motor 191 can generate vibration prompts.
  • the indicator 192 may be an indicator light, which may be used to indicate charging status, power changes, or may be used to indicate messages, missed calls, notifications, etc.
  • the SIM card interface 195 is used to connect a SIM card.
  • Embodiments of the present application provide an image processing method, which can be applied to electronic devices including cameras and display screens (such as touch screens). Taking the above electronic device as a mobile phone as an example, as shown in Figure 5, the image processing method may include S501-S504.
  • the mobile phone obtains the first image.
  • the first image is an image collected by the camera of the mobile phone, and the first image includes the first photographed object.
  • the mobile phone can collect the first image on the preview interface of the mobile phone taking pictures.
  • the mobile phone may display the preview interface 601 shown in (a) of Figure 6 .
  • the preview interface 601 includes the first image 602 collected by the camera of the mobile phone.
  • the first image 602 is an image without LUT processing.
  • the mobile phone can collect the first image on the preview interface before recording on the mobile phone.
  • the mobile phone may display the preview interface 801 shown in (a) of Figure 8 .
  • the preview interface 801 includes the first image 802 collected by the camera of the mobile phone.
  • the first image 802 is an image without LUT processing.
  • the mobile phone can collect the first image in the viewfinder interface (also called the preview interface) where the mobile phone is recording.
  • the viewfinder interface 1001 for recording shown in (a) of FIG. 10 is a viewfinder interface that has not yet started recording, and the viewfinder interface 1001 includes a preview image 1002 .
  • the mobile phone can display the preview interface 1003 shown in (b) of Figure 10.
  • the preview interface 1003 includes the first image 1004 collected by the camera of the mobile phone.
  • the first image 1004 is an image without LUT processing.
  • the above-mentioned first image may be an image collected by the camera of the mobile phone.
  • the first image may be an original image collected by the camera of the mobile phone, and the first image has not been image processed by the ISP.
  • the first image may be a preview image obtained from an image collected by a camera of the mobile phone.
  • the first image may be a preview image obtained after image processing is performed on the original image collected by the camera of the mobile phone.
  • the mobile phone determines the first scene corresponding to the first image. Wherein, the first scene is used to identify the scene corresponding to the first shooting object.
  • the mobile phone determines the first LUT based on the first scene.
  • multiple third LUTs may be pre-configured in the mobile phone.
  • the plurality of third LUTs may also be called a plurality of preset LUTs.
  • the plurality of third LUTs are used to process the preview images collected by the camera to obtain images with different display effects.
  • Each third LUT corresponds to a display effect in a shooting scene.
  • image 101 is obtained by using LUT 1 (i.e., the third LUT 1, also known as preset LUT 1) to process the original image 100
  • image 102 is obtained by using LUT 2 (i.e., the third LUT 2, also known as preset LUT 2) to process the original image 100.
  • Image 103 is obtained by processing the original image 100 using LUT 3 (i.e., the third LUT 3, also called preset LUT 3). By comparison, image 101, image 102 and image 103 present different display effects. In other words, preset LUT 1, preset LUT 2 and preset LUT 3 can correspond to different display effects or styles.
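The role of a preset LUT can be illustrated with a toy example. The sketch below is not the patent's implementation: the table values, the name `WARM_LUT`, and the 2×2×2 resolution are all hypothetical (real preset LUTs are typically 17³ or 33³ tables applied with trilinear interpolation). It only shows how mapping every pixel of an original image through different color lookup tables yields different display effects, as with images 101-103.

```python
# Toy illustration: a 3D LUT maps an input RGB color to an output RGB color.
# This 2x2x2 nearest-corner version only demonstrates the indexing idea.

# Hypothetical preset LUT: each corner of the RGB cube maps to a new color.
WARM_LUT = {
    (0, 0, 0): (10, 5, 0),
    (0, 0, 1): (20, 10, 200),
    (0, 1, 0): (30, 230, 20),
    (0, 1, 1): (40, 220, 210),
    (1, 0, 0): (255, 40, 30),
    (1, 0, 1): (250, 60, 220),
    (1, 1, 0): (255, 240, 60),
    (1, 1, 1): (255, 250, 235),
}

def apply_lut(pixel, lut):
    """Map one 8-bit RGB pixel through a 2x2x2 LUT by nearest cube corner."""
    corner = tuple(0 if channel < 128 else 1 for channel in pixel)
    return lut[corner]

def process_image(image, lut):
    """Apply the LUT to every pixel of a row-major image (like image 100 -> 101)."""
    return [[apply_lut(pixel, lut) for pixel in row] for row in image]
```

Processing the same original image with two different tables produces two differently styled results, which is the relationship between the original image 100 and images 101-103.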
  • different display effects may be display effects under different shooting scenes.
  • the shooting scene can be: people scene, travel scene, food scene, landscape scene, pet scene or still life scene, etc.
  • the shooting scenes described in the embodiments of this application correspond to the display effects or styles one-to-one.
  • the corresponding LUT can be used to process the preview image to obtain the corresponding display effect or style. Therefore, the mobile phone can recognize the first image and determine the shooting scene corresponding to the first image (ie, the first scene). Then, the mobile phone can determine the first LUT based on the first scene.
  • the shooting scene can be a character scene, a travel scene, a food scene, a landscape scene, a pet scene or a still life scene, etc.
  • the objects captured in the images collected under different shooting scenarios are different.
  • images collected in a people scene may include images of people
  • images collected in a food scene may include images of food. Therefore, in this embodiment of the present application, the mobile phone can identify the shooting object included in the first image to determine the shooting scene corresponding to the first image.
  • the mobile phone can use a preconfigured image shooting scene detection algorithm to identify the first image, so as to identify the shooting scene corresponding to the first image (ie, the first shooting scene). For example, assume that the first image is the first image 602 shown in (a) in FIG. 6 . The mobile phone recognizes the first image 602 and can recognize that the shooting scene (ie, the first scene) corresponding to the first image 602 is a human scene. In this way, the mobile phone can determine the third LUT corresponding to the character scene as the first LUT.
  • S503 may include S503a.
  • S503a The mobile phone determines the third LUT corresponding to the first scene among the plurality of third LUTs as the first LUT of the T-th frame image (ie, the first image).
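Under this reading, S503a reduces to a table lookup from the detected first scene to its preset LUT. A minimal sketch follows; the scene names, LUT identifiers, and the fallback behavior are illustrative assumptions, not taken from the patent:

```python
# Hedged sketch of S503a: select the preset (third) LUT matching the detected scene.
# Scene names and LUT identifiers below are placeholders.
PRESET_LUTS = {
    "people":    "third_LUT_1",
    "food":      "third_LUT_2",
    "landscape": "third_LUT_3",
}
DEFAULT_LUT = "third_LUT_1"  # assumed fallback when no scene is recognized

def select_first_lut(detected_scene):
    """Return the third LUT corresponding to the first scene as the first LUT."""
    return PRESET_LUTS.get(detected_scene, DEFAULT_LUT)
```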
  • the mobile phone can perform scene detection on the first image 602 and identify the first scene (such as a character scene) corresponding to the first image 602 . Then, the mobile phone can perform LUT selection (i.e., LUT Select) to select the first LUT corresponding to the character scene from multiple third LUTs (such as third LUT 1, third LUT 2, and third LUT 3, etc.) .
  • when determining the final LUT, the mobile phone may refer not only to the current frame image (i.e., the first image), but also to the final LUT of the previous frame image of the first image.
  • the smooth transition of the display effects or styles corresponding to different LUTs can be achieved, the display effect of the multi-frame preview image presented by the electronic device can be optimized, and the user's visual experience during the photo or video recording process can be improved.
  • S503 may include S503A-S503B.
  • S503A The mobile phone determines the third LUT corresponding to the first scene among the plurality of third LUTs as the fourth LUT of the first image.
  • S503B The mobile phone calculates the weighted sum of the fourth LUT of the first image and the first LUT of the fifth image to obtain the first LUT.
  • the fifth image is the previous frame image of the first image.
  • for the first frame of image captured by the mobile phone during this shooting process, the first LUT of its previous frame (i.e., the 0th frame image) is the preset LUT.
  • the camera of the mobile phone can collect images in real time and output each frame of image collected. For example, if the first image is the second frame of image collected by the mobile phone, then the fifth image is the first frame of image collected by the mobile phone. If the first image is the T-th frame image collected by the mobile phone, then the fifth image is the T-1th frame image collected by the mobile phone, T ⁇ 2, and T is an integer.
  • the mobile phone can use the first weighting coefficient P 1 and the second weighting coefficient P 2 to calculate the weighted sum of the fourth LUT of the T-th frame image (i.e., the first image) and the first LUT of the T-1-th frame image (i.e., the fifth image), to obtain the first LUT of the T-th frame image (i.e., the first image).
  • the first weighting coefficient P 1 and the second weighting coefficient P 2 may also be collectively referred to as time domain smoothing weights.
  • the first weighting coefficient P 1 is the weighting coefficient of the fourth LUT of the T-th frame image
  • the second weighting coefficient P 2 is the weighting coefficient of the first LUT of the T-1th frame image.
  • the above-mentioned first weighting coefficient P 1 and second weighting coefficient P 2 can be preset in the mobile phone.
  • the fourth LUT of the T-th frame image can be recorded as Q (T, 2)
  • the first LUT of the T-1th frame image can be recorded as Q (T-1, 3 )
  • the first LUT of the T-th frame image can be recorded as Q (T, 3) .
  • the first LUT of the 0th frame image is the default LUT.
  • Q (0, 3) is a preset value.
  • the mobile phone can use the following formula (1) to calculate the first LUT of the T-th frame image, such as Q (T, 3) : Q (T, 3) = P 1 × Q (T, 2) + P 2 × Q (T-1, 3) …… Formula (1)
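Treating each LUT as a flattened table of output values, the weighted sum described above can be sketched as an entrywise blend. The table entries and the weight values 0.7/0.3 below are illustrative only:

```python
def blend_luts(lut_current, lut_previous, p1, p2):
    """Entrywise weighted sum of two flattened LUTs.

    lut_current:  fourth LUT of the T-th frame image, Q(T, 2)
    lut_previous: first LUT of the (T-1)-th frame image, Q(T-1, 3)
    p1, p2:       first and second weighting coefficients (time domain smoothing weights)
    """
    assert len(lut_current) == len(lut_previous)
    return [p1 * a + p2 * b for a, b in zip(lut_current, lut_previous)]

# Hypothetical flattened LUT entries and weights:
q_t2 = [0.0, 100.0, 200.0]    # fourth LUT of frame T
q_prev = [50.0, 50.0, 50.0]   # first LUT of frame T-1
q_t3 = blend_luts(q_t2, q_prev, 0.7, 0.3)  # first LUT of frame T, ~[15.0, 85.0, 155.0]
```

A larger p2 pulls the result toward the previous frame's final LUT, which is what makes the style transition across frames smooth.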
  • the first image of the T-th frame is the first image 602 shown in (a) in Figure 6 as an example.
  • a schematic diagram of the method by which the mobile phone executes S502-S503 (including S503A-S503B) to determine the first LUT.
  • the mobile phone can perform scene detection on the first image 602 and identify the first scene (such as a character scene) corresponding to the first image 602 . Then, the mobile phone can perform LUT selection (i.e., LUT Select) to select the fourth LUT corresponding to the character scene from multiple third LUTs (such as third LUT 1, third LUT 2, and third LUT 3). . Finally, the mobile phone can perform a weighted sum (Blending) of the fourth LUT of the T-th frame image (i.e., the first image) and the first LUT of the T-1-th frame image (i.e., the fifth image) to obtain the T-th frame image. The first LUT.
  • the weighting coefficients of the fourth LUT of the T-th frame image (ie, the first image) and the first LUT of the T-1-th frame image (ie, the fifth image) may be set by the user.
  • the above preview interface (such as preview interface 601, preview interface 801 or preview interface 1003) may also include a first preset control.
  • the first preset control is used to trigger the mobile phone to set the weights of the fourth LUT of the T-th frame image and the first LUT of the T-1-th frame image, that is, the above-mentioned first weighting coefficient and the second weighting coefficient.
  • the preview interface 701 may include a first preset control 703, which is used to trigger the mobile phone to set the weighting coefficients of the fourth LUT of the T-th frame image and the first LUT of the T-1-th frame image.
  • the preview interface 701 also includes a first image 702.
  • the method in the embodiment of the present application may also include S503' and S503''.
  • the mobile phone displays the first setting item and the second setting item in response to the user's click operation on the first preset control.
  • the first setting item is used to set the first weighting coefficient of the fourth LUT of the T-th frame image
  • the second setting item is used to set the second weighting coefficient of the first LUT of the T-1th frame image
  • the mobile phone may display the preview interface 704 shown in (b) in Figure 7E.
  • the preview interface 704 includes a first preset control 705, a first image 706, a first setting item 707 and a second setting item 708.
  • the first setting item 707 is used to set the first weighting coefficient of the fourth LUT of the T-th frame image.
  • the second setting item 708 is used to set the second weighting coefficient of the first LUT of the T-1th frame image.
  • the first preset control 705 and the first preset control 703 are in different states. For example, the first preset control 705 is in the on state, and the first preset control 703 is in the off state.
  • the above-mentioned preview interface may include the above-mentioned first preset control, or may not include the above-mentioned first preset control.
  • the mobile phone can receive the first preset operation input by the user on the preview interface.
  • the above S503' can be replaced with: the mobile phone displays the first setting item and the second setting item on the preview interface in response to the user's first preset operation on the preview interface.
  • the first preset operation may be any preset gesture such as an L-shaped gesture, an S-shaped gesture, or a ⁇ -shaped gesture input by the user on the display screen of the mobile phone (such as a touch screen).
  • the first preset operation may be the user's click operation on the first physical button of the mobile phone.
  • the first physical button may be a physical button in the mobile phone, or a combination of at least two physical buttons.
  • the mobile phone uses the first weighting coefficient set by the user as the weighting coefficient of the fourth LUT of the T-th frame image, and uses the second weighting coefficient set by the user as the weighting coefficient of the first LUT of the T-1-th frame image.
  • the first weighting coefficient and the second weighting coefficient may be collectively referred to as time domain smoothing weights.
  • the mobile phone uses the weighting coefficient set by the user to obtain the first LUT of the T-th frame image.
  • the mobile phone can also display the display effect processed by the first LUT of the T-th frame image after the user adjusts the first weighting coefficient and the second weighting coefficient.
  • the first weighting coefficients corresponding to the first setting items 713 shown in (b) are all different.
  • the second weighting coefficients corresponding to the second setting items 714 shown in (b) are all different. Therefore, the preview image 706 shown in (b) in FIG. 7E, the preview image 709 shown in (a) in FIG. 7F, and the preview image 712 shown in (b) in FIG. 7F all have different display effects. In this way, the user can set the appropriate weighting coefficient according to the adjusted display effect.
  • 715 shown in (c) in FIG. 7F is the image after LUT processing determined by using the weight (ie, weighting coefficient) shown in (b) in FIG. 7F .
  • the fourth LUT of the T-th frame image can be denoted as Q (T, 2)
  • the first LUT of the T-1th frame image can be denoted as Q (T-1, 3)
  • the first LUT of the T-th frame image can be denoted as Q (T, 3) .
  • the first LUT of the 0th frame image is the default LUT.
  • Q (0, 3) is a preset value.
  • the mobile phone can use the following formula (2) to calculate the first LUT of the T-th frame image, such as Q (T, 3) , where P 1 and P 2 are the first weighting coefficient and the second weighting coefficient set by the user: Q (T, 3) = P 1 × Q (T, 2) + P 2 × Q (T-1, 3) …… Formula (2)
  • in this way, the transition effect across multiple frames of the second image is smoother.
  • the mobile phone processes the first image according to the first LUT to obtain a second image, and displays the second image.
  • the display effect of the second image corresponds to the first LUT of the first image.
  • the first image is the first image 602 shown in (a) in FIG. 6 .
  • the second image 604 shown in (b) in Figure 6 can be obtained, and the preview interface 603 shown in (b) in Figure 6 is displayed.
  • the preview interface 603 includes a second image 604 obtained by processing the first LUT of the T-th frame image.
  • the display effect of the image without LUT processing is different from the display effect of the image processed by LUT.
  • the first image 602 shown in (a) in Figure 6 has not been processed by a LUT, while the second image 604 shown in (b) in Figure 6 has been processed by a LUT; therefore, the display effect of the first image 602 is different from the display effect of the second image 604.
  • the "display effect" of an image described in the embodiments of this application refers to the image effect that can be observed by human eyes after the image is displayed on the display screen.
  • the mobile phone can save the second image 604 and display the preview interface 605 for taking photos shown in (c) of Figure 6.
  • the photographed preview interface 605 includes a preview image 606 .
  • the embodiment of this application introduces S504 here in conjunction with Figure 7D.
  • the mobile phone can use the time domain smoothing weights shown in Figure 7D (including the above-mentioned first weighting coefficient and second weighting coefficient) to calculate the weighted sum of the fourth LUT of the T-th frame image and the first LUT of the T-1-th frame image, to obtain the first LUT of the T-th frame shown in Figure 7D.
  • the mobile phone can use the first LUT of the T-th frame shown in Figure 7D to perform image processing on the preview image collected by the camera to obtain the second image 604 shown in Figure 7D.
  • the first image is the first image 802 shown in (a) in FIG. 8 .
  • the second image 804 shown in (b) in Figure 8 can be obtained, and the preview interface 803 shown in (b) in Figure 8 can be displayed.
  • the preview interface 803 includes a second image 804 obtained by processing the first LUT of the T-th frame image.
  • the display effect of the second image 804 shown in (b) of FIG. 8 is different from the display effect of the first image 802 shown in (a) of FIG. 8 .
  • the viewfinder interface of the mobile phone's camera may change significantly.
  • the user may move the phone to change the viewing content of the phone.
  • the user may switch the front and rear cameras of the mobile phone to change the viewing content of the mobile phone. If the viewing content of the mobile phone changes significantly, if this solution is implemented, the display effect/style of the mobile phone may change with the change of the viewing content.
  • the mobile phone can collect a third image.
  • the third image is an image collected by the camera of the mobile phone.
  • the third image includes the second photographed object; the mobile phone determines the second scene corresponding to the third image, where the second scene is used to identify the scene corresponding to the second photographed object; the mobile phone determines the second LUT according to the second scene; the mobile phone processes the third image according to the second LUT to obtain a fourth image, and displays the fourth image.
  • the display effect of the fourth image corresponds to the second LUT.
  • the mobile phone can switch to use the rear camera to collect images.
  • the mobile phone can display the viewfinder interface 901 of the video recording shown in (a) of Figure 9 .
  • the video framing interface 901 includes a preview image (which can be used as a fourth image) 902 .
  • the preview image 902 as the fourth image may be obtained by processing the third image collected by the camera. Since the image content of the preview image 902 has changed significantly compared with the preview image 804, the shooting scenes of the preview image 902 and the preview image 804 may also have changed greatly.
  • the shooting scene of the preview image 804 is a human scene (ie, the first scene), and the shooting scene of the preview image 902 may be a food scene (ie, the second scene).
  • the phone can automatically adjust the LUT.
  • the mobile phone may display the video framing interface 903 shown in (b) of FIG. 9 .
  • the video framing interface 903 includes a preview image (which can be used as a fourth image) 904 .
  • the preview image 904 (which can be used as the fourth image) and the preview image 902 have different shooting scenes, and the LUT used when processing the preview image 904 is different from the LUT used when processing the preview image 902; therefore, the display effect of the preview image 904 is different from the display effect of the preview image 902.
  • the first image is the first image 1004 in the preview interface 1003 shown in (b) of Figure 10 .
  • the second image 1006 shown in (c) in Figure 10 can be obtained, and the preview interface 1005 shown in (c) in Figure 10 can be displayed.
  • the preview interface 1005 includes a second image 1006 obtained by processing the first LUT of the T-th frame image.
  • the display effect of the second image 1006 is different from the display effect of the first image 1004.
  • the mobile phone can determine the scene corresponding to the first frame of the image collected by the camera (ie, the first scene). Then, the mobile phone can determine the first LUT corresponding to the first scene. Finally, the mobile phone can use the first LUT of this frame of image, perform image processing on the first image to obtain a second image, and display the second image.
  • the display effect of the second image is the same as the display effect corresponding to the first LUT.
  • the mobile phone can dynamically adjust the LUT according to each frame of image periodically acquired by the mobile phone.
  • the display effects or styles corresponding to different LUTs can be presented, which can enrich the display effects obtained by taking pictures or recording videos.
  • when the mobile phone determines the final LUT, it refers not only to the current frame of image, but also to the final LUT of the previous frame of image.
  • the display effect of the multi-frame preview image presented by the mobile phone can be optimized, and the user's visual experience during taking pictures or recording videos can be improved.
  • the images collected by the camera may not only include images of one shooting scene, but may include images of multiple shooting scenes (called complex shooting scenes).
  • the preview image 902 includes images of people, images of food, and images of buildings.
  • when the mobile phone executes the method shown in S503, it can only use the third LUT corresponding to the first scene of the first image as the first LUT; or, it can only use that third LUT as the fourth LUT to determine the first LUT.
  • in other words, the first LUT only refers to the third LUT corresponding to the first scene of the first image, and does not refer to the third LUTs corresponding to the other shooting scenes in the complex shooting scene besides the first scene. In this way, the display effect of the mobile phone may be affected.
  • the mobile phone can use the T-th frame image (i.e., the first image) as the input of the preset AI model (such as the preset AI model a), and run the preset AI model to obtain the weights of the above-mentioned plurality of third LUTs. Then, the mobile phone can calculate the weighted sum of the plurality of third LUTs to obtain the first LUT.
  • the above-mentioned S502-S503 can be replaced by S1101-S1102.
  • the mobile phone takes the T-th frame image (i.e., the first image) as input, runs the preset AI model a, and obtains multiple third weighting coefficients of multiple third LUTs.
  • the sum of the plurality of third weighting coefficients is 1, and the plurality of third LUTs correspond to the plurality of third weighting coefficients on a one-to-one basis.
  • the above-mentioned preset AI model a may be a neural network model used for LUT weight learning.
  • the preset AI model a can be any of the following neural network models: VGG-net, Resnet, and Lenet.
  • the training process of the preset AI model a may include Sa and Sb.
  • the mobile phone obtains multiple sets of data pairs.
  • Each set of data pairs includes a sixth image and a seventh image.
  • the sixth image is an image that satisfies the preset conditions and is obtained by processing the seventh image.
  • the preset condition may be: the processed display effect (also called a display effect) satisfies a preset standard display effect.
  • the above-mentioned sixth image is equivalent to the standard image
  • the seventh image is the unprocessed original image.
  • the above-mentioned sixth image may be obtained by performing Photoshop (PS) processing on the seventh image.
  • the above-mentioned multiple sets of data pairs may include multiple data pairs in different shooting scenarios.
  • the mobile phone uses the seventh image and the sixth image as input samples to train the preset AI model a, so that the preset AI model a has the ability to determine which weights to use to calculate the weighted sum of multiple third LUTs, such that the seventh image processed by the resulting LUT achieves the display effect of the sixth image.
  • the preset AI model a can repeatedly perform the following operations (1)-(2); when the eighth image obtained by the preset AI model a processing the seventh image reaches the display effect of the sixth image, it means that the preset AI model a has the above capabilities.
  • Operation (1) The seventh image is used as input (Input), and the preset AI model a uses the weights of multiple third LUTs to process the seventh image (Input) to obtain the eighth image (Output).
  • the weight used is the default weight.
  • the default weight includes multiple default weighting coefficients. Multiple default weighting coefficients correspond to multiple third LUTs one-to-one. The multiple default weighting coefficients are pre-configured in the mobile phone.
  • Operation (2) The preset AI model a uses the gradient descent method, compares the eighth image (Output) with the sixth image (i.e., standard image), and updates the weights in operation (1).
  • the multiple default weighting coefficients mentioned above may be the same.
  • in this way, the preset AI model a will gradually adjust the weights of the multiple third LUTs and learn to determine which weights to use for the weighted sum of the multiple third LUTs, such that the seventh image processed by the resulting LUT can achieve the display effect of the sixth image.
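Because the blended output is linear in the LUT weights, operations (1) and (2) can be mimicked with ordinary gradient descent on a squared-error loss between the eighth image (Output) and the sixth image (standard image). The sketch below is a simplified stand-in for the neural-network training described above: all data values are hypothetical, and the real preset AI model a would also normalize the learned weights to sum to 1.

```python
# Simplified stand-in for training preset AI model a (hypothetical data).
# lut_outputs[m][i] is pixel i of the seventh image processed by the m-th third LUT;
# target[i] is pixel i of the sixth (standard) image.

def train_weights(lut_outputs, target, lr=0.01, steps=2000):
    m, n = len(lut_outputs), len(target)
    weights = [1.0 / m] * m  # default weights, all equal, as in the text
    for _ in range(steps):
        # Operation (1): blend the per-LUT outputs into the eighth image.
        output = [sum(weights[k] * lut_outputs[k][i] for k in range(m))
                  for i in range(n)]
        # Operation (2): compare with the standard image and update the weights
        # by gradient descent on loss = sum((output - target)^2).
        grads = [sum(2.0 * (output[i] - target[i]) * lut_outputs[k][i]
                     for i in range(n)) for k in range(m)]
        weights = [w - lr * g for w, g in zip(weights, grads)]
    return weights

# Target built as 0.5*LUT1 + 0.3*LUT2 + 0.2*LUT3, so training should recover ~these weights.
lut_outputs = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
target = [0.5, 0.3, 0.2]
learned = train_weights(lut_outputs, target)
```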
  • the mobile phone uses multiple third weighting coefficients to calculate the weighted sum of multiple third LUTs to obtain the first LUT of the T-th frame image.
  • the T-th frame image (i.e., the first image) is the first image 902 shown in (a) in FIG. 9 as an example.
  • the mobile phone executes S1101-S1102 to determine the first LUT of the T-th frame image, and executes S504 to obtain the second image.
  • the mobile phone can execute S1101, take the first image 902 as input, and run the preset AI model a shown in Figure 11B to obtain a plurality of third weighting coefficients shown in Figure 11B, wherein the sum of the plurality of third weighting coefficients is 1, and the plurality of third LUTs correspond to the plurality of third weighting coefficients one-to-one.
  • the preset AI model a shown in Figure 11B outputs M third weighting coefficients, M ⁇ 2, and M is an integer.
  • the third weighting coefficient corresponding to the third LUT 1 (i.e., the preset LUT 1) is K (T, 1) ; the third weighting coefficient corresponding to the third LUT 2 (i.e., the preset LUT 2) is K (T, 2) ; the third weighting coefficient corresponding to the third LUT 3 (i.e., the preset LUT 3) is K (T, 3) ; …; and the third weighting coefficient corresponding to the third LUT M (i.e., the preset LUT M) is K (T, M) .
  • the mobile phone can execute S1102, using the plurality of third weighting coefficients mentioned above and calculating the weighted sum of the M third LUTs according to the following formula (4) to obtain the first LUT of the T-th frame image.
  • the first LUT of the T-th frame image can be recorded as Q (T, 3) , and the third LUT m can be recorded as Q (T, m, 1) : Q (T, 3) = K (T, 1) × Q (T, 1, 1) + K (T, 2) × Q (T, 2, 1) + … + K (T, M) × Q (T, M, 1) …… Formula (4)
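The weighted sum over the M third LUTs in S1102 can again be sketched entrywise, with the AI-predicted coefficients K(T, 1)…K(T, M) summing to 1. The flattened LUT tables and coefficient values below are illustrative only:

```python
def weighted_sum_luts(third_luts, coefficients):
    """First LUT of frame T as the entrywise sum of K(T, m) * (third LUT m) over m = 1..M."""
    assert abs(sum(coefficients) - 1.0) < 1e-9  # third weighting coefficients sum to 1
    length = len(third_luts[0])
    return [sum(k * lut[i] for k, lut in zip(coefficients, third_luts))
            for i in range(length)]

# Hypothetical flattened third LUTs (M = 3) and AI-predicted coefficients:
third_luts = [[10.0, 20.0], [30.0, 40.0], [50.0, 60.0]]
k_t = [0.5, 0.3, 0.2]
first_lut = weighted_sum_luts(third_luts, k_t)  # ~[24.0, 34.0]
```

Because every preset LUT contributes in proportion to its coefficient, a complex shooting scene (e.g., people plus food plus buildings) influences the final LUT through all of its constituent scenes, not just the dominant one.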
  • the mobile phone can execute S504, using the first LUT of the T-th frame image shown in Figure 11B to perform image processing on the first image 902 to obtain the second image 904 shown in Figure 11B.
  • the mobile phone determines the first LUT of the T-th frame image, not only referring to a third LUT corresponding to the first scene of the first image, but also referring to multiple third LUTs except The third LUT corresponding to other shooting scenes except the first scene. In this way, the display effect of the mobile phone can be improved.
  • the mobile phone when determining the final LUT, may not only refer to the current frame image (ie, the first image), but also refer to the final LUT of the previous frame image of the first image. In this way, during the process of changing the LUT, a smooth transition of display effects or styles corresponding to different LUTs can be achieved, the display effect of multi-frame preview images presented by electronic devices can be optimized, and the user's visual experience during taking pictures or recording videos can be improved.
  • S1102 may include: the mobile phone uses multiple third weighting coefficients to calculate the weighted sum of multiple third LUTs to obtain the fourth LUT of the T-th frame image; the mobile phone then calculates the weighted sum of the fourth LUT of the T-th frame image (i.e., the first image) and the first LUT of the T-1-th frame image (i.e., the fifth image) to obtain the first LUT of the T-th frame image.
  • FIG. 11C is a schematic diagram of the method in this embodiment by which the mobile phone performs S1101-S1102 to determine the first LUT of the T-th frame image, and by which the mobile phone performs S504 to obtain the second image.
  • the mobile phone can use the T-th frame image (i.e., the first image) and the scene detection result of the first image as input to the AI model (such as the preset AI model b), and run the AI model to obtain the weights of the above-mentioned plurality of third LUTs. Then, the mobile phone can calculate the weighted sum of the plurality of third LUTs to obtain the first LUT.
  • S503 can be replaced by S1201-S1202.
  • the mobile phone takes the indication information of the first scene and the first image (ie, the T-th frame image) as input, runs the preset AI model b, and obtains multiple third weighting coefficients of multiple third LUTs.
  • the sum of the plurality of third weighting coefficients is 1, and the plurality of third LUTs correspond to the plurality of third weighting coefficients on a one-to-one basis.
  • the above-mentioned preset AI model b may be a neural network model used for LUT weight learning.
  • the preset AI model b can be any of the following neural network models: VGG-net, Resnet, and Lenet.
  • the training process of the preset AI model b may include Si, Sii and Siii.
  • the mobile phone obtains multiple sets of data pairs.
  • Each set of data pairs includes a sixth image and a seventh image.
  • the sixth image is an image that satisfies the preset conditions obtained by processing the seventh image.
  • Si is the same as the above-mentioned Sa, and will not be described again in the embodiments of this application.
  • the mobile phone recognizes the seventh image and determines the third scene corresponding to the seventh image.
  • the method for the mobile phone to identify the seventh image and determine the third scene corresponding to the seventh image can refer to the method for the mobile phone to identify the first scene corresponding to the first image, which will not be described in detail here in the embodiments of the present application.
  • the mobile phone takes the seventh image, the sixth image, and the indication information identifying the third scene as input samples to train the preset AI model b, so that the preset AI model b has the ability to determine which weights to use to calculate the weighted sum of multiple third LUTs, such that the seventh image processed by the resulting LUT can achieve the display effect of the sixth image.
  • the input sample of the preset AI model b adds the indication information of the third scene corresponding to the seventh image.
  • the training principle of the preset AI model b is the same as the training principle of the above-mentioned preset AI model a. The difference is that the indication information of the third scene corresponding to the seventh image can more clearly indicate the shooting scene corresponding to the seventh image.
  • if the shooting scene of the seventh image is the third scene, it means that the possibility that the seventh image is an image of the third scene is relatively high. In this case, setting the weighting coefficient of the third LUT corresponding to the third scene to a larger value helps improve the display effect.
  • the instruction information of the third scene can play a guiding role in the training of the preset AI model b, and guide the preset AI model b to train in a direction tending to the third scene. In this way, the convergence of the preset AI model b can be accelerated and the number of training times of the preset AI model b can be reduced.
  • the mobile phone uses multiple third weighting coefficients to calculate the weighted sum of multiple third LUTs to obtain the first LUT of the T-th frame image (ie, the first image).
  • the embodiment of the present application takes the T-th frame image (i.e., the first image) as the first image 902 shown in (a) in Figure 9 as an example, to illustrate the method by which the mobile phone performs S1201-S1202 to determine the first LUT of the T-th frame image, and by which the mobile phone executes S504 to obtain the second image.
  • the mobile phone can execute S502 to perform scene detection on the T-th frame image (ie, the first image) 902 to obtain the first scene corresponding to the first image 902 shown in Figure 12B.
  • the mobile phone can execute S1201, taking the first image 902 and the indication information of the first scene as input, and running the preset AI model b shown in Figure 12B, to obtain a plurality of third weighting coefficients shown in Figure 12B.
  • the sum of the plurality of third weighting coefficients is 1, and the plurality of third LUTs correspond to the plurality of third weighting coefficients on a one-to-one basis.
  • the preset AI model b shown in FIG. 12B outputs M third weighting coefficients, where M ≥ 2 and M is an integer.
  • the mobile phone can execute S1202, use multiple third weighting coefficients to calculate the weighted sum of M third LUTs, and obtain the first LUT of the T-th frame image.
  • the mobile phone can execute S505, using the first LUT of the T-th frame shown in Figure 12B to perform image processing on the first image 902 to obtain the second image 904 shown in Figure 12B.
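The S1201-S1202 flow above (the preset AI model outputs M third weighting coefficients, the weighted sum of the M third LUTs gives the first LUT, and S505 applies it to the first image) can be sketched as follows. This is an illustrative sketch, not the patent's implementation: the LUT cube size, the nearest-lattice lookup, and the function names are all assumptions.

```python
import numpy as np

def blend_luts(third_luts, coefficients):
    """S1202: weighted sum of the M third LUTs using the M third
    weighting coefficients output by the preset AI model. Each LUT
    is an n x n x n x 3 array; the coefficients are assumed to sum
    to 1 (as the source states)."""
    assert len(third_luts) == len(coefficients)
    first_lut = np.zeros_like(third_luts[0], dtype=np.float64)
    for lut, w in zip(third_luts, coefficients):
        first_lut += w * np.asarray(lut, dtype=np.float64)
    return first_lut

def apply_lut(image, lut):
    """S505: apply the blended 3D LUT to an 8-bit RGB image.
    Nearest-lattice lookup is a simplification; a real pipeline
    would use trilinear interpolation."""
    n = lut.shape[0]
    idx = (image.astype(np.int64) * (n - 1)) // 255  # 0..255 -> 0..n-1
    return lut[idx[..., 0], idx[..., 1], idx[..., 2]]
```

In this sketch the output of `blend_luts` plays the role of the first LUT of the T-th frame image, and `apply_lut` produces the second image from the first image.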
  • for complex shooting scenes, when the mobile phone determines the first LUT of the T-th frame image, it refers not only to the third LUT corresponding to the first scene of the first image, but also to the third LUTs corresponding to shooting scenes other than the first scene among the multiple third LUTs. Moreover, when the mobile phone determines the plurality of third weighting coefficients, it also refers to the first image itself. In this way, the display effect of the mobile phone can be improved.
  • the mobile phone when determining the final LUT, may not only refer to the current frame of the image (i.e., the first image), but also refer to the final LUT of the previous frame of the first image. In this way, during the process of changing the LUT, a smooth transition of display effects or styles corresponding to different LUTs can be achieved, the display effect of multi-frame preview images presented by electronic devices can be optimized, and the user's visual experience during taking pictures or recording videos can be improved.
  • S1203 may include: the mobile phone uses the multiple third weighting coefficients to calculate the weighted sum of the multiple third LUTs to obtain the fourth LUT of the T-th frame image; the mobile phone then calculates the weighted sum of the fourth LUT of the T-th frame image (i.e., the first image) and the first LUT of the (T-1)-th frame image (i.e., the fifth image) to obtain the first LUT of the T-th frame image.
  • FIG. 12C is a schematic diagram of the method in this embodiment in which the mobile phone performs S1201-S1202 to determine the first LUT of the T-th frame image, and performs S504 to obtain the second image.
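The smoothing step just described (blending the current frame's fourth LUT with the previous frame's first LUT so that styles transition gradually) can be sketched as follows. The weight values 0.3/0.7 are illustrative placeholders, not values given in the source.

```python
import numpy as np

def smooth_lut(fourth_lut, prev_first_lut, w1=0.3, w2=0.7):
    """Blend the T-th frame's fourth LUT with the (T-1)-th frame's
    first LUT to get the T-th frame's first LUT. w1 + w2 is assumed
    to be 1; the smaller w1 is, the smoother the frame-to-frame
    transition. The defaults 0.3/0.7 are illustrative only."""
    assert abs(w1 + w2 - 1.0) < 1e-9
    return w1 * np.asarray(fourth_lut, dtype=float) + \
           w2 * np.asarray(prev_first_lut, dtype=float)

# For the 1st frame of a shooting session, prev_first_lut is a
# preset LUT (e.g., an identity LUT), which starts the recursion.
```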
  • the user can adjust at least one third weighting coefficient among the plurality of third weighting coefficients output by the above-mentioned preset AI model a or preset AI model b. That is to say, the mobile phone can receive the user's adjustment operation on the plurality of third weighting coefficients, and use the plurality of third weighting coefficients adjusted by the user to calculate the first LUT of the T-th frame image.
  • the method in the embodiment of the present application may also include S1301-S1302.
  • the above S1102 or S1202 can be replaced by S1303.
  • the mobile phone displays multiple third setting items in response to the user's click operation on the second preset control.
  • Each third setting item corresponds to a third LUT and is used to set the third weighting coefficient of the third LUT.
  • the above preview interface may also include a second preset control.
  • the second preset control is used to trigger the mobile phone to display a plurality of third setting items of the plurality of third weighting coefficients, so that the user can set the weights of the plurality of third LUTs through the plurality of third setting items.
  • the preview interface 1401 includes a second preset control 1402 .
  • the mobile phone can display a plurality of third setting items 1405 on the preview interface 1403, such as the "## style (such as character scene)" setting item, the "** style (such as food scene)" setting item, and the "&& style (such as architectural scene)" setting item, etc.
  • the method of the embodiment of the present application is introduced taking the case where the third setting item is the slider bar shown in (a) in FIG. 14A as an example. It can be seen from the above embodiments that each shooting style and shooting scene can correspond to a third LUT.
  • the mobile phone can set the weight (ie, weighting coefficient) corresponding to the third LUT through the above third setting item.
  • the display state of the second preset control 1402 changes.
  • the mobile phone may display the second preset control 1406 shown in (b) of Figure 14A.
  • the corresponding display state of the second preset control 1402 (such as the display state of black text on a white background) is used to indicate that the second preset control is in an off state.
  • the corresponding display state of the second preset control 1406 (such as the display state of white text on a black background) is used to indicate that the second preset control is in an on state.
  • the preview interface 1403 also includes a second image 1404.
  • the display effect of the second image 1404 is the display effect obtained by processing the first image with the fourth LUT of the T-th frame, which is obtained through weighted-sum calculation using the plurality of third weighting coefficients shown in the plurality of third setting items 1405.
  • the preview interface may include the second preset control, or may not include the second preset control.
  • the mobile phone may receive the second preset operation input by the user on the preview interface.
  • the above S1301 can be replaced with: the mobile phone displays a plurality of third setting items on the preview interface in response to the user's second preset operation on the preview interface.
  • the second preset operation may be any preset gesture such as an L-shaped gesture, an S-shaped gesture, or a ⁇ -shaped gesture input by the user on the display screen of the mobile phone (such as a touch screen).
  • the preset gesture corresponding to the second preset operation is different from the preset gesture corresponding to the first preset operation.
  • the second preset operation may be the user's click operation on the second physical button of the mobile phone.
  • the second physical button may be a physical button in the mobile phone, or a combination of at least two physical buttons.
  • the second physical button is different from the above-mentioned first physical button.
  • the mobile phone updates the corresponding third weighting coefficient in response to the user's setting operation on one or more third setting items among the plurality of third setting items.
  • the mobile phone may receive the user's setting operation on the plurality of third setting items 1405 shown in (b) of Figure 14A, and display the preview interface 1407 shown in (a) of Figure 14B.
  • the preview interface 1407 includes a plurality of third setting items 1409.
  • the plurality of third weighting coefficients shown in the plurality of third setting items 1409 are different from the plurality of third weighting coefficients shown in the plurality of third setting items 1405. That is to say, in response to the user's setting operation on the plurality of third setting items 1405, the mobile phone updates the plurality of third weighting coefficients from the third weighting coefficients indicated by the plurality of third setting items 1405 to the third weighting coefficients indicated by the plurality of third setting items 1409.
  • the preview interface 1407 also includes a second image 1408.
  • the display effect of the second image 1408 is the display effect obtained by processing the first image with the first LUT of the T-th frame, which is obtained through weighted-sum calculation using the plurality of third weighting coefficients shown in the plurality of third setting items 1409. Comparing (a) in Figure 14B with (b) in Figure 14A, it can be seen that the display effect of the second image 1408 is different from that of the second image 1404.
  • the mobile phone may receive the user's setting operation on the plurality of third setting items 1409 shown in (a) in Figure 14B, and display the preview interface 1410 shown in (b) in Figure 14B.
  • the preview interface 1410 includes a plurality of third setting items 1412.
  • the plurality of third weighting coefficients shown in the plurality of third setting items 1412 are different from the plurality of third weighting coefficients shown in the plurality of third setting items 1409. That is to say, in response to the user's setting operation on the plurality of third setting items 1409, the mobile phone updates the plurality of third weighting coefficients from the third weighting coefficients indicated by the plurality of third setting items 1409 to the third weighting coefficients indicated by the plurality of third setting items 1412.
  • the preview interface 1410 also includes a second image 1411.
  • the display effect of the second image 1411 is the display effect obtained by processing the first image with the first LUT of the T-th frame, which is obtained through weighted-sum calculation using the plurality of third weighting coefficients shown in the plurality of third setting items 1412. Comparing (b) in Figure 14B with (a) in Figure 14B, it can be seen that the display effect of the second image 1411 is different from that of the second image 1408.
  • the mobile phone may receive the user's setting operation for one or more third setting items among the plurality of third setting items.
  • it should be noted that the sum of the multiple third weighting coefficients after the update is not necessarily 1.
  • the user can adjust the plurality of third weighting coefficients in real time by adjusting any of the third setting items. Furthermore, the user can observe the display effect of the second image after adjusting the plurality of third weighting coefficients, and set appropriate weighting coefficients for the plurality of third LUTs.
  • the mobile phone may receive the user's click operation on the second preset control 1406 shown in (b) of Figure 14B.
  • the mobile phone can hide the plurality of third setting items and display the preview interface 1413 shown in (c) in Figure 14B.
  • the preview interface 1413 includes the second preset control 1402 and a second image 1414.
  • the mobile phone uses the updated plurality of third weighting coefficients to calculate the weighted sum of the plurality of third LUTs to obtain the first LUT of the T-th frame image (ie, the first image).
  • Figure 15A in this embodiment of the present application illustrates the method in which the mobile phone executes S1301-S1303 to determine the first LUT of the T-th frame image, and executes S504 to obtain the second image.
  • after the mobile phone performs S1101 or S1201 with the first image collected by the camera as input, it can obtain the multiple third weighting coefficients shown in Figure 15A, i.e., the multiple third weighting coefficients output by the preset AI model a or the preset AI model b.
  • the mobile phone can execute S1301-S1302, update the above-mentioned plurality of third weighting coefficients using the user-defined third weighting coefficient, and obtain a plurality of updated third weighting coefficients.
  • the mobile phone can execute S1303, use the updated plurality of third weighting coefficients, and calculate the weighted sum of M third LUTs according to the following formula (5) to obtain the first LUT of the T-th frame image.
  • in formula (5), the first LUT of the T-th frame image can be recorded as Q(T, 3), and the m-th third LUT can be recorded as Q(T, m, 1).
  • the mobile phone can execute S504, using the first LUT of the T-th frame image shown in Figure 15A to perform image processing on the first image to obtain the second image 1411 shown in Figure 15A.
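The S1301-S1303 flow above can be sketched as follows, assuming each third LUT is stored as an array. Formula (5) itself is not reproduced in this text, so the weighted sum below is a plausible reading of it (Q(T, 3) as the weighted sum of the Q(T, m, 1)); the override dictionary is an invented interface standing in for the third setting items.

```python
import numpy as np

def first_lut_from_user_weights(third_luts, model_weights, user_overrides):
    """S1301-S1303 sketch: start from the third weighting
    coefficients output by preset AI model a/b, replace any that
    the user adjusted through the third setting items, then take
    the weighted sum -- reading formula (5) as
    Q(T, 3) = sum over m of w_m * Q(T, m, 1).
    user_overrides maps {LUT index: new coefficient}. As the source
    notes, the updated coefficients need not sum to 1, so no
    renormalization is applied here."""
    weights = list(model_weights)
    for m, w in user_overrides.items():
        weights[m] = w
    q_t3 = np.zeros_like(third_luts[0], dtype=np.float64)
    for lut, w in zip(third_luts, weights):
        q_t3 += w * np.asarray(lut, dtype=np.float64)
    return q_t3, weights
```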
  • the mobile phone can not only determine the weighting coefficients of the multiple third LUTs through the preset AI model a or the preset AI model b, but can also provide the user with a service for adjusting the weighting coefficients of the multiple third LUTs. In this way, the mobile phone can calculate the fourth LUT of the T-th frame image based on the weighting coefficients adjusted by the user, and can take photos or videos that the user wants according to the user's needs, which can improve the user's shooting experience.
  • the mobile phone when determining the final LUT, may not only refer to the current frame image (ie, the first image), but also refer to the final LUT of the previous frame image of the first image. In this way, during the process of changing the LUT, a smooth transition of display effects or styles corresponding to different LUTs can be achieved, the display effect of multi-frame preview images presented by electronic devices can be optimized, and the user's visual experience during taking pictures or recording videos can be improved.
  • S1303 may include: the mobile phone uses the multiple third weighting coefficients to calculate the weighted sum of the multiple third LUTs to obtain the fourth LUT of the T-th frame image; the mobile phone then calculates the weighted sum of the fourth LUT of the T-th frame image (i.e., the first image) and the first LUT of the (T-1)-th frame image (i.e., the fifth image) to obtain the first LUT of the T-th frame image.
  • FIG. 15B is a schematic diagram of the method in this embodiment in which the mobile phone performs S1301-S1303 to determine the first LUT of the T-th frame image, and performs S504 to obtain the second image.
  • the user can add a new LUT to the mobile phone.
  • M third LUTs are preset in the mobile phone.
  • the mobile phone can add the (M+1)-th third LUT, the (M+2)-th third LUT, and so on, in response to the user's operation of adding a LUT.
  • the method in the embodiment of this application may also include S1601-S1603.
  • in response to the user's second preset operation, the mobile phone displays the third preset control.
  • the third preset control is used to trigger the mobile phone to add a new LUT (ie, the display effect corresponding to the LUT).
  • that is to say, in response to the above second preset operation, the mobile phone can display not only the plurality of third setting items but also the third preset control.
  • the mobile phone may display the preview interface 1601 shown in (a) in FIG. 16A.
  • the preview interface 1601 includes a first image 1602 and a third preset control 1603.
  • the third preset control 1603 is used to trigger the mobile phone to add a new LUT, that is, a display effect corresponding to the new LUT.
  • in response to the user's click operation on the third preset control, the mobile phone displays one or more fourth setting items. Each fourth setting item corresponds to a fifth LUT, and each fifth LUT corresponds to a shooting scene.
  • the display effect of the fifth LUT is different from that of the third LUT.
  • the mobile phone may display the preview interface 1604 shown in (b) in Figure 16A.
  • the preview interface 1604 includes one or more fourth setting items, such as “%% style” setting item, “@@ style” setting item, “& ⁇ style” setting item and “ ⁇ style” setting item, etc.
  • Each fourth setting item corresponds to a fifth LUT.
  • in response to the user's selection operation on any fourth setting item, the mobile phone saves the fifth LUT corresponding to the fourth setting item selected by the user.
  • the mobile phone may save the fifth LUT corresponding to the "@@style” setting item. That is to say, the fifth LUT corresponding to the "@@ style” setting item can be used as a third LUT for the mobile phone to execute S503 to determine the first LUT of the T-th frame image.
  • the mobile phone may display the preview interface 1605 shown in (c) of Figure 16A.
  • the preview interface 1605 shown in (c) of Figure 16A also includes a fourth setting item corresponding to "@@ style".
  • each of the above fourth setting items also includes a preview image processed using the corresponding fifth LUT, for presenting the display effect corresponding to the fifth LUT.
  • the "%% style" setting item, the "@@ style" setting item, the "& ⁇ style" setting item and the " ⁇ style" setting item all display a preview image processed with the corresponding fifth LUT.
  • the above-mentioned fifth LUT can be saved in the mobile phone in advance, but the fifth LUT is not used in the camera application of the mobile phone.
  • the fifth LUT selected by the user can be applied to the camera application of the mobile phone.
  • the fifth LUT corresponding to the "@@ style" setting item can be used as a third LUT for the mobile phone to execute S503 to determine the first LUT of the T-th frame image.
  • the mobile phone may not provide the above-mentioned plurality of fifth LUTs for the user to select; instead, the user sets the required LUT himself.
  • in that case, in response to the user's click operation on the third preset control, the mobile phone may display the fourth interface.
  • the fourth interface includes three adjustment options for RGB LUT parameters, and the three adjustment options are used to set the newly added LUT.
  • in response to the user's click operation on the third preset control 1603 shown in (a) in Figure 16A, the mobile phone may display the fourth interface 1607 shown in (a) in Figure 16B.
  • the fourth interface 1607 includes three adjustment options 1608.
  • the mobile phone can receive the user's adjustment operations on the three adjustment options 1608, and in response to the user's adjustment operations, save the new LUT set by the user.
  • the mobile phone may receive the user's adjustment operations on three adjustment options 1608 and display the fourth interface 1609 shown in (b) in Figure 16B.
  • the fourth interface 1609 includes three adjustment options 1610 .
  • the LUT corresponding to the three adjustment options 1610 is different from the LUT corresponding to the three adjustment options 1608 .
  • the mobile phone can save the LUTs corresponding to the three adjustment options 1610 (ie, newly added LUTs).
  • a LUT, also called a 3D LUT, is a relatively complex three-dimensional lookup table.
  • the setting of a LUT involves the adjustment of many parameters (such as brightness and color). It is difficult to refine every parameter of the LUT through manual settings. Therefore, in the embodiment of the present application, global adjustment can be used to provide the user with the function of adding a new LUT. That is to say, the above-mentioned three adjustment options 1608 and three adjustment options 1610 of the RGB LUT parameters are LUT adjustment options that support global adjustment.
  • first, an initial LUT can be initialized.
  • the cube of this initial LUT is an identity mapping, i.e., the output value is exactly the same as the input value.
  • Table 2 shows an initial LUT.
  • the output value of the initial LUT shown in Table 2 is exactly the same as the input value, both being (10, 20, 30).
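An initial identity LUT such as the one in Table 2 could be constructed as follows; this is a hedged sketch, and the cube size n=17 is an assumption (the source does not state one).

```python
import numpy as np

def identity_lut(n=17):
    """Build an initial identity 3D LUT cube whose output equals
    its input, e.g. input (10, 20, 30) -> output (10, 20, 30) as
    in Table 2. The cube size n=17 is a common choice and an
    assumption here."""
    axis = np.linspace(0.0, 255.0, n)
    r, g, b = np.meshgrid(axis, axis, axis, indexing="ij")
    return np.stack([r, g, b], axis=-1)  # shape (n, n, n, 3)
```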
  • the values of the progress bar of the three adjustment options of the LUT can be normalized. For example, "0"-"+100" can be normalized to [1.1, 10.0], and "-100"-"0" can be normalized to [0.0, 1.0].
  • the normalized value can be used as the color channel coefficient (such as expressed by Rgain, Ggain, Bgain), and multiplied by the input value of the initial LUT, the output value of the new LUT can be obtained.
  • the new LUT shown in Table 3 can be obtained from the initial LUT shown in Table 2.
  • assume that the RGB value of a pixel in the original image 1611 shown in (a) in Figure 16B is (10, 20, 30), and that the values of the progress bars corresponding to the three adjustment options 1610 shown in (b) in FIG. 16B set by the user are (45, 30, 65).
  • after normalization, the mobile phone can calculate the product of the RGB value (10, 20, 30) and the corresponding gain values (5.0, 3.7, 5.8) to obtain the RGB output value (50, 74, 174) of the newly added LUT shown in Table 4.
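The normalization and gain multiplication described above can be sketched as follows. The linear interpolation inside each slider range is an assumption — the source gives only the endpoint ranges, and its example gains (5.0, 3.7, 5.8) suggest the exact curve may differ — but the multiplication step does reproduce the worked example (10, 20, 30) × (5.0, 3.7, 5.8) → (50, 74, 174).

```python
import numpy as np

def slider_to_gain(v):
    """Map a slider value in [-100, 100] to a color-channel gain
    (Rgain/Ggain/Bgain). The source normalizes "0".."+100" to
    [1.1, 10.0] and "-100".."0" to [0.0, 1.0]; linear
    interpolation inside each range is an assumption."""
    if v >= 0:
        return 1.1 + (v / 100.0) * (10.0 - 1.1)
    return (v + 100.0) / 100.0

def apply_gains(lut_inputs, gains):
    """Multiply the initial LUT's input values by the per-channel
    gains to get the new LUT's output values, clipped to the 8-bit
    range. With gains (5.0, 3.7, 5.8), input (10, 20, 30) yields
    (50, 74, 174) as in Table 4."""
    out = np.asarray(lut_inputs, dtype=float) * np.asarray(gains, dtype=float)
    return np.clip(out, 0, 255)
```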
  • the above-mentioned fourth interface may also include more user setting items, such as a brightness coefficient slider bar, a dark-area brightness coefficient/bright-area brightness coefficient slider bar, per-channel grayscale curve adjustment, and so on.
  • the mobile phone can also perform S1601-S1603 as shown in Figure 17A or Figure 17B, adding a fifth LUT to the multiple third LUTs in response to the user's operation of adding a LUT.
  • the mobile phone can also add a new LUT to the mobile phone in response to the user's operation.
  • the new LUT is set by the user according to his or her own needs, and the new LUT is highly consistent with the user's shooting needs.
  • the mobile phone uses this new LUT to process the images collected by the camera, and can take photos or videos with high user satisfaction, which can improve the user's shooting experience.
  • the method of the embodiment of the present application can be applied to a scenario in which a mobile phone performs image processing on photos or videos in the mobile phone gallery (or album) (referred to as: post-shooting image processing scenario).
  • in response to the user's preset operation on any photo in the album, the mobile phone can execute S501-S504 to obtain and display the second image.
  • the mobile phone may display the album list interface 1801 shown in (a) of FIG. 18A , where the album list interface 1801 includes preview items of multiple photos.
  • in some embodiments, in response to the user's click operation on the preview item 1802 of the "little girl" photo (equivalent to the first image) in the album list interface 1801, the mobile phone can directly display the "little girl" photo (equivalent to the first image) corresponding to the preview item 1802.
  • in other embodiments, in response to the user's click operation on the preview item 1802 of the "little girl" photo (equivalent to the first image), the mobile phone can perform S501-S504 to obtain and display the photo details page shown in (b) in Figure 18A.
  • the photo details page shown in (b) of FIG. 18A includes not only the second image 1803 but also an edit button 1804 .
  • the edit button 1804 is used to trigger the mobile phone to edit the second image 1803 .
  • the user can trigger the mobile phone to execute S501-S504 in the editing interface of a photo to obtain and display the second image.
  • the mobile phone may display the details page of the photo 1805 (ie, the first image) shown in (a) in FIG. 18B.
  • the mobile phone may display the editing interface 1807 shown in (b) in FIG. 18B.
  • the editing interface 1807 includes a "Smart AI" button 1808, a "Crop" button, a "Filter" button and an "Adjust" button.
  • the "Smart AI" button 1808 is used to trigger the mobile phone to adjust the LUT of the first image.
  • the “Crop” button is used to trigger the phone to crop the first image.
  • the “Filter” button is used to trigger the phone to add a filter effect to the first image.
  • the “Adjust” button is used to trigger the mobile phone to adjust parameters such as contrast, saturation and brightness of the first image.
  • the mobile phone may execute S501-S504 to obtain and display the second image 1811 shown in (c) in Figure 18B.
  • the editing interface shown in (c) in FIG. 18B includes not only the second image 1811 but also a save button 1810.
  • the save button 1810 is used to trigger the mobile phone to save the second image 1811.
  • in response to the user's click operation on the save button 1810, the mobile phone can save the second image 1811 and display the photo details page of the second image 1811 shown in Figure 18C.
  • the method in which the mobile phone performs image processing on the videos in the mobile phone gallery is similar to the method in which the mobile phone performs image processing on the photos in the mobile phone gallery, and will not be described in detail here in the embodiments of the present application. The difference is that the mobile phone needs to process every frame of the video.
  • Embodiments of the present application provide an electronic device, which may include a display screen (such as a touch screen), a camera, a memory, and one or more processors.
  • the display, camera, memory and processor are coupled.
  • the memory is used to store computer program code, which includes computer instructions.
  • when the processor executes the computer instructions, the electronic device can perform each function or step performed by the mobile phone in the above method embodiments.
  • the structure of the electronic device may refer to the structure of the electronic device 400 shown in FIG. 4 .
  • the chip system 1900 includes at least one processor 1901 and at least one interface circuit 1902.
  • the interface circuit 1902 may be used to receive signals from other devices, such as the memory of an electronic device.
  • interface circuit 1902 may be used to send signals to other devices (eg, processor 1901).
  • the interface circuit 1902 can read instructions stored in the memory and send the instructions to the processor 1901.
  • when the instructions are executed by the processor 1901, the electronic device can be caused to perform the various steps performed by the mobile phone in the above embodiments.
  • the chip system may also include other discrete devices, which are not specifically limited in the embodiments of this application.
  • Embodiments of the present application also provide a computer storage medium that includes computer instructions.
  • when the computer instructions are run on an electronic device, the electronic device is caused to perform the various functions or steps performed by the mobile phone in the above method embodiments.
  • Embodiments of the present application also provide a computer program product.
  • when the computer program product is run on a computer, the computer is caused to perform the various functions or steps performed by the mobile phone in the above method embodiments.
  • the disclosed devices and methods can be implemented in other ways.
  • the device embodiments described above are only illustrative.
  • the division of modules or units is only a logical function division.
  • in actual implementation, there may be other division methods. For example, multiple units or components may be combined or integrated into another device, or some features may be omitted or not implemented.
  • the coupling or direct coupling or communication connection between each other shown or discussed may be through some interfaces, and the indirect coupling or communication connection of the devices or units may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated.
  • the components shown as units may be one physical unit or multiple physical units; that is, they may be located in one place, or they may be distributed to multiple different places. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in various embodiments of the present application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
  • the above integrated units can be implemented in the form of hardware or software functional units.
  • if the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a readable storage medium.
  • the technical solutions of the embodiments of the present application, in essence, or the part that contributes to the existing technology, or all or part of the technical solutions, can be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions to cause a device (which may be a microcontroller, a chip, etc.) or a processor to execute all or part of the steps of the methods described in the various embodiments of this application.
  • the aforementioned storage media include: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or other media that can store program code.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)

Abstract

This application discloses an image processing method and an electronic device, relating to the technical field of photography, and can dynamically adjust the LUT during photographing or video recording to enrich the display effects of photographing or video recording. The electronic device acquires a first image, where the first image is an image collected by a camera of the electronic device and includes a first photographic object; the electronic device determines a first scene corresponding to the first image, where the first scene is used to identify the scene corresponding to the first photographic object; the electronic device determines a first LUT according to the first scene; the electronic device processes the first image according to the first LUT to obtain a second image, and displays the second image, where the display effect of the second image corresponds to the first LUT.

Description

An image processing method and electronic device
This application claims priority to the Chinese patent application filed with the State Intellectual Property Office on July 31, 2021, with application number 202110877402.X and the invention title "An image processing method and electronic device", the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the technical field of photography, and in particular, to an image processing method and an electronic device.
Background Art
Existing mobile phones generally have photographing and video recording functions, and more and more people use mobile phones to take photos and videos to record the details of daily life. At present, when a mobile phone shoots (e.g., takes photos or records video), it can only process the preview image using a color look-up table (Look Up Table, LUT) pre-configured before shooting, a LUT selected by the user, or a LUT determined by recognizing the preview image. As a result, the mobile phone can only capture photos or videos with the style or display effect corresponding to the pre-configured or selected parameters, and the style or display effect of the photos or videos taken by the mobile phone is monotonous.
Summary of the Invention
This application provides an image processing method and an electronic device, which can dynamically adjust the LUT during photographing or video recording and enrich the resulting display effects.
In a first aspect, this application provides an image processing method. In this method, the electronic device may acquire a first image. The first image is an image collected by a camera of the electronic device, and the first image includes a first photographic object. Afterwards, the electronic device may determine a first scene corresponding to the first image, where the first scene is used to identify the scene corresponding to the first photographic object. Then, the electronic device may determine a first LUT according to the first scene. Finally, the electronic device may process the first image according to the first LUT to obtain a second image, and display the second image. The display effect of the second image corresponds to the first LUT.
With this solution, the electronic device can dynamically adjust the LUT according to each frame of image it acquires during photographing or video recording. In this way, the display effects or styles corresponding to different LUTs can be presented during photographing or video recording, which enriches the resulting display effects.
In a possible design of the first aspect, after the electronic device displays the second image, the electronic device may collect a third image. The third image is an image collected by the camera of the electronic device, and the third image includes a second photographic object. The electronic device may determine that the second image corresponds to a second scene, where the second scene is used to identify the scene corresponding to the second photographic object; the electronic device determines a second LUT according to the second scene; the electronic device processes the third image according to the second LUT to obtain a fourth image, and displays the fourth image, where the display effect of the fourth image corresponds to the second LUT.
That is to say, when the camera of the electronic device collects images of different photographic objects, through the method of this application, the electronic device can use different LUTs to process the images. In this way, display effects or styles corresponding to different LUTs can be presented, which enriches the display effects obtained by photographing or video recording.
In another possible design of the first aspect, the electronic device determining the first LUT according to the first scene may include: the electronic device determining, among a plurality of third LUTs, the third LUT corresponding to the first scene as the first LUT of the first image.
In this design, the electronic device can recognize the shooting scene corresponding to the first image (i.e., the first scene), and determine the first LUT according to the shooting scene. The plurality of third LUTs are pre-configured in the electronic device and are used to process images collected by the camera of the electronic device to obtain images with different display effects, and each third LUT corresponds to a display effect in one scene.
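The scene-to-LUT selection described in this design can be sketched as a simple lookup. This is a hedged illustration: the scene labels, LUT placeholders, and fallback default are all assumptions, not from the source.

```python
# Pre-configured third LUTs keyed by shooting scene. The scene
# labels and the placeholder values are illustrative assumptions.
THIRD_LUTS = {
    "portrait": "lut_portrait",
    "food": "lut_food",
    "architecture": "lut_architecture",
}

def select_first_lut(first_scene, default="lut_default"):
    """First-aspect design: take the third LUT corresponding to
    the recognized first scene as the first LUT of the first
    image; fall back to a default LUT for unrecognized scenes."""
    return THIRD_LUTS.get(first_scene, default)
```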
In another possible design of the first aspect, the electronic device determining the first LUT according to the first scene may include: the electronic device determining, among the plurality of third LUTs, the third LUT corresponding to the first scene as a fourth LUT of the first image; and the electronic device calculating a weighted sum of the fourth LUT of the first image and the first LUT of a fifth image to obtain the first LUT. The fifth image is the frame of image preceding the first image, and the third LUT of the frame preceding the first frame of image collected by the electronic device during this shooting session is a preset LUT. The plurality of third LUTs are pre-configured in the electronic device and are used to process images collected by the camera of the electronic device to obtain images with different display effects, and each third LUT corresponds to a display effect in one scene.
In this design, when determining the final LUT, the electronic device refers not only to the current frame of image but also to the final LUT of the previous frame of image. In this way, during the process of changing the LUT, a smooth transition between the display effects or styles corresponding to different LUTs can be achieved, the display effect of the multi-frame preview images presented by the electronic device can be optimized, and the user's visual experience during photographing or video recording can be improved.
In another possible design of the first aspect, the electronic device calculating the weighted sum of the fourth LUT of the first image and the first LUT of the fifth image to obtain the first LUT may include: the electronic device using a pre-configured first weighting coefficient and second weighting coefficient to calculate the weighted sum of the fourth LUT of the first image and the first LUT of the fifth image to obtain the first LUT. The first weighting coefficient is the weighting coefficient of the fourth LUT of the first image, the second weighting coefficient is the weighting coefficient of the first LUT of the fifth image, and the sum of the first weighting coefficient and the second weighting coefficient equals 1.
The smaller the first weighting coefficient and the larger the second weighting coefficient, the smoother the transition effect of the multiple frames of second images. In this design, the first weighting coefficient and the second weighting coefficient may be preset weights pre-configured in the electronic device.
In another possible design of the first aspect, the first weighting coefficient and the second weighting coefficient may be set by the user in the electronic device.
Specifically, before the electronic device uses the pre-configured first weighting coefficient and second weighting coefficient to calculate the weighted sum of the fourth LUT of the first image and the first LUT of the fifth image to obtain the first LUT, the electronic device may display a first setting item and a second setting item in response to a first preset operation. The first setting item is used to set the first weighting coefficient, and the second setting item is used to set the second weighting coefficient. Then, in response to the user's setting operation on the first setting item and/or the second setting item, the electronic device may use the first weighting coefficient set by the user as the weighting coefficient of the fourth LUT of the first image, and use the second weighting coefficient set by the user as the weighting coefficient of the first LUT of the fifth image.
The first preset operation is a click operation on a first preset control displayed by the electronic device, where the first preset control is used to trigger the electronic device to set the weights of the fourth LUT of the first image and the first LUT of the fifth image; alternatively, the first preset operation is the user's click operation on a first physical button of the electronic device.
In another possible design of the first aspect, a preset artificial intelligence (AI) model (such as the preset AI model b) is pre-configured in the electronic device. The preset AI model b has the capability of recognizing the first image and the scene detection result of the first image, and outputting the weight of each third LUT among the plurality of third LUTs. The electronic device can obtain the weight of each third LUT through the preset AI model b, and then calculate the weighted sum of the plurality of third LUTs according to the obtained weights to obtain the first LUT.
Specifically, the electronic device determining the first LUT according to the first scene may include: the electronic device taking the indication information of the first scene and the first image as input, and running the preset AI model to obtain a plurality of third weighting coefficients of the plurality of third LUTs; and the electronic device using the plurality of third weighting coefficients to calculate the weighted sum of the plurality of third LUTs to obtain the first LUT. The sum of the plurality of third weighting coefficients is 1, and the plurality of third LUTs correspond to the plurality of third weighting coefficients on a one-to-one basis.
In this design, for complex shooting scenes, when the electronic device determines the first LUT of the first image, it refers not only to the third LUT corresponding to the first scene of the first image, but also to the third LUTs corresponding to shooting scenes other than the first scene among the plurality of third LUTs. In this way, the display effect of the electronic device can be improved.
In another possible design of the first aspect, the electronic device determining the first LUT according to the first scene may include: the electronic device taking the indication information of the first scene and the first image as input, and running the preset AI model to obtain a plurality of third weighting coefficients of the plurality of third LUTs; the electronic device using the plurality of third weighting coefficients to calculate the weighted sum of the plurality of third LUTs to obtain the fourth LUT of the first image; and the electronic device calculating the weighted sum of the fourth LUT of the first image and the first LUT of the fifth image to obtain the first LUT. The fifth image is the frame of image preceding the first image, and the third LUT of the frame preceding the first frame of image collected by the electronic device during this shooting session is a preset LUT. The sum of the plurality of third weighting coefficients is 1, and the plurality of third LUTs correspond to the plurality of third weighting coefficients on a one-to-one basis.
In this design, when determining the final LUT, the electronic device refers not only to the current frame of image but also to the final LUT of the previous frame of image. In this way, during the process of changing the LUT, a smooth transition between the display effects or styles corresponding to different LUTs can be achieved, the display effect of the multi-frame preview images presented by the electronic device can be optimized, and the user's visual experience during photographing or video recording can be improved.
在第一方面的另一种可能的设计方式中,在电子设备通过预设AI模型得到每个第三LUT的权重之前,电子设备可以先训练该预设AI模型b,使预设AI模型b具备识别第一图像和第一图像的场景检测结果,输出多个第三LUT中每个第三LUT的权重的能力。
具体的,电子设备可以获取多组数据对,每组数据对包括第六图像和第七图像,第六图像是处理第七图像得到的满足预设条件的图像。然后,电子设备可以识别第七图像,确定第七图像对应的第三场景。最后,电子设备可以将第七图像和第六图像,以及识别第三场景的指示信息作为输入样本,训练预设AI模型,使得预设AI模型具备确定采用何种权重对多个第三LUT求加权和得到的LUT处理第七图像能够得到第六图像的显示效果的能力。
需要说明的是,与上述预设AI模型a不同的是,预设AI模型b的输入样本增加了第七图像对应的第三场景的指示信息。该预设AI模型b的训练原理与上述预设AI模型的训练原理相同。不同的是,第七图像对应的第三场景的指示信息可以更加明确的指示第七图像对应的拍摄场景。
应理解,如果识别到第七图像的拍摄场景为第三场景,则表示该第七图像是第三场景的图像的可能性较高。那么,将第三场景对应的第三LUT的加权系数设置为较大值,有利于提升显示效果。由此可见,该第三场景的指示信息可以对预设AI模型b的训练起到引导的作用,引导预设AI模型b向倾向于该第三场景的方向训练。这样,可以加速预设AI模型b的收敛,减少预设AI模型b的训练次数。
第二方面,本申请提供一种图像处理方法,该方法中,电子设备可获取第一图像,该第一图像为电子设备的摄像头采集的图像,第一图像包括第一拍摄对象。之后,电子设备可以将第一图像作为输入,运行预设AI模型(如预设AI模型a),得到多个第三LUT的多个第三加权系数。多个第三加权系数之和为1,多个第三LUT与多个第三加权系数一一对应。电子设备采用多个第三加权系数,计算多个第三LUT的加权和,得到第一LUT。电子设备根据第一LUT对第一图像进行处理得到第二图像,并显示第二图像,第二图像的显示效果与第一LUT对应。
采用本方案,电子设备在拍照或录像过程中,可以根据电子设备获取的每一帧图像动态调整LUT。这样,在拍照或录像过程中,便可以呈现出不同LUT对应的显示效果或风格,可以丰富拍照或录像得到的显示效果。
并且,电子设备确定第一图像的第一LUT,不仅参考了第一图像的第一场景对应的一个第三LUT,还参考了多个第三LUT中除第一场景之外的其他拍摄场景对应的第三LUT。这样,可以提升电子设备的显示效果。
在第二方面的一种可能的设计方式中,电子设备采用多个第三加权系数,计算多个第三LUT的加权和,得到第一LUT,包括:电子设备采用多个第三加权系数,计算多个第三LUT的加权和,得到第一图像的第四LUT;电子设备计算第一图像的第四LUT和第五图像的第一LUT的加权和,得到第一LUT;其中,第五图像是第一图像的前一帧图像,电子设备在本次拍摄过程中采集的第1帧图像的前一帧图像的第三LUT是预设LUT。
在该设计方式中,电子设备在确定最终LUT时,不仅参考了当前一帧图像,还参考了上一帧图像的最终LUT。这样,可以在改变LUT的过程中,实现不同LUT对应的显示效果或风格的平滑过渡,可以优化电子设备呈现的多帧预览图像的显示效果,提升用户拍照或录像过程中的视觉体验。
在第二方面的另一种可能的设计方式中,在电子设备将第一图像作为输入,运行预设AI模型,得到多个第三LUT的多个第三加权系数之前,电子设备可以训练预设AI模型a。其中,电子设备训练预设AI模型a的方法包括:电子设备获取多组数据对,每组数据对包括第六图像和第七图像,第六图像是处理第七图像得到的满足预设条件的图像;电子设备将第七图像和第六图像作为输入样本,训练预设AI模型,使得预设AI模型具备确定采用何种权重对多个第三LUT求加权和得到的LUT处理第七图像能够得到第六图像的显示效果的能力。
在第一方面或第二方面的另一种可能的设计方式中,用户可以调整上述预设AI模型a或预设AI模型b输出的权重。本申请的方法还可以包括:电子设备响应于用户的第二预设操作,显示多个第三设置项;其中,每个第三设置项对应一个第三LUT,用于设置第三LUT的第三加权系数;电子设备响应于用户对多个第三设置项中一个或多个第三设置项的设置操作,更新对应的第三加权系数。其中,电子设备采用更新后的多个第三加权系数计算多个第三LUT的加权和。
上述第二预设操作是用户对第二预设控件的点击操作,第二预设控件用于触发电子设备设置多个第三LUT的权重;或者,第二预设操作是用户对电子设备中第二物理按键的点击操作。
在该设计方式中,可以由用户调整预设AI模型a或预设AI模型b输出的权重。这样,电子设备可以按照用户的需求调整LUT,如此便可以拍摄到与用户满意度更高的图像。
在第一方面或第二方面的另一种可能的设计方式中,还可以由用户在电子设备中新增LUT。本申请的方法还包括:电子设备响应于用户的第三预设操作,显示一个或多个第四设置项;其中,第三预设操作用于触发电子设备新增显示效果,每个第四设置项对应一种第五LUT,每种第五LUT对应一种拍摄场景下的显示效果,第五LUT与第三LUT不同;响应于用户对预览界面中任一个第四设置项的选择操作,电子设备保存用户选择的第四设置项对应的第五LUT。
在第一方面或第二方面的另一种可能的设计方式中,上述第四设置项包括采用对应第五LUT处理后的预览图像,用于呈现第五LUT对应的显示效果。如此,用户便可以按照电子设备呈现出来的调整后的显示效果,确认是否得到满意的LUT。这样,可以提升用户设置新增LUT的效率。
在第一方面或第二方面的一种可能的设计方式中,电子设备获取第一图像,可以包括:电子设备在电子设备拍照的预览界面、电子设备录像前的预览界面或者电子设备正在录像的取景界面,采集第一图像。也就是说,该方法可以应用于电子设备的拍照场景、正在录像场景和录像模式下录像前的场景。
在第一方面或第二方面的一种可能的设计方式中,第一图像可以是电子设备的摄像头采集的图像。或者,第一图像可以是由电子设备的摄像头采集的图像得到的预览图像。
第三方面,本申请提供一种电子设备,该电子设备包括存储器、显示屏、一个或多个摄像头和一个或多个处理器。该存储器、显示屏、摄像头与处理器耦合。其中,摄像头用于采集图像,显示屏用于显示摄像头采集的图像或者处理器生成的图像,存储器中存储有计算机程序代码,计算机程序代码包括计算机指令,当计算机指令被处理器执行时,使得电子设备执行如第一方面或第二方面及其任一种可能的设计方式所述的方法。
第四方面,本申请提供一种电子设备,该电子设备包括存储器、显示屏、一个或多个摄像头和一个或多个处理器。存储器、显示屏、摄像头与处理器耦合。其中,存储器中存储有计算机程序代码,该计算机程序代码包括计算机指令,当该计算机指令被处理器执行时,使得电子设备执行如下步骤:获取第一图像,第一图像为电子设备的摄像头采集的图像,第一图像包括第一拍摄对象;确定第一图像对应的第一场景,其中,第一场景用于标识第一拍摄对象对应的场景;根据第一场景确定第一颜色查找表LUT;根据第一LUT对第一图像进行处理得到第二图像,并显示第二图像,第二图像的显示效果与第一LUT对应。
在第四方面的一种可能的设计方式中,当计算机指令被处理器执行时,使得电子设备还执行如下步骤:在显示第二图像之后,采集第三图像,第三图像为电子设备的摄像头采集的图像,第三图像包括第二拍摄对象;确定第三图像对应的第二场景,其中,第二场景用于标识第二拍摄对象对应的场景;根据第二场景确定第二LUT;根据第二LUT对第三图像进行处理得到第四图像,并显示第四图像,第四图像的显示效果与第二LUT对应。
在第四方面的另一种可能的设计方式中,当该计算机指令被处理器执行时,使得电子设备还执行如下步骤:在电子设备拍照的预览界面、电子设备录像前的预览界面或者电子设备正在录像的取景界面,采集第一图像。
在第四方面的另一种可能的设计方式中,第一图像是电子设备的摄像头采集的图像;或者,第一图像是由电子设备的摄像头采集的图像得到的预览图像。
在第四方面的另一种可能的设计方式中,当该计算机指令被处理器执行时,使得电子设备还执行如下步骤:将多个第三LUT中第一场景对应的第三LUT,确定为第一图像的第一LUT。其中,多个第三LUT预先配置在电子设备中,用于对电子设备的摄像头采集的图像进行处理得到不同显示效果的图像,每个第三LUT对应一种场景下的显示效果。
在第四方面的另一种可能的设计方式中,当该计算机指令被处理器执行时,使得电子设备还执行如下步骤:将多个第三LUT中第一场景对应的第三LUT,确定为第一图像的第四LUT;其中,多个第三LUT预先配置在电子设备中,用于对电子设备的摄像头采集的图像进行处理得到不同显示效果的图像,每个第三LUT对应一种场景下的显示效果;计算第一图像的第四LUT和第五图像的第一LUT的加权和,得到第一LUT;其中,第五图像是第一图像的前一帧图像,电子设备在本次拍摄过程中采集的第1帧图像的前一帧图像的第三LUT是预设LUT。
在第四方面的另一种可能的设计方式中,当该计算机指令被处理器执行时,使得电子设备还执行如下步骤:采用预先配置的第一加权系数和第二加权系数,计算第一图像的第四LUT和第五图像的第一LUT的加权和,得到第一LUT。其中,第一加权系数是第一图像的第四LUT的加权系数,第二加权系数是第五图像的第一LUT的加权系数,第一加权系数和第二加权系数之和等于1。其中,第一加权系数越小,第二加权系数越大,多帧第二图像的过渡效果越平滑。
在第四方面的另一种可能的设计方式中,当该计算机指令被处理器执行时,使得电子设备还执行如下步骤:在采用预先配置的第一加权系数和第二加权系数,计算第一图像的第四LUT和第五图像的第一LUT的加权和,得到第一LUT之前,响应于第一预设操作,显示第一设置项和第二设置项,第一设置项用于设置第一加权系数,第二设置项用于设置第二加权系数;响应于用户对第一设置项和/或第二设置项的设置操作,将用户设置的第一加权系数作为第一图像的第四LUT的加权系数,将用户设置的第二加权系数作为第五图像的第一LUT的加权系数。其中,第一预设操作是对电子设备显示的第一预设控件的点击操作,第一预设控件用于触发电子设备设置第一图像的第四LUT和第五图像的第一LUT的权重;或者,第一预设操作是用户对电子设备的第一物理按键的点击操作。
在第四方面的另一种可能的设计方式中,当该计算机指令被处理器执行时,使得电子设备还执行如下步骤:将第一场景的指示信息和第一图像作为输入,运行预设AI模型,得到多个第三LUT的多个第三加权系数;其中,多个第三加权系数之和为1,多个第三LUT与多个第三加权系数一一对应;采用多个第三加权系数,计算多个第三LUT的加权和,得到第一LUT。
在第四方面的另一种可能的设计方式中,当该计算机指令被处理器执行时,使得电子设备还执行如下步骤:将第一场景的指示信息和第一图像作为输入,运行预设AI模型,得到多个第三LUT的多个第三加权系数;其中,多个第三加权系数之和为1,多个第三LUT与多个第三加权系数一一对应;采用多个第三加权系数,计算多个第三LUT的加权和,得到第一图像的第四LUT;计算第一图像的第四LUT和第五图像的第一LUT的加权和,得到第一LUT;其中,第五图像是第一图像的前一帧图像,电子设备在本次拍摄过程中采集的第1帧图像的前一帧图像的第三LUT是预设LUT
在第四方面的另一种可能的设计方式中,当该计算机指令被处理器执行时,使得电子设备还执行如下步骤:在根据第一场景确定第一LUT之前,获取多组数据对,每组数据对包括第六图像和第七图像,第六图像是处理第七图像得到的满足预设条件的图像;识别第七图像,确定第七图像对应的第三场景;将第七图像和第六图像,以及识别第三场景的指示信息作为输入样本,训练预设AI模型,使得预设AI模型具备确定采用何种权重对多个第三LUT求加权和得到的LUT处理第七图像能够得到第六图像的显示效果的能力。
在第四方面的另一种可能的设计方式中,当该计算机指令被处理器执行时,使得电子设备还执行如下步骤:响应于第二预设操作,显示多个第三设置项;其中,每个第三设置项对应一个第三LUT,用于设置第三LUT的第三加权系数;响应于用户对多个第三设置项中一个或多个第三设置项的设置操作,更新对应的第三加权系数;其中,电子设备采用更新后的多个第三加权系数计算多个第三LUT的加权和。
其中,第二预设操作是用户对第二预设控件的点击操作,第二预设控件用于触发电子设备设置多个第三LUT的权重;或者,第二预设操作是用户对电子设备中第二物理按键的点击操作。
在第四方面的另一种可能的设计方式中,当该计算机指令被处理器执行时,使得电子设备还执行如下步骤:响应于第三预设操作,显示一个或多个第四设置项;其中,第三预设操作用于触发电子设备新增显示效果,每个第四设置项对应一种第五LUT,每种第五LUT对应一种拍摄场景下的显示效果,第五LUT与第三LUT不同;响应于用户对任一个第四设置项的选择操作,保存用户选择的第四设置项对应的第五LUT。
在第四方面的另一种可能的设计方式中,上述第四设置项包括采用对应第五LUT处理后的预览图像,用于呈现第五LUT对应的显示效果。
第五方面,本申请提供一种电子设备,该电子设备包括存储器、显示屏、一个或多个摄像头和一个或多个处理器。存储器、显示屏、摄像头与处理器耦合。其中,存储器中存储有计算机程序代码,该计算机程序代码包括计算机指令,当该计算机指令被处理器执行时,使得电子设备执行如下步骤:获取第一图像,第一图像为电子设备的摄像头采集的图像,第一图像包括第一拍摄对象;将第一图像作为输入,运行预设人工智能AI模型,得到多个第三颜色查找表LUT的多个第三加权系数;其中,多个第三加权系数之和为1,多个第三LUT与多个第三加权系数一一对应;采用多个第三加权系数,计算多个第三LUT的加权和,得到第一LUT;根据第一LUT对第一图像进行处理得到第二图像,并显示第二图像,第二图像的显示效果与第一LUT对应。
在第五方面的另一种可能的设计方式中,当该计算机指令被处理器执行时,使得 电子设备还执行如下步骤:采用多个第三加权系数,计算多个第三LUT的加权和,得到第一图像的第四LUT;计算第一图像的第四LUT和第五图像的第一LUT的加权和,得到第一LUT;其中,第五图像是第一图像的前一帧图像,电子设备在本次拍摄过程中采集的第1帧图像的前一帧图像的第三LUT是预设LUT。
在第五方面的另一种可能的设计方式中,当该计算机指令被处理器执行时,使得电子设备还执行如下步骤:在将第一图像作为输入,运行预设AI模型,得到多个第三LUT的多个第三加权系数之前,获取多组数据对,每组数据对包括第六图像和第七图像,第六图像是处理第七图像得到的满足预设条件的图像;将第七图像和第六图像作为输入样本,训练预设AI模型,使得预设AI模型具备确定采用何种权重对多个第三LUT求加权和得到的LUT处理第七图像能够得到第六图像的显示效果的能力。
第六方面,本申请提供一种计算机可读存储介质,该计算机可读存储介质包括计算机指令,当计算机指令在电子设备上运行时,使得电子设备执行如第一方面或第二方面及其任一种可能的设计方式所述的方法。
第七方面,本申请提供一种计算机程序产品,当该计算机程序产品在计算机上运行时,使得该计算机执行如第一方面或第二方面及任一种可能的设计方式所述的方法。该计算机可以是上述电子设备。
可以理解地,上述提供的第二方面所述的方法,第三方面至第五方面及其任一种可能的设计方式所述的电子设备,第六方面所述的计算机可读存储介质,以及第七方面所述的计算机程序产品所能达到的有益效果,可参考第一方面及其任一种可能的设计方式中的有益效果,此处不再赘述。
附图说明
图1为多种LUT对应的显示效果或风格的示意图;
图2为一种手机的拍照的取景界面示意图;
图3为一种手机的录像的取景界面示意图;
图4为本申请实施例提供的一种电子设备的硬件结构示意图;
图5为本申请实施例提供的一种图像处理方法的流程图;
图6为本申请实施例提供的一种手机的拍照的取景界面示意图;
图7A为本申请实施例提供的另一种图像处理方法的流程图;
图7B为本申请实施例提供的一种确定第T帧图像的最终LUT(即第一LUT)的原理示意图;
图7C为本申请实施例提供的另一种图像处理方法的流程图;
图7D为本申请实施例提供的一种确定第T帧图像的最终LUT(即第一LUT)的原理示意图;
图7E为本申请实施例提供的另一种手机的拍照的取景界面示意图;
图7F为本申请实施例提供的另一种手机的拍照的取景界面示意图;
图8为本申请实施例提供的一种手机的录像的取景界面示意图;
图9为本申请实施例提供的另一种手机的录像的取景界面示意图;
图10为本申请实施例提供的另一种手机的录像的取景界面示意图;
图11A为本申请实施例提供的另一种图像处理方法的流程图;
图11B为本申请实施例提供的另一种确定第T帧图像的最终LUT(即第一LUT)的原理示意图;
图11C为本申请实施例提供的另一种确定第T帧图像的最终LUT(即第一LUT)的原理示意图;
图12A为本申请实施例提供的另一种图像处理方法的流程图;
图12B为本申请实施例提供的另一种确定第T帧图像的最终LUT(即第一LUT)的原理示意图;
图12C为本申请实施例提供的另一种确定第T帧图像的最终LUT(即第一LUT)的原理示意图;
图13为本申请实施例提供的另一种图像处理方法的流程图;
图14A为本申请实施例提供的另一种手机的录像的取景界面示意图;
图14B为本申请实施例提供的另一种手机的录像的取景界面示意图;
图15A为本申请实施例提供的另一种确定第T帧图像的最终LUT(即第一LUT)的原理示意图;
图15B为本申请实施例提供的另一种确定第T帧图像的最终LUT(即第一LUT)的原理示意图;
图16A为本申请实施例提供的另一种手机的录像的取景界面示意图;
图16B为本申请实施例提供的另一种手机的录像的取景界面示意图;
图17A为本申请实施例提供的另一种确定第T帧图像的最终LUT(即第四LUT)的原理示意图;
图17B为本申请实施例提供的另一种确定第T帧图像的最终LUT(即第四LUT)的原理示意图;
图18A为本申请实施例提供的另一种手机的录像的取景界面示意图;
图18B为本申请实施例提供的另一种手机的录像的取景界面示意图;
图18C为本申请实施例提供的另一种手机的录像的取景界面示意图;
图19为本申请实施例提供的一种芯片系统的结构示意图。
具体实施方式
以下,术语“第一”、“第二”仅用于描述目的,而不能理解为指示或暗示相对重要性或者隐含指明所指示的技术特征的数量。由此,限定有“第一”、“第二”的特征可以明示或者隐含地包括一个或者更多个该特征。在本实施例的描述中,除非另有说明,“多个”的含义是两个或两个以上。
为了便于理解,本申请实施例这里介绍本申请实施例涉及的术语:
(1)红绿蓝(Red Green Blue,RGB):三原色RGB包括红(Red)、绿(Green)、蓝(Blue)。将这三种颜色的光按照不同比例混合,就可以得到丰富多彩的色彩。
摄像头采集的图像是由一个个像素构成的,每个像素都是由红色子像素、绿色子像素和蓝色子像素构成的。假设R、G、B三者的取值范围为0-255,如RGB(255,0,0)表示纯红色,RGB(0,255,0)表示纯绿色,RGB(0,0,255)表示纯蓝色。这三种颜色的光按照不同比例混合,就可以得到丰富多彩的色彩。
(2)颜色查找表(Look Up Table,LUT):也可以称为LUT文件或者LUT参数,是一种RGB的映射表。
一张图像包括很多像素,每个像素由RGB值表示。电子设备的显示屏可以根据该图像中每个像素点的RGB值来显示该图像。也就是说,这些RGB值会告诉显示屏如何发光,以混合出各种各样的色彩呈现给用户。如果想要改变该图像的色彩(或者风格、效果),则可以调整这些RGB值即可。
LUT是一种RGB的映射表,用于表征调整前后的RGB值的对应关系。例如,请参考表1,其示出一种LUT的示例。
表1
原始RGB值          输出RGB值
(14,22,24)         (6,9,4)
(61,34,67)         (66,17,47)
(94,14,171)        (117,82,187)
(241,216,222)      (255,247,243)
当原始RGB值为(14,22,24)时,经过表1所示的LUT的映射,输出RGB值为(6,9,4)。当原始RGB值为(61,34,67)时,经过表1所示的LUT的映射,输出RGB值为(66,17,47)。当原始RGB值为(94,14,171)时,经过表1所示的LUT的映射,输出RGB值为(117,82,187)。当原始RGB值为(241,216,222)时,经过表1所示的LUT的映射,输出RGB值为(255,247,243)。
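为便于理解表1所示的映射关系,下面给出一个极简的示意性代码片段(假设性示例,仅用于说明LUT查表的原理,并非本申请的实际实现):

```python
# 示意性示例:用字典模拟表1所示的RGB映射关系(假设性实现)
lut_table = {
    (14, 22, 24): (6, 9, 4),
    (61, 34, 67): (66, 17, 47),
    (94, 14, 171): (117, 82, 187),
    (241, 216, 222): (255, 247, 243),
}

def apply_lut(pixel, lut):
    """按LUT映射单个像素的RGB值;未命中的表项原样返回(实际LUT会对全色域插值)。"""
    return lut.get(pixel, pixel)

print(apply_lut((14, 22, 24), lut_table))  # 输出 (6, 9, 4)
```

需要说明的是,实际的3D LUT通常只存储稀疏网格节点上的映射值,并通过三线性插值得到任意输入RGB的输出值,此处的字典查表仅是原理示意。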
需要说明的是,针对同一张图像,未采用LUT处理过的图像的显示效果与采用LUT处理过的图像的显示效果不同;采用不同的LUT处理同一张图像,可以得到不同风格的显示效果。本申请实施例中所述的图像的“显示效果”是指图像被显示屏显示后,可以被人眼观察到的图像效果。
例如,图1所示的LUT 1、LUT 2和LUT 3是不同的LUT。采用LUT 1处理摄像头采集的原图100,可得到图1所示的图像101。采用LUT 2处理原图100,可得到图1所示的图像102。采用LUT 3处理原图100,可得到图1所示的图像103。对比图1所示的图像101、图像102和图像103可知:图像101、图像102和图像103的显示效果不同。
常规技术中,手机拍摄(如拍照和录像)时,只能采用拍摄前预先配置的LUT、用户选择的LUT或者识别预览图像确定的LUT来处理预览图像。
示例性的,在拍照场景下,手机响应于用户对相机应用的图标的点击操作,可以显示图2中的(a)所示的拍照的取景界面201。该拍照的取景界面201可以包括摄像头采集的预览图像202和AI拍摄开关203。该预览图像202是未经过LUT处理的图像。AI拍摄开关203用于触发手机识别预览图像202对应的拍摄场景。手机可接收用户对AI拍摄开关 203的点击操作。响应于用户对AI拍摄开关203的点击操作,手机可以识别预览图像202对应的拍摄场景(如人物场景)。
其中,手机中可以保存多个预置LUT,每个预置LUT对应一种拍摄场景。例如,手机中可以保存人物场景对应的预置LUT、美食场景对应的预置LUT、植物场景对应的预置LUT、动物场景对应的预置LUT,以及大海场景对应的预置LUT等。应注意,采用每个拍摄场景对应的LUT处理该拍摄场景的图像,可以提升该拍摄场景下的显示效果。
然后,手机可以采用识别到的拍摄场景对应的预置LUT处理该预览图像202。例如,手机采用上述人物场景对应的预置LUT处理该预览图像202,可以得到图2中的(b)所示的预览图像205。具体的,响应于用户对AI拍摄开关203的点击操作,手机可以显示图2中的(b)所示的拍照的取景界面204,该拍照的取景界面204包括预览图像205。
示例性的,在录像场景下,手机可显示图3中的(a)所示的录像的取景界面301。该录像的取景界面301可以包括摄像头采集的预览图像303和拍摄风格选项302。该预览图像303是未经过LUT处理的图像。
然后,手机可接收用户对拍摄风格选项302的点击操作。响应于用户对拍摄风格选项302的点击操作,手机可以显示图3中的(b)所示的风格选择界面304,该风格选择界面304用于提示用户选择录像的拍摄风格/效果。例如,风格选择界面304可以包括提示信息“请选择您需要的拍摄风格/效果”。该风格选择界面304还可以包括多个风格的选项,如原图选项、**风格的选项、##风格的选项和&&风格的选项。每个风格的选项对应一种预置LUT,用于触发手机采用对应的预置LUT处理录像的预览图像。
举例来说,上述多个风格(如**风格、##风格和&&风格等)可以包括:自然风格、灰调风格、油画风格、黑白风格、旅行风格、美食风格、风景风格、人物风格、宠物风格或者静物风格等。
例如,以用户选择图3中的(b)所示的##风格的选项为例。手机响应于用户对图3中的(b)所示的##风格的选项的选择操作,可以采用##风格对应的预置LUT处理录像的预览图像306,如手机可显示图3中的(c)所示的录像的取景界面305。该录像的取景界面305可以包括预览图像306。
应注意,图3中的(b)所示的原图选项对应未采用LUT处理过的图像,**风格的选项对应采用**风格的LUT处理过的图像,##风格的选项对应采用##风格的LUT处理过的图像,&&风格的选项对应采用&&风格的LUT处理过的图像。图3中的(b)所示的四张图像的显示效果不同。
综上所述,采用常规技术的方案,只能采用拍摄前预先配置的LUT、用户选择的LUT或者识别预览图像确定的LUT来处理预览图像。如此,手机只能拍摄得到上述预先配置的LUT、用户选择的LUT或者识别预览图像确定的LUT对应的风格或显示效果的照片或视频。手机拍摄的照片或视频的风格或显示效果单一,无法满足当下用户多样化的拍摄需求。
本申请实施例提供一种图像处理方法,可以应用于包括摄像头的电子设备。该电子设备可以确定摄像头采集的一帧第一图像对应的场景(即第一场景)。然后,电子设备可以确定该第一场景对应的第一LUT。最后,电子设备可以采用这一帧图像的第一LUT,对该第一图像进行图像处理得到第二图像,并显示该第二图像。其中,第二图像的显示效果与第一LUT对应的显示效果相同。
采用本方案,电子设备在拍照或录像过程中,可以根据电子设备所获取的每一帧图像动态调整LUT。这样,在拍照或录像过程中,便可以呈现出不同LUT对应的显示效果或风格,可以丰富拍照或录像得到的显示效果。
示例性的,本申请实施例中的电子设备可以为便携式计算机(如手机)、平板电脑、笔记本电脑、个人计算机(personal computer,PC)、可穿戴电子设备(如智能手表)、增强现实(augmented reality,AR)/虚拟现实(virtual reality,VR)设备、车载电脑等,以下实施例对该电子设备的具体形式不做特殊限制。
以上述电子设备是手机为例。请参考图4,其示出本申请实施例提供的一种电子设备100的结构示意图。该电子设备100可以包括处理器110,外部存储器接口120,内部存储器121,通用串行总线(universal serial bus,USB)接口130,充电管理模块140,电源管理模块141,电池142,天线1,天线2,移动通信模块150,无线通信模块160,音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,传感器模块180,按键190,马达191,指示器192,摄像头193,显示屏194,以及用户标识模块(subscriber identification module,SIM)卡接口195等。
其中,上述传感器模块180可以包括压力传感器,陀螺仪传感器,气压传感器,磁传感器,加速度传感器,距离传感器,接近光传感器,指纹传感器180A,温度传感器,触摸传感器180B,环境光传感器,骨传导传感器等。
可以理解的是,本发明实施例示意的结构并不构成对电子设备100的具体限定。在本申请另一些实施例中,电子设备100可以包括比图示更多或更少的部件,或者组合某些部件,或者拆分某些部件,或者不同的部件布置。图示的部件可以以硬件,软件或软件和硬件的组合实现。
处理器110可以包括一个或多个处理单元,例如:处理器110可以包括应用处理器(application processor,AP),调制解调处理器,图形处理器(graphics processing unit,GPU),图像信号处理器(image signal processor,ISP),控制器,存储器,视频编解码器,数字信号处理器(digital signal processor,DSP),基带处理器,神经网络处理器(neural-network processing unit,NPU),和/或微控制单元(micro controller unit,MCU)等。其中,不同的处理单元可以是独立的器件,也可以集成在一个或多个处理器中。
其中,控制器可以是电子设备100的神经中枢和指挥中心。控制器可以根据指令操作码和时序信号,产生操作控制信号,完成取指令和执行指令的控制。
处理器110中还可以设置存储器,用于存储指令和数据。在一些实施例中,处理器110中的存储器为高速缓冲存储器。该存储器可以保存处理器110刚用过或循环使用的指令或数据。如果处理器110需要再次使用该指令或数据,可从所述存储器中直接调用。避免了重复存取,减少了处理器110的等待时间,因而提高了系统的效率。
在一些实施例中,处理器110可以包括一个或多个接口。接口可以包括集成电路(inter-integrated circuit,I2C)接口,串行外设接口(serial peripheral interface,SPI),集成电路内置音频(inter-integrated circuit sound,I2S)接口,脉冲编码调制(pulse code modulation,PCM)接口,通用异步收发传输器(universal asynchronous receiver/transmitter,UART)接口,移动产业处理器接口(mobile industry processor interface,MIPI),通用输入输出(general-purpose input/output,GPIO)接口,用户标识模块(subscriber identity module,SIM) 接口,和/或通用串行总线(universal serial bus,USB)接口等。
可以理解的是,本发明实施例示意的各模块间的接口连接关系,只是示意性说明,并不构成对电子设备100的结构限定。在本申请另一些实施例中,电子设备100也可以采用上述实施例中不同的接口连接方式,或多种接口连接方式的组合。
充电管理模块140用于从充电器接收充电输入。电源管理模块141用于连接电池142,充电管理模块140与处理器110。电源管理模块141接收电池142和/或充电管理模块140的输入,为处理器110,内部存储器121,外部存储器,显示屏194,摄像头193,和无线通信模块160等供电。在另一些实施例中,电源管理模块141和充电管理模块140也可以设置于同一个器件中。
电子设备100的无线通信功能可以通过天线1,天线2,移动通信模块150,无线通信模块160,调制解调处理器以及基带处理器等实现。天线1和天线2用于发射和接收电磁波信号。电子设备100中的每个天线可用于覆盖单个或多个通信频带。不同的天线还可以复用,以提高天线的利用率。例如:可以将天线1复用为无线局域网的分集天线。在另外一些实施例中,天线可以和调谐开关结合使用。
移动通信模块150可以提供应用在电子设备100上的包括2G/3G/4G/5G等无线通信的解决方案。无线通信模块160可以提供应用在电子设备100上的包括无线局域网(wireless local area networks,WLAN)(如Wi-Fi网络),蓝牙(blue tooth,BT),全球导航卫星系统(global navigation satellite system,GNSS),调频(frequency modulation,FM),NFC,红外技术(infrared,IR)等无线通信的解决方案。
电子设备100通过GPU,显示屏194,以及应用处理器等实现显示功能。GPU为图像处理的微处理器,连接显示屏194和应用处理器。GPU用于执行数学和几何计算,用于图形渲染。处理器110可包括一个或多个GPU,其执行程序指令以生成或改变显示信息。
显示屏194用于显示图像,视频等。该显示屏是触摸屏。在一些实施例中,电子设备100可以包括1个或N个显示屏194,N为大于1的正整数。
电子设备100可以通过ISP,摄像头193,视频编解码器,GPU,显示屏194以及应用处理器等实现拍摄功能。ISP用于处理摄像头193反馈的数据。摄像头193用于捕获静态图像或视频。在一些实施例中,电子设备100可以包括1个或N个摄像头193,N为大于1的正整数。
NPU为神经网络(neural-network,NN)计算处理器,通过借鉴生物神经网络结构,例如借鉴人脑神经元之间传递模式,对输入信息快速处理,还可以不断的自学习。通过NPU可以实现电子设备100的智能认知等应用,例如:贴膜状态识别,图像修复、图像识别,人脸识别,语音识别,文本理解等。
外部存储器接口120可以用于连接外部存储卡,例如Micro SD卡,实现扩展电子设备100的存储能力。外部存储卡通过外部存储器接口120与处理器110通信,实现数据存储功能。例如将音乐,视频等文件保存在外部存储卡中。
内部存储器121可以用于存储计算机可执行程序代码,所述可执行程序代码包括指令。处理器110通过运行存储在内部存储器121的指令,从而执行电子设备100的各种功能应用以及数据处理。内部存储器121可以包括存储程序区和存储数据区。其中,存储程序区可存储操作系统,至少一个功能所需的应用程序(比如声音播放功能,图像播放功能等)等。 存储数据区可存储电子设备100使用过程中所创建的数据(比如音频数据,电话本等)等。此外,内部存储器121可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件,闪存器件,通用闪存存储器(universal flash storage,UFS)等。
电子设备100可以通过音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,以及应用处理器等实现音频功能。例如音乐播放,录音等。
指纹传感器180A用于采集指纹信息。电子设备100可以利用采集的指纹信息的指纹特性进行用户身份校验(即指纹识别),以实现指纹解锁,访问应用锁,指纹拍照,指纹接听来电等。
触摸传感器180B,也称“触控面板(TP)”。触摸传感器180B可以设置于显示屏194,由触摸传感器180B与显示屏194组成触摸屏,也称“触控屏”。触摸传感器180B用于检测作用于其上或附近的触摸操作。触摸传感器可以将检测到的触摸操作传递给应用处理器,以确定触摸事件类型。可以通过显示屏194提供与触摸操作相关的视觉输出。在另一些实施例中,触摸传感器180B也可以设置于电子设备100的表面,与显示屏194所处的位置不同。
按键190包括开机键,音量键等。马达191可以产生振动提示。指示器192可以是指示灯,可以用于指示充电状态,电量变化,也可以用于指示消息,未接来电,通知等。SIM卡接口195用于连接SIM卡。
本申请实施例提供一种图像处理方法,该方法可以应用于包括摄像头和显示屏(如触摸屏)的电子设备。以上述电子设备是手机为例,如图5所示,该图像处理方法可以包括S501-S504。
S501、手机获取第一图像。该第一图像是手机的摄像头采集的图像,该第一图像包括第一拍摄对象。
在本申请实施例的应用场景(1)中,手机可以在手机拍照的预览界面采集第一图像。例如,手机可以显示图6中的(a)所示的预览界面601。该预览界面601包括手机的摄像头采集的第一图像602。该第一图像602是未采用LUT处理的图像。
在本申请实施例的应用场景(2)中,手机可以在手机录像前的预览界面采集第一图像。例如,手机可以显示图8中的(a)所示的预览界面801。该预览界面801包括手机的摄像头采集的第一图像802。该第一图像802是未采用LUT处理的图像。
在本申请实施例的应用场景(3)中,手机可以在手机正在录像的取景界面(也称为预览界面)采集第一图像。例如,图10中的(a)所示的录像的取景界面1001为还未开始录像的取景界面,取景界面1001包括预览图像1002。响应于用户在图10中的(a)所示取景界面1001的录像操作,手机可以显示图10中的(b)所示的预览界面1003。该预览界面1003包括手机的摄像头采集的第一图像1004。该第一图像1004是未采用LUT处理的图像。
需要说明的是,上述第一图像可以是手机的摄像头采集的图像。例如,该第一图像可以是手机的摄像头采集到的原始图像,该第一图像未经过ISP的图像处理。或者,第一图像可以是由手机的摄像头采集的图像得到的预览图像。例如,该第一图像可以是对手机的摄像头采集的原始图像,进行图像处理后的预览图像。
S502、手机确定第一图像对应的第一场景。其中,第一场景用于标识第一拍摄对象对 应的场景。
S503、手机根据第一场景确定第一LUT。
在本申请实施例中,手机中可以预先配置多个第三LUT。该多个第三LUT也可以称为多个预置LUT。该多个第三LUT用于对摄像头采集的预览图像进行处理得到不同显示效果的图像,每个第三LUT对应一种拍摄场景下的显示效果。例如,如图1所示,图像101是采用LUT 1(即第三LUT 1,也称为预置LUT 1)处理原图100得到的,图像102是采用LUT 2(即第三LUT 2,也称为预置LUT 2)处理原图100得到的,图像103是采用LUT 3(即第三LUT 3,也称为预置LUT 3)处理原图100得到的。对比可知:图像101、图像102和图像103呈现出不同的显示效果。也就是说,预置LUT 1、预置LUT 2和预置LUT 3可以对应不同的显示效果或风格。
本申请实施例中,不同的显示效果可以是不同拍摄场景下的显示效果。例如,该拍摄场景可以为:人物场景、旅行场景、美食场景、风景场景、宠物场景或者静物场景等。应注意,本申请实施例中所述的拍摄场景与显示效果或风格一一对应。在不同的拍摄场景下,可以采用对应的LUT处理预览图像得到相应的显示效果或风格。因此,手机可以识别第一图像,确定第一图像对应的拍摄场景(即第一场景)。然后,手机可以根据第一场景确定第一LUT。
由上述描述可知,该拍摄场景可以为人物场景、旅行场景、美食场景、风景场景、宠物场景或者静物场景等。不同的拍摄场景下采集的图像中的拍摄对象不同。例如,人物场景中采集的图像可以包括人物的图像,美食场景中采集的图像可以包括美食的图像。因此,本申请实施例中,手机可以识别第一图像中包括的拍摄对象,来确定该第一图像对应的拍摄场景。
其中,手机可以采用预先配置的图像拍摄场景检测算法,识别第一图像,以识别出该第一图像对应的拍摄场景(即第一场景)。例如,以第一图像是图6中的(a)所示的第一图像602为例。手机识别第一图像602,可以识别出该第一图像602对应的拍摄场景(即第一场景)为人物场景。如此,手机则可以将人物场景对应的第三LUT确定为第一LUT。
需要说明的是,手机识别第一图像对应的第一场景的方法,可以参考常规技术中的相关方法,本申请实施例这里不予赘述。上述图像拍摄场景检测算法的具体示例可以参考常规技术中的相关算法,本申请实施例这里不予赘述。
在一些实施例中,如图7A所示,S503可以包括S503a。
S503a:手机将多个第三LUT中第一场景对应的第三LUT,确定为第T帧图像(即第一图像)的第一LUT。
本申请实施例这里以第T帧第一图像是图6中的(a)所示的第一图像602为例,结合图7B介绍手机执行S502-S503(包括S503a),确定第一LUT的方法。
如图7B所示,手机可以对第一图像602执行场景检测,识别出第一图像602对应的第一场景(如人物场景)。然后,手机可以执行LUT选择(即LUT Select),从多个第三LUT(如第三LUT 1、第三LUT 2和第三LUT 3等第三LUT)中选择出人物场景对应的第一LUT。
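上述“场景检测+LUT选择”的流程可以用如下示意性代码片段表达(假设性示例:其中的场景名称与LUT标识均为本文虚构,实际实现依赖于具体的场景检测算法与预置LUT):

```python
# 示意性示例:S502-S503a的LUT选择逻辑(假设性实现)
PRESET_LUTS = {              # 多个第三LUT(预置LUT),以拍摄场景为索引(假设的场景集合)
    "人物场景": "LUT_portrait",
    "美食场景": "LUT_food",
    "风景场景": "LUT_landscape",
}
DEFAULT_LUT = "LUT_default"  # 预设LUT(假设的标识)

def select_lut(scene):
    """将多个第三LUT中第一场景对应的第三LUT确定为第一LUT;未识别场景时退回预设LUT。"""
    return PRESET_LUTS.get(scene, DEFAULT_LUT)
```

例如,场景检测识别出“人物场景”时,`select_lut("人物场景")`即返回人物场景对应的预置LUT标识。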
在另一些实施例中,手机在确定最终LUT时,不仅参考了当前一帧图像(即第一图像),还参考了第一图像的前一帧图像的最终LUT。这样,可以在改变LUT的过程中,实现不同LUT对应的显示效果或风格的平滑过渡,可以优化电子设备呈现的多帧预览图像的显示效果,提升用户拍照或录像过程中的视觉体验。
S503A:手机将多个第三LUT中第一场景对应的第三LUT,确定为第一图像的第四LUT。
S503B:手机计算第一图像的第四LUT和第五图像的第一LUT的加权和,得到第一LUT。其中,第五图像是第一图像的前一帧图像。手机在本次拍摄过程中采集的第1帧第一图像的前一帧图像的第三LUT是预设LUT。
其中,手机在拍照模式或录像模式下,手机的摄像头可以实时采集图像,并输出采集的每一帧图像。例如,若第一图像是手机采集的第2帧图像,则第五图像是手机采集的第1帧图像。若第一图像是手机采集的第T帧图像,则第五图像是手机采集的第T-1帧图像,T≥2,T为整数。
在一些实施例中,手机可以采用第一加权系数P 1和第二加权系数P 2,计算第T帧图像(即第一图像)的第四LUT和第T-1帧图像(即第五图像)的第一LUT的加权和,得到第T帧图像(即第一图像)的第一LUT。该第一加权系数P 1和第二加权系数P 2也可以统称为时域平滑权重。
其中,该第一加权系数P 1是第T帧图像的第四LUT的加权系数,第二加权系数P 2是第T-1帧图像的第一LUT的加权系数。上述第一加权系数P 1和第二加权系数P 2之和等于1,即P 1+P 2=1。上述第一加权系数P 1和第二加权系数P 2可以预置在手机中。
示例性的,本申请实施例中,可以将第T帧图像的第四LUT记为Q (T,2),可以将第T-1帧图像的第一LUT记为Q (T-1,3),可以将第T帧图像的第一LUT记为Q (T,3)。第0帧图像的第一LUT为预设LUT。也就是说,Q (0,3)是预先设定的值。如此,手机便可以采用以下公式(1),计算第T帧图像的第一LUT,如Q (T,3)
Q (T,3)=P 1×Q (T,2)+P 2×Q (T-1,3)    公式(1)。
例如,在T=1的情况下,Q (0,3)、第一加权系数P 1和第二加权系数P 2为已知量。因此,手机可以采用公式(1),如Q (1,3)=P 1×Q (1,2)+P 2×Q (0,3),计算第1帧图像的第一LUT,如Q (1,3)
又例如,在T=2的情况下,Q (1,3)、第一加权系数P 1和第二加权系数P 2为已知量。因此,手机可以采用公式(1),如Q (2,3)=P 1×Q (2,2)+P 2×Q (1,3),计算第2帧图像的第一LUT,如Q (2,3)
又例如,在T=3的情况下,Q (2,3)、第一加权系数P 1和第二加权系数P 2为已知量。因此,手机可以采用公式(1),如Q (3,3)=P 1×Q (3,2)+P 2×Q (2,3),计算第3帧图像的第一LUT,如Q (3,3)
又例如,在T=4的情况下,Q (3,3)、第一加权系数P 1和第二加权系数P 2为已知量。因此,手机可以采用公式(1),如Q (4,3)=P 1×Q (4,2)+P 2×Q (3,3),计算第4帧图像的第一LUT,如Q (4,3)。
如此,在T=n的情况下,Q (n-1,3)、第一加权系数P 1和第二加权系数P 2为已知量。因此,手机可以采用公式(1),如Q (n,3)=P 1×Q (n,2)+P 2×Q (n-1,3),计算第n帧图像的第一LUT,如Q (n,3)。
需要说明的是,上述第一加权系数P 1(即第T帧图像的第四LUT的加权系数)越小,第二加权系数P 2(即第T-1帧图像的第一LUT的加权系数)越大,多帧第二图像的过渡效果越平滑。
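公式(1)所示的时域平滑过程可以用如下示意性代码片段表达(假设性示例:LUT用数值列表近似表示,各帧第四LUT与预设LUT的取值均为本文虚构的演示数据):

```python
# 示意性示例:公式(1)的时域平滑,Q(T,3) = P1 × Q(T,2) + P2 × Q(T-1,3)
def blend_lut(lut_cur, lut_prev, p1, p2):
    """逐元素计算当前帧第四LUT与前一帧第一LUT的加权和(要求p1+p2=1)。"""
    assert abs(p1 + p2 - 1.0) < 1e-9
    return [p1 * a + p2 * b for a, b in zip(lut_cur, lut_prev)]

final_lut = [0.0, 0.0, 0.0]                   # 第0帧的第一LUT为预设LUT(假设取全0)
for lut4 in [[90.0, 60.0, 30.0],              # 各帧的第四LUT(假设值),逐帧递推
             [120.0, 60.0, 0.0]]:
    final_lut = blend_lut(lut4, final_lut, p1=0.5, p2=0.5)
```

可以看到,每一帧的第一LUT都由递推得到,因此第一加权系数p1越小(p2越大),前序帧的影响保留越多,多帧之间的过渡越平滑。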
本申请实施例这里以第T帧第一图像是图6中的(a)所示的第一图像602为例,结合图7D介绍手机执行S502-S503(包括S503A-S503B),确定第一LUT的方法。
如图7D所示,手机可以对第一图像602执行场景检测,识别出第一图像602对应的第一场景(如人物场景)。然后,手机可以执行LUT选择(即LUT Select),从多个第三LUT(如第三LUT 1、第三LUT 2和第三LUT 3等第三LUT)中选择出人物场景对应的第四LUT。最后,手机可以对第T帧图像(即第一图像)的第四LUT和第T-1帧图像(即第五图像)的第一LUT进行加权和(Blending),便可以得到第T帧图像的第一LUT。
在另一些实施例中,可以由用户设置第T帧图像(即第一图像)的第四LUT和第T-1帧图像(即第五图像)的第一LUT的加权系数。具体的,上述预览界面(如预览界面601、预览界面801或预览界面1003)还可以包括第一预设控件。该第一预设控件用于触发手机设置第T帧图像的第四LUT和第T-1帧图像的第一LUT的权重,即上述第一加权系数和第二加权系数。例如,如图7E中的(a)所示,预览界面701可以包括第一预设控件703,该第一预设控件703用于触发手机设置第T帧图像的第四LUT和第T-1帧图像的第一LUT的权重。该预览界面701还包括第一图像702。具体的,在上述S503B之前,本申请实施例的方法还可以包括S503'和S503〃。
S503'、手机响应于用户对该第一预设控件的点击操作,显示第一设置项和第二设置项。
其中,该第一设置项用于设置第T帧图像的第四LUT的第一加权系数,第二设置项用于设置第T-1帧图像的第一LUT的第二加权系数。
例如,响应于用户对图7E中的(a)所示的第一预设控件703的点击操作,手机可显示图7E中的(b)所示的预览界面704。该预览界面704包括第一预设控件705、第一图像706、第一设置项707和第二设置项708。该第一设置项707用于设置第T帧图像的第四LUT的第一加权系数。该第二设置项708用于设置第T-1帧图像的第一LUT的第二加权系数。其中,第一预设控件705与第一预设控件703处于不同的状态。如第一预设控件705处于开启状态,第一预设控件703处于关闭状态。
在一些实施例中,上述预览界面(如预览界面601、预览界面801或预览界面1003)可以包括上述第一预设控件,也可以不包括上述第一预设控件。在该实施例中,手机可以接收用户在预览界面输入的第一预设操作。上述S503'可以替换为:手机响应于用户在预览界面的第一预设操作,在预览界面显示第一设置项和第二设置项。例如,该第一预设操作可以为用户在手机的显示屏(如触摸屏)输入的L形手势、S形手势或者√形手势等任一种预设手势。又例如,该第一预设操作可以是用户对手机的第一物理按键的点击操作。该第一物理按键可以是手机中的一个物理按键,或者至少两个物理按键的组合按键。
S503〃、手机响应于用户对第一设置项和/或第二设置项的设置操作,将用户设置的第一加权系数作为第T帧图像的第四LUT的加权系数,将用户设置的第二加权系数作为第T-1帧图像的第一LUT的加权系数。该第一加权系数和第二加权系数可以统称为时域平滑权重。
其中,用户设置的加权系数(包括第一加权系数和第二加权系数)不同,则手机采用用户设置的加权系数计算得到的第T帧图像的第一LUT也不同。采用不同第T帧图像的第一LUT处理同一第一图像,可以得到不同的显示效果。在一些实施例中,手机还可以显示用户调整第一加权系数和第二加权系数后,采用第T帧图像的第一LUT处理后的显示效果。
例如,图7E中的(b)所示的第一设置项707对应的第一加权系数、图7F中的(a)所示的第一设置项710对应的第一加权系数、图7F中的(b)所示的第一设置项713对应的第一加权系数均不同。并且,图7E中的(b)所示的第二设置项708对应的第二加权系数、图7F中的(a)所示的第二设置项711对应的第二加权系数、图7F中的(b)所示的第二设置项714对应的第二加权系数均不同。因此,图7E中的(b)所示的预览图像706、图7F中的(a)所示的预览图像709和图7F中的(b)所示的预览图像712的显示效果均不同。如此,用户便可以根据调整后的显示效果,设置合适的加权系数。图7F中的(c)所示的715为采用图7F中的(b)所示的权重(即加权系数)确定的LUT处理后的图像。
示例性的,假设用户设置的第一加权系数为P 1',第二加权系数为P 2'。在该实施例中,可以将第T帧图像的第四LUT记为Q (T,2),可以将第T-1帧图像的第一LUT记为Q (T-1,3),可以将第T帧图像的第一LUT记为Q (T,3)。第0帧图像的第一LUT为预设LUT。也就是说,Q (0,3)是预先设定的值。如此,手机便可以采用以下公式(2),计算第T帧图像的第一LUT,如Q (T,3)
Q (T,3)=P 1'×Q (T,2)+P 2'×Q (T-1,3)    公式(2)。
例如,在T=1的情况下,Q (0,3)、第一加权系数P 1'和第二加权系数P 2'为已知量。因此,手机可以采用公式(2),如Q (1,3)=P 1'×Q (1,2)+P 2'×Q (0,3),计算第1帧图像的第一LUT,如Q (1,3)
又例如,在T=2的情况下,Q (1,3)、第一加权系数P 1'和第二加权系数P 2'为已知量。因此,手机可以采用上述公式(2),如Q (2,3)=P 1'×Q (2,2)+P 2'×Q (1,3),计算第2帧图像的第一LUT,如Q (2,3)。
需要说明的是,手机拍摄或录像的过程中,用户随时可以触发手机执行上述S503'和S503〃,重新设置第一加权系数和第二加权系数。例如,假设T=2之后,T=3之前,将第一加权系数设置为P 1“,第二加权系数设置为P 2“。之后,手机可以采用以下公式(3),计算第T帧图像的第一LUT:Q (T,3)=P 1“×Q (T,2)+P 2“×Q (T-1,3)    公式(3)。
例如,在T=3的情况下,Q (2,3)、第一加权系数P 1“和第二加权系数P 2“为已知量。因此,手机可以采用上述公式(3),如Q (3,3)=P 1“×Q (3,2)+P 2“×Q (2,3),计算第3帧图像的第一LUT,如Q (3,3)
又例如,在T=4的情况下,Q (3,3)、第一加权系数P 1“和第二加权系数P 2“为已知量。因此,手机可以采用上述公式(3),如Q (4,3)=P 1“×Q (4,2)+P 2“×Q (3,3),计算第4帧图像的第一LUT,如Q (4,3)。
需要说明的是,上述第一加权系数(即第T帧图像的第四LUT的加权系数)越小,第二加权系数(即第T-1帧图像的第一LUT的加权系数)越大,多帧第二图像的过渡效果越平滑。
S504、手机根据第一LUT对第一图像进行处理得到第二图像,并显示所述第二图像。该第二图像的显示效果与第一图像的第一LUT对应。
示例性的,在上述应用场景(1)中,以第一图像是图6中的(a)所示的第一图像602 为例。手机执行S504,可以得到图6中的(b)所示的第二图像604,并显示图6中的(b)所示的预览界面603。该预览界面603包括采用第T帧图像的第一LUT处理得到的第二图像604。针对同一张图像,未采用LUT处理过的图像的显示效果与采用LUT处理过的图像的显示效果不同。例如,图6中的(a)所示的第一图像602未采用LUT处理过,图6中的(b)所示的第二图像604是采用LUT处理过的图像;第一图像602的显示效果与第二图像604的显示效果不同。本申请实施例中所述的图像的“显示效果”是指图像被显示屏显示后,可以被人眼观察到的图像效果。响应于用户对图6中的(b)所示的“拍摄快门”的点击操作,手机可以保存该第二图像604,显示图6中的(c)所示的拍照的预览界面605。该拍照的预览界面605包括预览图像606。
例如,本申请实施例这里结合图7D介绍S504。手机可以执行S504,采用图7D所示的时域平滑权重(包括上述第一加权系数和第二加权系数),计算第T帧图像的第四LUT和第T-1帧图像的第一LUT的加权和,得到图7D所示的第T帧第一LUT。然后,手机可以采用图7D所示的第T帧第一LUT,对摄像头采集的预览图像进行图像处理得到图7D所示的第二图像604。
示例性的,在上述应用场景(2)中,以第一图像是图8中的(a)所示的第一图像802为例。手机执行S504,可以得到图8中的(b)所示的第二图像804,并显示图8中的(b)所示的预览界面803。该预览界面803包括采用第T帧图像的第一LUT处理得到的第二图像804。其中,图8中的(b)所示的第二图像804的显示效果与图8中的(a)所示的第一图像802的显示效果不同。
手机拍照过程中,手机的摄像头的取景界面可能会发生较大变化。例如,用户可能会移动手机,使手机的取景内容发生变化。又例如,用户可能会切换手机的前后置摄像头,使手机的取景内容发生变化。如果手机的取景内容发生较大变化,执行本方案,手机的显示效果/风格可能会随着取景内容的变化而发生变化。
具体的,在S504之后,手机可以采集第三图像,该第三图像为手机的摄像头采集的图像,该第三图像包括第二拍摄对象;手机确定第三图像对应的第二场景,该第二场景用于标识第二拍摄对象对应的场景;手机根据第二场景确定第二LUT;手机根据第二LUT对第三图像进行处理得到第四图像,并显示第四图像。该第四图像的显示效果与第二LUT对应。
例如,假设图8中的(b)所示预览图像804是前置摄像头采集的图像。手机响应于用户对图8中的(b)所示的摄像头切换选项的点击操作,可以切换使用后置摄像头采集图像,如手机可显示图9中的(a)所示的录像的取景界面901。该录像的取景界面901包括预览图像902。预览图像902可以是根据摄像头采集的第三图像进行处理得到的。由于预览图像902与预览图像804的图像内容发生了较大变化;因此,预览图像902与预览图像804的拍摄场景也可能发生了较大变化。例如,预览图像804的拍摄场景为人物场景(即第一场景),预览图像902的拍摄场景可能为美食场景(即第二场景)。如此,手机则可以自动调整LUT。例如,手机可以显示图9中的(b)所示的录像的取景界面903。该录像的取景界面903包括预览图像(可作为第四图像)904。其中,预览图像904处理时所采用的LUT与预览图像902处理时所采用的LUT不同;因此,预览图像904的显示效果与预览图像902的显示效果不同。
示例性的,在上述应用场景(3)中,以第一图像是图10中的(b)所示预览界面1003中的第一图像1004为例。手机执行S504,可以得到图10中的(c)所示的第二图像1006,并显示图10中的(b)所示的预览界面1005。该预览界面1005包括采用第T帧图像的第一LUT处理得到的第二图像1006。第二图像1006的显示效果与第一图像1004的显示效果不同。
本申请实施例提供的图像处理方法中,手机可以确定摄像头采集的一帧第一图像对应的场景(即第一场景)。然后,手机可以确定该第一场景对应的第一LUT。最后,手机可以采用这一帧图像的第一LUT,对该第一图像进行图像处理得到第二图像,并显示该第二图像。其中,第二图像的显示效果与第一LUT对应的显示效果相同。
采用本方案,手机在拍照或录像过程中,可以根据手机周期性获取的每一帧图像动态调整LUT。这样,在拍照或录像过程中,便可以呈现出不同LUT对应的显示效果或风格,可以丰富拍照或录像得到的显示效果。
并且,手机在确定最终LUT时,不仅参考了当前一帧图像,还参考了前一帧图像的最终LUT。这样,可以在改变LUT的过程中,实现不同LUT对应的显示效果或风格的平滑过渡,可以优化手机呈现的多帧预览图像的显示效果,提升用户拍照或录像过程中的视觉体验。
需要说明的是,摄像头采集的图像可能不只包括一种拍摄场景的图像,可能包括多种拍摄场景(称为复杂的拍摄场景)的图像。例如,如图9中的(a)所示,预览图像902中包括人物的图像、美食的图像和建筑的图像。在这种情况下,如果手机执行S503所示的方法,则只能将第一图像的第一场景对应的一个第三LUT作为第一LUT;或者,只能将第一图像的第一场景对应的一个第三LUT作为第四LUT来确定第一LUT。也就是说,在上述复杂的拍摄场景中,采用S503所示的方法,第一LUT只参考了第一图像的第一场景对应的一个第三LUT,而没有参考复杂的拍摄场景中除第一场景之外的其他拍摄场景对应的第三LUT。这样,可能会影响手机的显示效果。
基于此,在另一些实施例中,手机可以将第T帧图像(即第一图像)作为预设AI模型(如预设AI模型a)的输入,运行预设AI模型得到上述多个第三LUT的权重。然后,手机可以计算该多个第三LUT的加权和,便可以得到第一LUT。具体的,如图11A所示,上述S502-S503可以替换为S1101-S1102。
S1101、手机将第T帧图像(即第一图像)作为输入,运行预设AI模型a,得到多个第三LUT的多个第三加权系数。该多个第三加权系数之和为1,多个第三LUT与多个第三加权系数一一对应。
其中,上述预设AI模型a可以是用于进行LUT权重学习的神经网络模型。例如,该预设AI模型a可以是以下任一种神经网络模型:VGG-net、Resnet和Lenet。本申请实施例中,预设AI模型a的训练过程可以包括Sa和Sb。
Sa、手机获取多组数据对,每组数据对包括第六图像和第七图像,第六图像是处理第七图像得到的满足预设条件的图像。
其中,该预设条件具体可以为:处理后的显示效果满足预先设定的标准显示效果。也就是说,上述第六图像相当于标准图,第七图像是未处理的原图。其中,上述第六图像可以是对第七图像进行PS(Photoshop)处理得到的。应注意,上述多组数据对可以包括多个不同拍摄场景下的数据对。
Sb、手机将第七图像和第六图像作为输入样本,训练预设AI模型a,使得预设AI模型a具备确定采用何种权重对多个第三LUT求加权和得到的LUT处理该第七图像能够得到第六图像的显示效果的能力。
示例性的,手机将第七图像和第六图像作为输入样本输入预设AI模型a后,预设AI模型a可以重复执行以下操作(1)-操作(2),直至预设AI模型a处理第七图像得到的第八图像达到第六图像的显示效果,则表示预设AI模型a具备了上述能力。
操作(1):第七图像作为输入(Input),预设AI模型a采用多个第三LUT的权重,对第七图像(Input)进行处理得到第八图像(Output)。预设AI模型a第一次对第七图像(Input)进行处理得到第八图像(Output)时,所采用的权重是默认权重。该默认权重包括多个默认加权系数。多个默认加权系数与多个第三LUT一一对应。该多个默认加权系数预先配置在手机中。
操作(2):预设AI模型a采用梯度下降法,对比第八图像(Output)与第六图像(即标准图),更新操作(1)中的权重。
需要说明的是,开始训练预设AI模型a的时候,上述多个默认加权系数可能都是相同的。随着训练的进行,预设AI模型a会逐渐调整多个第三LUT的权重,学习到确定采用何种权重对多个第三LUT求加权和得到的LUT处理该第七图像能够得到第六图像的显示效果的能力。
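上述操作(1)-操作(2)的迭代过程可以用如下高度简化的示意性代码片段表达(假设性示例:为便于演示,把每个第三LUT简化成一个标量增益,用解析MSE梯度代替神经网络的反向传播;其中的增益值、像素值均为本文虚构):

```python
# 示意性示例:学习多个"第三LUT"的权重,使加权后的LUT处理第七图像(Input)
# 逼近第六图像(标准图)的效果(假设性实现,非本申请的实际训练方法)
luts = [0.5, 1.0, 2.0]                      # 3个"第三LUT"(假设:标量增益)
x = [10.0, 20.0, 30.0]                      # 第七图像的像素值(假设)
target = [1.5 * xi for xi in x]             # 第六图像:相当于1.5倍增益的显示效果
w = [1 / 3, 1 / 3, 1 / 3]                   # 默认权重(默认加权系数)

for _ in range(5000):
    gain = sum(wi * li for wi, li in zip(w, luts))        # 操作(1):加权LUT处理
    grad = [sum(2 * (gain * xi - ti) * li * xi            # 操作(2):MSE对各权重的梯度
                for xi, ti in zip(x, target)) / len(x)
            for li in luts]
    w = [wi - 1e-4 * gi for wi, gi in zip(w, grad)]       # 梯度下降更新权重
    s = sum(w)
    w = [wi / s for wi in w]                              # 归一化,保持权重之和为1

learned_gain = sum(wi * li for wi, li in zip(w, luts))    # 收敛到约1.5
```

该片段仅演示“对比输出与标准图、按梯度更新权重”的训练思想;实际的预设AI模型a是神经网络(如VGG-net、Resnet、Lenet),其权重更新由框架的自动求导完成。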
S1102、手机采用多个第三加权系数,计算多个第三LUT的加权和,得到第T帧图像的第一LUT。
示例性的,本申请实施例这里以第T帧图像(即第一图像)是图9中的(a)所示的第一图像902为例,结合图11B介绍手机执行S1101-S1102,确定第T帧图像的第一LUT的方法。以及,手机执行S504,得到第二图像的方法。
首先,手机可以执行S1101,将第一图像902作为输入,运行图11B所示的预设AI模型a,便可以得到图11B所示的多个第三加权系数,该多个第三加权系数之和为1,多个第三LUT与多个第三加权系数一一对应。例如,假设图11B所示的预设AI模型a输出M个第三加权系数,M≥2,M是整数。假设M个第三加权系数中,第三LUT 1(即预置LUT 1)对应的第三加权系数为K (T,1),第三LUT 2(即预置LUT 2)对应的第三加权系数为K (T,2),第三LUT 3(即预置LUT 3)对应的第三加权系数为K (T,3),第三LUT M(即预置LUT M)对应的第三加权系数为K (T,M)
然后,手机可以执行S1102,采用上述多个第三加权系数,按照以下公式(4)计算M个第三LUT的加权和,得到第T帧图像的第一LUT。本申请实施例中,可以将第T帧图像的第一LUT记为Q (T,3),可以将第三LUT m记为Q (T,m,1)。
Q (T,3)=K (T,1)×Q (T,1,1)+K (T,2)×Q (T,2,1)+…+K (T,M)×Q (T,M,1)    公式(4)。
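公式(4)所示的加权和计算可以用如下示意性代码片段表达(假设性示例:LUT用数值列表近似表示,各LUT表项与系数均为本文虚构的演示数据):

```python
# 示意性示例:公式(4),采用M个第三加权系数计算M个第三LUT的加权和
def weighted_lut(luts, coeffs):
    """Q(T,3) = Σ K(T,m) × Q(T,m,1),多个第三加权系数之和为1。"""
    assert abs(sum(coeffs) - 1.0) < 1e-9
    return [sum(k * lut[i] for k, lut in zip(coeffs, luts))
            for i in range(len(luts[0]))]

lut1 = [100.0, 0.0, 0.0]        # 三个第三LUT的表项(假设值)
lut2 = [0.0, 100.0, 0.0]
lut3 = [0.0, 0.0, 100.0]
first_lut = weighted_lut([lut1, lut2, lut3], [0.5, 0.25, 0.25])   # 即第一LUT
```

对于包括多种拍摄场景的复杂图像,各场景对应的第三LUT都以相应权重参与加权和,从而得到兼顾多个场景的第一LUT。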
之后,手机可以执行S504,采用图11B所示的第T帧图像的第一LUT,对第一图像902进行图像处理得到图11B所示的第二图像904。
在该实施例中,针对复杂的拍摄场景,手机确定第T帧图像的第一LUT,不仅参考了 第一图像的第一场景对应的一个第三LUT,还参考了多个第三LUT中除第一场景之外的其他拍摄场景对应的第三LUT。这样,可以提升手机的显示效果。
在另一些实施例中,手机在确定最终LUT时,不仅参考了当前一帧图像(即第一图像),还参考了第一图像的前一帧图像的最终LUT。这样,可以在改变LUT的过程中,实现不同LUT对应的显示效果或风格的平滑过渡,可以优化电子设备呈现的多帧预览图像的显示效果,提升用户拍照或录像过程中的视觉体验。
具体的,S1102可以包括:手机采用多个第三加权系数,计算多个第三LUT的加权和,得到第T帧图像的第四LUT;手机计算第T帧图像(即第一图像)的第四LUT与第T-1帧图像(即第五图像)的第一LUT的加权和,得到第T帧图像的第一LUT。请参考图11C,其示出本实施例中手机执行S1101-S1102确定第T帧图像的第一LUT的方法;以及手机执行S504得到第二图像的方法原理示意图。
在另一些实施例中,手机可以将第T帧图像(即第一图像)和第一图像的场景检测结果均作为AI模型(如预设AI模型b)的输入,运行AI模型得到上述多个第三LUT的权重。然后,手机可以计算该多个第三LUT的加权和,便可以得到第一LUT。具体的,如图12A所示,S503可以替换为S1201-S1202。
S1201、手机将第一场景的指示信息和第一图像(即第T帧图像)作为输入,运行预设AI模型b,得到多个第三LUT的多个第三加权系数。该多个第三加权系数之和为1,该多个第三LUT与多个第三加权系数一一对应。
其中,上述预设AI模型b可以是用于进行LUT权重学习的神经网络模型。例如,该预设AI模型b可以是以下任一种神经网络模型:VGG-net、Resnet和Lenet。本申请实施例中,预设AI模型b的训练过程可以包括Si、Sii和Siii。
Si、手机获取多组数据对,每组数据对包括第六图像和第七图像,第六图像是处理第七图像得到的满足预设条件的图像。
其中,Si与上述Sa相同,本申请实施例这里不予赘述。
Sii、手机识别第七图像,确定第七图像对应的第三场景。
其中,手机识别第七图像确定第七图像对应的第三场景的方法,可以参考手机识别第一图像对应的第一场景的方法,本申请实施例这里不予赘述。
Siii、手机将第七图像和第六图像,以及识别第三场景的指示信息作为输入样本,训练预设AI模型b,使得预设AI模型b具备确定采用何种权重对多个第三LUT求加权和得到的LUT处理第七图像能够得到第六图像的显示效果的能力。
需要说明的是,与上述预设AI模型a不同的是,预设AI模型b的输入样本增加了第七图像对应的第三场景的指示信息。该预设AI模型b的训练原理与上述预设AI模型a的训练原理相同。不同的是,第七图像对应的第三场景的指示信息可以更加明确的指示第七图像对应的拍摄场景。
应理解,如果识别到第七图像的拍摄场景为第三场景,则表示该第七图像是第三场景的图像的可能性较高。那么,将第三场景对应的第三LUT的加权系数设置为较大值,有利于提升显示效果。由此可见,该第三场景的指示信息可以对预设AI模型b的训练起到引导的作用,引导预设AI模型b向倾向于该第三场景的方向训练。这样,可以加速预设AI模型b的收敛,减少预设AI模型b的训练次数。
S1202、手机采用多个第三加权系数,计算多个第三LUT的加权和,得到第T帧图像(即第一图像)的第一LUT。
示例性的,本申请实施例这里以第T帧图像(即第一图像)是图9中的(a)所示的第一图像902为例,结合图12B介绍手机执行S1201-S1202,确定第T帧图像的第一LUT的方法。以及,手机执行S504,得到第二图像的方法。
首先,手机可以执行S502,对第T帧图像(即第一图像)902进行场景检测,得到图12B所示的第一图像902对应的第一场景。
然后,手机可以执行S1201,将第一图像902和第一场景的指示信息作为输入,运行图12B所示的预设AI模型b,便可以得到图12B所示的多个第三加权系数。该多个第三加权系数之和为1,多个第三LUT与多个第三加权系数一一对应。例如,假设图12B所示的预设AI模型b输出M个第三加权系数,M≥2,M是整数。手机可以执行S1202,采用多个第三加权系数,计算M个第三LUT的加权和,得到第T帧图像的第一LUT。之后,手机可以执行S504,采用图12B所示的第T帧第一LUT,对第一图像902进行图像处理得到图12B所示的第二图像904。
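预设AI模型b与预设AI模型a的区别在于输入增加了第一场景的指示信息。这一指示信息的构造可以用如下示意性代码片段表达(假设性示例:用one-hot向量表示场景指示信息并与图像特征拼接,场景集合与特征取值均为本文虚构):

```python
# 示意性示例:预设AI模型b的输入构造(假设性实现)
SCENES = ["人物场景", "美食场景", "风景场景"]   # 假设的场景集合

def build_model_b_input(image_features, scene):
    """构造模型b的输入:图像特征 + 第一场景的指示信息(one-hot编码)。"""
    indicator = [1.0 if s == scene else 0.0 for s in SCENES]
    return list(image_features) + indicator

model_input = build_model_b_input([0.2, 0.7], "美食场景")
```

场景指示信息以显式输入的方式引导模型b向对应场景倾斜,这与上文所述“加速预设AI模型b的收敛”的训练引导作用相对应。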
在该实施例中,针对复杂的拍摄场景,手机确定第T帧图像的第一LUT,不仅参考了第一图像的第一场景对应的一个第三LUT,还参考了多个第三LUT中除第一场景之外的其他拍摄场景对应的第三LUT。并且,手机确定多个第三加权系数时,还参考了第一图像。这样,可以提升手机的显示效果。
在另一些实施例中,手机在确定最终LUT时,不仅可以参考当前一帧图像(即第一图像),还参考了第一图像的前一帧图像的最终LUT。这样,可以在改变LUT的过程中,实现不同LUT对应的显示效果或风格的平滑过渡,可以优化电子设备呈现的多帧预览图像的显示效果,提升用户拍照或录像过程中的视觉体验。
具体的,S1202可以包括:手机采用多个第三加权系数,计算多个第三LUT的加权和,得到第T帧图像的第四LUT;手机计算第T帧图像(即第一图像)的第四LUT与第T-1帧图像(即第五图像)的第一LUT的加权和,得到第T帧图像的第一LUT。请参考图12C,其示出本实施例中手机执行S1201-S1202确定第T帧图像的第一LUT的方法;以及手机执行S504得到第二图像的方法原理示意图。
在另一些实施例中,用户可以调整上述预设AI模型a或预设AI模型b输出的多个第三加权系数中的至少一个第三加权系数。也就是说,手机可以接收用户对上述多个第三加权系数的调整操作,采用用户调整后的多个第三加权系数,计算上述第T帧图像的第一LUT。具体的,在上述S1102或S1202之前,本申请实施例的方法还可以包括S1301-S1302。相应的,上述S1102或S1202可以替换为S1303。例如,如图13所示,在S1202之前,本申请实施例的方法还可以包括S1301-S1302。相应的,S1202可以替换为S1303。
S1301、手机响应于用户对第二预设控件的点击操作,显示多个第三设置项。每个第三设置项对应一个第三LUT,用于设置第三LUT的第三加权系数。
具体的,上述预览界面还可以包括第二预设控件。该第二预设控件用于触发手机显示所述多个第三加权系数的多个第三设置项,以便于用户可以通过该多个第三设置项设置上述多个第三LUT的权重。
示例性的,如图14A中的(a)所示,预览界面1401包括第二预设控件1402。响应于用户对该第二预设控件1402的点击操作,如图14A中的(b)所示,手机可以在预览界面1403显示多个第三设置项1405,如“##风格(如人物场景)”设置项、“**风格(如美食场景)”设置项和“&&风格(如建筑场景)”设置项等。本申请实施例中,以第三设置项是图14A中的(b)所示的滚动条为例,介绍本申请实施例的方法。由上述实施例可知:每种拍摄风格和拍摄场景可以对应一种第三LUT。手机可以通过上述第三设置项设置对应第三LUT的权重(即加权系数)。
响应于用户对第二预设控件1402的点击操作,该第二预设控件1402的显示状态发生变化,如手机可显示图14A中的(b)所示的第二预设控件1406。第二预设控件1402对应的显示状态(如白底黑字的显示状态)用于指示第二预设控件处于关闭状态。第二预设控件1406对应的显示状态(如黑底白字的显示状态)用于指示第二预设控件处于开启状态。预览界面1403还包括第二图像1404。第二图像1404的显示效果为:采用多个第三设置项1405所示的多个第三加权系数进行加权和计算,最终得到的第T帧第一LUT处理第一图像得到的显示效果。
在一些实施例中,上述预览界面可以包括上述第二预设控件,也可以不包括上述第二预设控件。在该实施例中,手机可以接收用户在预览界面输入的第二预设操作。上述S1301可以替换为:手机响应于用户在预览界面的第二预设操作,在预览界面显示多个第三设置项。例如,该第二预设操作可以为用户在手机的显示屏(如触摸屏)输入的L形手势、S形手势或者√形手势等任一种预设手势。该第二预设操作对应的预设手势与第一预设操作对应的预设手势不同。又例如,该第二预设操作可以是用户对手机的第二物理按键的点击操作。该第一物理按键可以是手机中的一个物理按键,或者至少两个物理按键的组合按键。该第二物理按键与上述第一物理按键不同。
S1302、手机响应于用户对多个第三设置项中一个或多个第三设置项的设置操作,更新对应的第三加权系数。
例如,手机可以接收用户对图14A中的(b)所示的多个第三设置项1405的设置操作,显示图14B中的(a)所示的预览界面1407。该预览界面1407包括多个第三设置项1409。该多个第三设置项1409所示的多个第三加权系数与多个第三设置项1405所示的多个第三加权系数不同。也就是说,手机响应于用户对多个第三设置项1405的设置操作,将多个第三加权系数由多个第三设置项1405所示的第三加权系数更新为多个第三设置项1409所示的第三加权系数。
其中,预览界面1407还包括第二图像1408。第二图像1408的显示效果为:采用多个第三设置项1409所示的多个第三加权系数进行加权和计算,最终得到的第T帧第一LUT处理第一图像得到的显示效果。对比图14B中的(a)和图14A中的(b)可知:第二图像1408的显示效果与第二图像1404的显示效果不同。
又例如,手机可以接收用户对图14B中的(a)所示的多个第三设置项1409的设置操作,显示图14B中的(b)所示的预览界面1410。该预览界面1410包括多个第三设置项1412。该多个第三设置项1412所示的多个第三加权系数与多个第三设置项1409所示的多个第三加权系数不同。也就是说,手机响应于用户对多个第三设置项1409的设置操作,将多个第三加权系数由多个第三设置项1409所示的第三加权系数更新为多个第三设置项1412所示的第三加权系数。
其中,预览界面1410还包括第二图像1411。第二图像1411的显示效果为:采用多个第三设置项1412所示的多个第三加权系数进行加权和计算,最终得到的第T帧第一LUT处理第一图像得到的显示效果。对比图14B中的(b)和图14B中的(a)可知:第二图像1411的显示效果与第二图像1408的显示效果不同。
需要说明的是,手机执行S1302之后,手机可能会接收到用户对多个第三设置项中一个或多个第三设置项的设置操作。手机更新后的多个第三加权系数之和不一定为1。
其中,用户可以通过调整上述任一个第三设置项,实时调整上述多个第三加权系数。并且,用户可以观察调整多个第三加权系数后第二图像的显示效果,为多个第三LUT设置合适的加权系数。
在一些实施例中,手机可以接收用户对图14B中的(b)所示的第二预设控件1406的点击操作。响应于用户对第二预设控件1406的点击操作,手机可以隐藏上述多个第三设置项,显示图14B中的(c)所示的预览界面1413。该预览界面1413包括第二预设控件1402和第二图像1414。
S1303、手机采用更新后的多个第三加权系数,计算多个第三LUT的加权和,得到第T帧图像(即第一图像)的第一LUT。
示例性的,本申请实施例这里结合图15A介绍手机执行S1301-S1303,确定第T帧图像的第一LUT的方法。以及,手机执行S504,得到第二图像的方法。
手机将摄像头采集的第一图像作为输入执行S1101或S1201之后,便可以得到图15A所示的多个第三加权系数,如预设AI模型a或预设AI模型b输出的多个第三加权系数。手机可以执行S1301-S1302,采用用户自定义的第三加权系数更新上述多个第三加权系数,得到更新的多个第三加权系数。然后,手机可以执行S1303,采用更新的多个第三加权系数,按照以下公式(5)计算M个第三LUT的加权和,得到第T帧图像的第一LUT。本申请实施例中,可以将第T帧图像的第一LUT记为Q (T,3),可以将第三LUT m记为Q (T,m,1)。
Q (T,3)=K' (T,1)×Q (T,1,1)+K' (T,2)×Q (T,2,1)+…+K' (T,M)×Q (T,M,1)    公式(5)。
其中,K' (T,m)是第三LUT m(即预置LUT m)更新后的第三加权系数。
之后,手机可以执行S504,采用图15A所示的第T帧图像的第一LUT,对第一图像进行图像处理得到图15A所示的第二图像1411。
在该实施例中,针对复杂的拍摄场景,手机不仅可以通过预设AI模型a或预设AI模型b确定多个第三LUT的加权系数,还可以为用户提供调整该多个第三LUT的加权系数的服务。如此,手机便可以根据按照用户调整后的加权系数计算第T帧图像的第四LUT。这样,手机可以按照用户的需求拍摄出用户想要的照片或者视频,可以提升用户的拍摄体验。
在另一些实施例中,手机在确定最终LUT时,不仅参考了当前一帧图像(即第一图像),还参考了第一图像的前一帧图像的最终LUT。这样,可以在改变LUT的过程中,实现不同LUT对应的显示效果或风格的平滑过渡,可以优化电子设备呈现的多帧预览图像的显示效果,提升用户拍照或录像过程中的视觉体验。
具体的,S1303可以包括:手机采用多个第三加权系数,计算多个第三LUT的加权和, 得到第T帧图像的第四LUT;手机计算第T帧图像(即第一图像)的第四LUT与第T-1帧图像(即第五图像)的第一LUT的加权和,得到第T帧图像的第一LUT。请参考图15B,其示出本实施例中手机执行S1301-S1303确定第T帧图像的第一LUT的方法;以及手机执行S504得到第二图像的方法原理示意图。
在另一些实施例中,用户可以在手机中新增LUT。例如,假设手机中预置了M个第三LUT。那么,手机可以响应于用户新增LUT的操作,在手机中增设第M+1个第三LUT、第M+2个第三LUT等。具体的,本申请实施例的方法还可以包括S1601-S1603。
S1601、响应于用户的第二预设操作,手机显示第三预设控件。该第三预设控件用于触发手机新增LUT(即LUT对应的显示效果)。
其中,响应于上述第二预设操作,手机不仅可以显示多个第三设置项,还可以显示第三预设控件。例如,响应于第二预设操作,手机可显示图16A中的(a)所示的预览界面1601。该录像的预览界面1601包括第一图像1602和第三预设控件1603。该第三预设控件1603用于触发手机新增LUT,即新增LUT对应的显示效果。
S1602、响应于用户对第三预设控件的点击操作,手机显示一个或多个第四设置项,每个第四设置项对应一种第五LUT,每种第五LUT对应一种拍摄场景下的显示效果,该第五LUT与第三LUT不同。
例如,响应于用户对图16A中的(a)所示的第三预设控件1603的点击操作,手机可显示图16A中的(b)所示的预览界面1604。该预览界面1604包括一个或多个第四设置项,如“%%风格”设置项、“@@风格”设置项、“&^风格”设置项和“^^风格”设置项等。每个第四设置项对应一个第五LUT。
S1603. In response to the user's selection operation on any fourth setting item, the mobile phone saves the fifth LUT corresponding to the fourth setting item selected by the user.
By way of example, in response to the user's selection operation on the "@@style" setting item shown in (b) of FIG. 16A, the mobile phone may save the fifth LUT corresponding to the "@@style" setting item. That is, the fifth LUT corresponding to the "@@style" setting item can serve as a third LUT, used when the mobile phone performs S503 to determine the first LUT of the T-th frame image.
For example, in response to the user's tap operation on the "OK" button shown in (b) of FIG. 16A, the mobile phone may display the preview interface 1605 shown in (c) of FIG. 16A. Compared with the preview interface 1601 shown in (a) of FIG. 16A, the preview interface 1605 shown in (c) of FIG. 16A further includes the fourth setting item corresponding to "@@style".
In some embodiments, each of the above fourth setting items further includes a preview image processed with the corresponding fifth LUT, used to present the display effect corresponding to that fifth LUT. For example, as shown in (b) of FIG. 16A, the "%%style", "@@style", "&^style", and "^^style" setting items each display a preview image processed with the corresponding fifth LUT.
It should be noted that the above fifth LUTs may be saved in the mobile phone in advance, but those fifth LUTs are not yet applied in the camera application of the mobile phone. After the mobile phone performs S1601-S1603, the fifth LUT selected by the user can be applied in the camera application of the mobile phone. For example, the fifth LUT corresponding to the "@@style" setting item can serve as a third LUT, used when the mobile phone performs S503 to determine the first LUT of the T-th frame image.
In some other embodiments, the mobile phone does not provide the above plurality of fifth LUTs for the user to choose from; instead, the user sets the required LUT personally. In this embodiment, in response to the user's tap operation on the third preset control, the mobile phone may display a fourth interface. The fourth interface includes three adjustment options for the RGB LUT parameters, and the three adjustment options are used to set the new LUT. For example, in response to the user's tap operation on the third preset control 1603 shown in (a) of FIG. 16A, the mobile phone may display the fourth interface 1607 shown in (a) of FIG. 16B. The fourth interface 1607 includes three adjustment options 1608.
The mobile phone may receive the user's adjustment operations on the three adjustment options 1608 and, in response to the user's adjustment operations, save the new LUT set by the user. For example, the mobile phone may receive the user's adjustment operations on the three adjustment options 1608 and display the fourth interface 1609 shown in (b) of FIG. 16B. The fourth interface 1609 includes three adjustment options 1610. The LUT corresponding to the three adjustment options 1610 is different from the LUT corresponding to the three adjustment options 1608. In response to the user's tap operation on the "OK" button shown in (b) of FIG. 16B, the mobile phone may save the LUT corresponding to the three adjustment options 1610 (i.e., the new LUT).
It should be noted that a LUT (also called a 3D LUT) is a rather complex three-dimensional lookup table. Setting up a LUT involves the adjustment of many parameters (such as brightness and color), and manual setting can hardly be refined down to every single parameter of a LUT. Therefore, in this embodiment of the present application, a global adjustment approach can be used to provide the user with the function of adding a LUT. That is, the above three adjustment options 1608 for the RGB LUT parameters and the three adjustment options 1610 for the RGB LUT parameters are LUT adjustment options that support global adjustment.
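To illustrate what such a three-dimensional lookup table is, the sketch below builds an identity cube and performs a lookup; the grid size of 17 and the nearest-grid-point lookup are assumptions for illustration (the patent does not specify a grid size, and production pipelines typically interpolate trilinearly):

```python
import numpy as np

# An N x N x N grid mapping an input RGB value to an output RGB value.
N = 17  # assumed grid size; not specified in the text

# Identity cube: the output value equals the input value at every grid point.
grid = np.linspace(0.0, 1.0, N)
r, g, b = np.meshgrid(grid, grid, grid, indexing="ij")
identity_lut = np.stack([r, g, b], axis=-1)  # shape (N, N, N, 3)

def lookup_nearest(lut, rgb):
    """Map an RGB triple in [0, 1] to the nearest LUT grid point."""
    n = lut.shape[0]
    idx = tuple(int(round(c * (n - 1))) for c in rgb)
    return lut[idx]

print(lookup_nearest(identity_lut, (0.5, 0.25, 1.0)))  # [0.5  0.25 1.  ]
```

An identity cube like this is exactly the "initial LUT" described below: because the output equals the input everywhere, it leaves images unchanged until per-channel gains are applied.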
This embodiment of the present application describes the above LUT adjustment options that support global adjustment. First, an initial LUT can be initialized, whose cube is an identity mapping (the output value is exactly the same as the input value). For example, Table 2 shows an initial LUT; the output value of the initial LUT shown in Table 2 is exactly the same as the input value, both being (10, 20, 30). Then, the progress-bar value of each of the three LUT adjustment options can be normalized. For example, "0"-"+100" can be normalized to [1.1, 10.0], and "-100"-"0" can be normalized to [0.0, 1.0]. Finally, the normalized values can be used as color channel coefficients (denoted, for example, as Rgain, Ggain, and Bgain) and multiplied onto the input value of the initial LUT to obtain the output value of the new LUT. In this way, the new LUT shown in Table 3 can be obtained from the initial LUT shown in Table 2.
Table 2
Input value (R, G, B): (10, 20, 30)    Output value (R, G, B): (10, 20, 30)
Table 3
Input value (R, G, B): (10, 20, 30)    Output value (R, G, B): (10×Rgain, 20×Ggain, 30×Bgain)
For example, suppose the RGB value of a pixel in the original image 1611 shown in (a) of FIG. 16B is (10, 20, 30), and suppose the progress-bar values of the three adjustment options shown in (b) of FIG. 16B, as set by the user, are (45, 30, 65). The mobile phone can normalize each value in (45, 30, 65) from "0"-"+100" to [1.1, 10.0], obtaining (5.0, 3.7, 5.8); that is, Rgain = 5.0, Ggain = 3.7, Bgain = 5.8. Then, the mobile phone can multiply the input value of the initial LUT by Rgain, Ggain, and Bgain respectively to obtain the output value of the new LUT. For example, the mobile phone can compute the products of the RGB value (10, 20, 30) and the corresponding gain values in (5.0, 3.7, 5.8), obtaining the RGB output value (50, 74, 174) of the new LUT shown in Table 4, where 50 = 10 × Rgain = 10 × 5.0, 74 = 20 × Ggain = 20 × 3.7, and 174 = 30 × Bgain = 30 × 5.8.
Table 4
Input value (R, G, B): (10, 20, 30)    Output value (R, G, B): (50, 74, 174)
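The normalization and gain multiplication above can be sketched as follows. The gain multiplication reproduces the (50, 74, 174) example exactly; the specific linear curve used to map slider values to gains is an assumption, since the text only gives the target ranges:

```python
def slider_to_gain(v):
    """One plausible linear mapping of a slider value to a channel gain.

    Assumed curve (the text only states the ranges):
      "0" .. "+100"  -> [1.1, 10.0]
      "-100" .. "0"  -> [0.0, 1.0]
    """
    if v > 0:
        return 1.1 + (v / 100.0) * (10.0 - 1.1)
    return (v + 100.0) / 100.0

def apply_gains(rgb, gains):
    """Multiply each channel of an initial-LUT input value by its
    per-channel gain (Rgain, Ggain, Bgain) to get the new LUT output."""
    return tuple(round(c * g) for c, g in zip(rgb, gains))

# Using the gains quoted in the worked example above:
print(apply_gains((10, 20, 30), (5.0, 3.7, 5.8)))  # (50, 74, 174)
```

Note that this assumed linear curve does not land exactly on the quoted (5.0, 3.7, 5.8) for slider values (45, 30, 65), so the device presumably uses a different (e.g., tuned or rounded) mapping; only the range endpoints are fixed by the text.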
In some other embodiments, the above fourth interface may further include more user setting items, such as a brightness-coefficient slider, a dark-region/bright-region brightness-coefficient slider, and per-channel gray-scale curve adjustments, which are not described in detail here.
By way of example, with reference to FIG. 15A, the mobile phone may further perform S1601-S1603 to add a fifth LUT to the plurality of third LUTs in response to the user's operation of adding a LUT, as shown in FIG. 17A or FIG. 17B.
In this embodiment of the present application, the mobile phone can also add a new LUT in response to the user's operation. Generally speaking, the new LUT is set by the user according to the user's own needs, so it closely matches the user's shooting needs. Thus, when the mobile phone uses the new LUT to process images captured by the camera, photos or videos with high user satisfaction can be captured, which can improve the user's shooting experience.
In some other embodiments, the method of this embodiment of the present application may be applied in a scenario in which the mobile phone performs image processing on photos or videos in the mobile phone's gallery (or album), referred to as the post-shooting image processing scenario for short.
In the post-shooting image processing scenario, in response to the user's preset operation on any photo in the album, the mobile phone may perform S501-S504 to obtain and display the second image.
For example, the mobile phone may display the album list interface 1801 shown in (a) of FIG. 18A, and the album list interface 1801 includes preview items of multiple photos. Generally speaking, in response to the user's tap operation on the preview item 1802 of the "little girl" photo (equivalent to the first image) in the album list interface 1801, the mobile phone can directly display the "little girl" photo corresponding to the preview item 1802. In this embodiment of the present application, in response to the user's tap operation on the preview item 1802 of the "little girl" photo, the mobile phone may perform S501-S504 to obtain and display the second image 1803 shown in (b) of FIG. 18A. The photo detail page shown in (b) of FIG. 18A includes not only the second image 1803 but also an edit button 1804. The edit button 1804 is used to trigger the mobile phone to edit the second image 1803.
Alternatively, in the post-shooting image processing scenario, the user may, in the editing interface of a photo, trigger the mobile phone to perform S501-S504 to obtain and display the second image.
For example, the mobile phone may display the detail page of the photo 1805 (i.e., the first image) shown in (a) of FIG. 18B. In response to the user's tap operation on the edit button 1806 shown in (a) of FIG. 18B, the mobile phone may display the editing interface 1807 shown in (b) of FIG. 18B. The editing interface 1807 includes a "Smart AI" button 1809, a "Crop" button, a "Filter" button, and an "Adjust" button. The "Smart AI" button 1809 is used to trigger the mobile phone to adjust the LUT of the first image. The "Crop" button is used to trigger the mobile phone to crop the first image. The "Filter" button is used to trigger the mobile phone to add a filter effect to the first image. The "Adjust" button is used to trigger the mobile phone to adjust parameters of the first image such as contrast, saturation, and brightness.
In response to the user's tap operation on the "Smart AI" button 1809, the mobile phone may perform S501-S504 to obtain and display the second image 1811 shown in (c) of FIG. 18B. The editing interface shown in (c) of FIG. 18B includes not only the second image 1811 but also a save button 1810. The save button 1810 is used to trigger the mobile phone to save the second image 1811. In response to the user's tap operation on the save button 1810, the mobile phone may save the second image 1811 and display the photo detail page of the second image 1811 shown in FIG. 18C.
It should be noted that the method in which the mobile phone performs image processing on videos in the mobile phone's gallery (or album) is similar to the method in which the mobile phone performs image processing on photos in the gallery, and is not described in detail here. The difference is that the mobile phone needs to process every frame of the video.
An embodiment of the present application provides an electronic device, which may include a display screen (e.g., a touchscreen), a camera, a memory, and one or more processors. The display screen, the camera, the memory, and the processor are coupled. The memory is configured to store computer program code, and the computer program code includes computer instructions. When the processor executes the computer instructions, the electronic device can perform the functions or steps performed by the mobile phone in the above method embodiments. For the structure of the electronic device, refer to the structure of the electronic device 400 shown in FIG. 4.
An embodiment of the present application further provides a chip system. As shown in FIG. 19, the chip system 1900 includes at least one processor 1901 and at least one interface circuit 1902.
The processor 1901 and the interface circuit 1902 may be interconnected by wires. For example, the interface circuit 1902 may be configured to receive signals from another apparatus (e.g., the memory of the electronic device). For another example, the interface circuit 1902 may be configured to send signals to another apparatus (e.g., the processor 1901). By way of example, the interface circuit 1902 may read instructions stored in the memory and send the instructions to the processor 1901. When the instructions are executed by the processor 1901, the electronic device can be caused to perform the steps performed by the mobile phone in the above embodiments. Of course, the chip system may further include other discrete devices, which is not specifically limited in this embodiment of the present application.
An embodiment of the present application further provides a computer storage medium. The computer storage medium includes computer instructions that, when run on an electronic device, cause the electronic device to perform the functions or steps performed by the mobile phone in the above method embodiments.
An embodiment of the present application further provides a computer program product that, when run on a computer, causes the computer to perform the functions or steps performed by the mobile phone in the above method embodiments.
From the description of the above embodiments, those skilled in the art can clearly understand that, for convenience and brevity of description, only the division into the above functional modules is used as an example for illustration. In practical applications, the above functions can be allocated to different functional modules as needed; that is, the internal structure of the apparatus is divided into different functional modules to complete all or some of the functions described above.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative. For example, the division into modules or units is merely a logical function division; in actual implementation, there may be other division methods. For example, multiple units or components may be combined or integrated into another apparatus, or some features may be omitted or not performed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, apparatuses, or units, and may be in electrical, mechanical, or other forms.
The units described as separate components may or may not be physically separate, and a component displayed as a unit may be one physical unit or multiple physical units; that is, it may be located in one place or distributed across multiple different places. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of this application may be integrated into one processing unit, each unit may exist physically alone, or two or more units may be integrated into one unit. The above integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a readable storage medium. Based on this understanding, the technical solutions of the embodiments of this application, in essence, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for causing a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to perform all or some of the steps of the methods described in the embodiments of this application. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The above is merely specific implementations of this application, but the protection scope of this application is not limited thereto. Any variation or replacement within the technical scope disclosed in this application shall fall within the protection scope of this application. Therefore, the protection scope of this application shall be subject to the protection scope of the claims.

Claims (19)

  1. An image processing method, characterized in that the method comprises:
    an electronic device obtaining a first image, wherein the first image is an image captured by a camera of the electronic device, and the first image comprises a first photographed object;
    the electronic device determining a first scene corresponding to the first image, wherein the first scene is used to identify the scene corresponding to the first photographed object;
    the electronic device determining a first LUT according to the first scene; and
    the electronic device processing the first image according to the first LUT to obtain a second image, and displaying the second image, wherein the display effect of the second image corresponds to the first LUT.
  2. The method according to claim 1, characterized in that, after displaying the second image, the method further comprises:
    the electronic device capturing a third image, wherein the third image is an image captured by the camera of the electronic device, and the third image comprises a second photographed object;
    the electronic device determining a second scene corresponding to the third image, wherein the second scene is used to identify the scene corresponding to the second photographed object;
    the electronic device determining a second LUT according to the second scene, wherein the second LUT is different from the first LUT; and
    the electronic device processing the third image according to the second LUT to obtain a fourth image, and displaying the fourth image, wherein the display effect of the fourth image corresponds to the second LUT.
  3. The method according to claim 1 or 2, characterized in that the electronic device obtaining the first image comprises:
    the electronic device capturing the first image in a photographing preview interface of the electronic device, a preview interface before the electronic device records video, or a viewfinder interface in which the electronic device is recording video.
  4. The method according to any one of claims 1-3, characterized in that the first image is an image captured by the camera of the electronic device; or, the first image is a preview image obtained from an image captured by the camera of the electronic device.
  5. The method according to any one of claims 1-4, characterized in that the electronic device determining the first LUT according to the first scene comprises:
    the electronic device determining, among a plurality of third LUTs, the third LUT corresponding to the first scene as the first LUT of the first image;
    wherein the plurality of third LUTs are preconfigured in the electronic device and are used to process images captured by the camera of the electronic device to obtain images with different display effects, and each third LUT corresponds to a display effect in one scene.
  6. The method according to any one of claims 1-4, characterized in that the electronic device determining the first LUT according to the first scene comprises:
    the electronic device determining, among a plurality of third LUTs, the third LUT corresponding to the first scene as a fourth LUT of the first image, wherein the plurality of third LUTs are preconfigured in the electronic device and are used to process images captured by the camera of the electronic device to obtain images with different display effects, and each third LUT corresponds to a display effect in one scene; and
    the electronic device computing a weighted sum of the fourth LUT of the first image and a first LUT of a fifth image to obtain the first LUT, wherein the fifth image is the frame image preceding the first image, and the first LUT of the frame preceding the first frame image captured by the electronic device during the current shooting process is a preset LUT.
  7. The method according to claim 6, characterized in that the electronic device computing the weighted sum of the fourth LUT of the first image and the first LUT of the fifth image to obtain the first LUT comprises:
    the electronic device using a preconfigured first weighting coefficient and second weighting coefficient to compute the weighted sum of the fourth LUT of the first image and the first LUT of the fifth image to obtain the first LUT;
    wherein the first weighting coefficient is the weighting coefficient of the fourth LUT of the first image, the second weighting coefficient is the weighting coefficient of the first LUT of the fifth image, and the sum of the first weighting coefficient and the second weighting coefficient equals 1; and
    wherein the smaller the first weighting coefficient and the larger the second weighting coefficient, the smoother the transition effect across the multiple frames of the second image.
  8. The method according to claim 7, characterized in that, before the electronic device uses the preconfigured first weighting coefficient and second weighting coefficient to compute the weighted sum of the fourth LUT of the first image and the first LUT of the fifth image to obtain the first LUT, the method further comprises:
    the electronic device, in response to a first preset operation, displaying a first setting item and a second setting item, wherein the first setting item is used to set the first weighting coefficient and the second setting item is used to set the second weighting coefficient; and
    the electronic device, in response to the user's setting operation on the first setting item and/or the second setting item, using the first weighting coefficient set by the user as the weighting coefficient of the fourth LUT of the first image, and the second weighting coefficient set by the user as the weighting coefficient of the first LUT of the fifth image;
    wherein the first preset operation is a tap operation on a first preset control displayed by the electronic device, and the first preset control is used to trigger the electronic device to set the weights of the fourth LUT of the first image and the first LUT of the fifth image; or, the first preset operation is the user's press operation on a first physical button of the electronic device.
  9. The method according to any one of claims 1-4, characterized in that the electronic device determining the first LUT according to the first scene comprises:
    the electronic device taking indication information of the first scene and the first image as input and running a preset AI model to obtain a plurality of third weighting coefficients of a plurality of third LUTs, wherein the sum of the plurality of third weighting coefficients is 1, and the plurality of third LUTs are in one-to-one correspondence with the plurality of third weighting coefficients; and
    the electronic device using the plurality of third weighting coefficients to compute a weighted sum of the plurality of third LUTs to obtain the first LUT.
  10. The method according to any one of claims 1-4, characterized in that the electronic device determining the first LUT according to the first scene comprises:
    the electronic device taking indication information of the first scene and the first image as input and running a preset AI model to obtain a plurality of third weighting coefficients of a plurality of third LUTs, wherein the sum of the plurality of third weighting coefficients is 1, and the plurality of third LUTs are in one-to-one correspondence with the plurality of third weighting coefficients;
    the electronic device using the plurality of third weighting coefficients to compute a weighted sum of the plurality of third LUTs to obtain a fourth LUT of the first image; and
    the electronic device computing a weighted sum of the fourth LUT of the first image and a first LUT of a fifth image to obtain the first LUT, wherein the fifth image is the frame image preceding the first image, and the first LUT of the frame preceding the first frame image captured by the electronic device during the current shooting process is a preset LUT.
  11. The method according to claim 10, characterized in that, before the electronic device determines the first LUT according to the first scene, the method further comprises:
    the electronic device obtaining a plurality of data pairs, each data pair comprising a sixth image and a seventh image, wherein the sixth image is an image that satisfies a preset condition and is obtained by processing the seventh image;
    the electronic device recognizing the seventh image to determine a third scene corresponding to the seventh image; and
    the electronic device taking the seventh image, the sixth image, and indication information identifying the third scene as input samples to train the preset AI model, so that the preset AI model has the capability of determining with which weights the weighted sum of the plurality of third LUTs yields a LUT that, when used to process the seventh image, can achieve the display effect of the sixth image.
  12. The method according to any one of claims 9-11, characterized in that the method further comprises:
    the electronic device, in response to a second preset operation, displaying a plurality of third setting items, wherein each third setting item corresponds to one third LUT and is used to set the third weighting coefficient of that third LUT; and
    the electronic device, in response to the user's setting operation on one or more of the plurality of third setting items, updating the corresponding third weighting coefficients, wherein the electronic device uses the updated plurality of third weighting coefficients to compute the weighted sum of the plurality of third LUTs;
    wherein the second preset operation is the user's tap operation on a second preset control, and the second preset control is used to trigger the electronic device to set the weights of the plurality of third LUTs; or, the second preset operation is the user's press operation on a second physical button of the electronic device.
  13. The method according to any one of claims 1-12, characterized in that the method further comprises:
    the electronic device, in response to a third preset operation, displaying one or more fourth setting items, wherein the third preset operation is used to trigger the electronic device to add a display effect, each fourth setting item corresponds to a fifth LUT, each fifth LUT corresponds to a display effect in one shooting scene, and the fifth LUT is different from the third LUTs; and
    in response to the user's selection operation on any fourth setting item, the electronic device saving the fifth LUT corresponding to the fourth setting item selected by the user.
  14. The method according to claim 13, characterized in that the fourth setting item comprises a preview image processed with the corresponding fifth LUT, used to present the display effect corresponding to the fifth LUT.
  15. An image processing method, characterized in that the method comprises:
    an electronic device obtaining a first image, wherein the first image is an image captured by a camera of the electronic device, and the first image comprises a first photographed object;
    the electronic device taking the first image as input and running a preset artificial intelligence (AI) model to obtain a plurality of third weighting coefficients of a plurality of third color lookup tables (LUTs), wherein the sum of the plurality of third weighting coefficients is 1, and the plurality of third LUTs are in one-to-one correspondence with the plurality of third weighting coefficients;
    the electronic device using the plurality of third weighting coefficients to compute a weighted sum of the plurality of third LUTs to obtain a first LUT; and
    the electronic device processing the first image according to the first LUT to obtain a second image, and displaying the second image, wherein the display effect of the second image corresponds to the first LUT.
  16. The method according to claim 15, characterized in that the electronic device using the plurality of third weighting coefficients to compute the weighted sum of the plurality of third LUTs to obtain the first LUT comprises:
    the electronic device using the plurality of third weighting coefficients to compute the weighted sum of the plurality of third LUTs to obtain a fourth LUT of the first image; and
    the electronic device computing a weighted sum of the fourth LUT of the first image and a first LUT of a fifth image to obtain the first LUT, wherein the fifth image is the frame image preceding the first image, and the first LUT of the frame preceding the first frame image captured by the electronic device during the current shooting process is a preset LUT.
  17. The method according to claim 15 or 16, characterized in that, before the electronic device takes the first image as input and runs the preset AI model to obtain the plurality of third weighting coefficients of the plurality of third LUTs, the method further comprises:
    the electronic device obtaining a plurality of data pairs, each data pair comprising a sixth image and a seventh image, wherein the sixth image is an image that satisfies a preset condition and is obtained by processing the seventh image; and
    the electronic device taking the seventh image and the sixth image as input samples to train the preset AI model, so that the preset AI model has the capability of determining with which weights the weighted sum of the plurality of third LUTs yields a LUT that, when used to process the seventh image, can achieve the display effect of the sixth image.
  18. An electronic device, characterized in that the electronic device comprises a memory, a display screen, one or more cameras, and one or more processors; the memory, the display screen, and the cameras are coupled to the processors; wherein the cameras are configured to capture images, the display screen is configured to display the images captured by the cameras or images generated by the processors, the memory stores computer program code, and the computer program code comprises computer instructions that, when executed by the processors, cause the electronic device to perform the method according to any one of claims 1-17.
  19. A computer-readable storage medium, characterized in that it comprises computer instructions that, when run on an electronic device, cause the electronic device to perform the method according to any one of claims 1-17.
PCT/CN2022/090630 2021-07-31 2022-04-29 Image processing method and electronic device WO2023010912A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP22797244.5A EP4152741A4 (en) 2021-07-31 2022-04-29 IMAGE PROCESSING METHOD AND ELECTRONIC DEVICE

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110877402.X 2021-07-31
CN202110877402.XA CN115633250A (zh) 2021-07-31 2021-07-31 Image processing method and electronic device

Publications (2)

Publication Number Publication Date
WO2023010912A1 WO2023010912A1 (zh) 2023-02-09
WO2023010912A9 true WO2023010912A9 (zh) 2023-11-16






Legal Events

Date Code Title Description
ENP Entry into the national phase

Ref document number: 2022797244

Country of ref document: EP

Effective date: 20221108

NENP Non-entry into the national phase

Ref country code: DE