WO2023036034A1 - Image processing method and related device - Google Patents

Image processing method and related device

Info

Publication number
WO2023036034A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
initial
processing
domain
images
Prior art date
Application number
PCT/CN2022/116201
Other languages
English (en)
French (fr)
Inventor
金杰
李子荣
刘吉林
Original Assignee
荣耀终端有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 荣耀终端有限公司
Priority to EP22862315.3A (published as EP4195643A4)
Priority to US 18/026,679 (published as US20230342895A1)
Publication of WO2023036034A1

Classifications

    • G06T 5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T 5/70: Denoising; Smoothing
    • G06T 5/77: Retouching; Inpainting; Scratch removal
    • G06V 10/56: Extraction of image or video features relating to colour
    • H04N 23/12: Cameras or camera modules generating image signals from different wavelengths with one sensor only
    • H04N 23/632: Graphical user interfaces [GUI] for displaying or modifying preview images prior to image capturing
    • H04N 23/81: Camera processing pipelines for suppressing or minimising disturbance in the image signal generation
    • H04N 23/843: Demosaicing, e.g. interpolating colour pixel values
    • H04N 23/951: Computational photography systems using two or more images to influence resolution, frame rate or aspect ratio
    • H04N 25/131: Colour filter arrays [CFA] including elements passing infrared wavelengths
    • H04N 25/133: Colour filter arrays [CFA] including elements passing panchromatic light, e.g. white light
    • H04N 25/134: Colour filter arrays [CFA] based on three different wavelength filter elements
    • H04N 25/135: Colour filter arrays [CFA] based on four or more different wavelength filter elements
    • H04N 25/136: Colour filter arrays [CFA] based on four or more different wavelength filter elements using complementary colours
    • H04N 25/76: Addressed sensors, e.g. MOS or CMOS sensors
    • G06T 2207/10024: Color image
    • G06T 2207/20221: Image fusion; Image merging

Definitions

  • the present application relates to the field of image processing, in particular to an image processing method and related equipment.
  • Most CMOS image sensors currently used for visible-light imaging are traditional RGB (red, green, blue) sensors; that is, such an image sensor can only receive a red channel signal, a green channel signal and a blue channel signal.
  • This application provides an image processing method and related device. By acquiring multiple frames of initial images containing different channel information, the differences in channel information are used to perform dynamic image fusion, so as to achieve maximum restoration of image color and optimal signal-to-noise ratio performance.
  • an image processing method is provided, which is applied to an electronic device including a multispectral sensor, and the method includes: displaying a preview interface, the preview interface including a first control;
  • multiple frames of initial images are acquired, and the multiple frames of initial images contain different channel signals;
  • the multiple frames of the processed images are fused to obtain a target image.
  • the multispectral sensor refers to a multispectral sensor whose spectral response range is wider than that of an RGB sensor.
  • the embodiment of the present application provides an image processing method. By acquiring multiple frames of initial images containing information of different channels, and then processing and fusing the initial images of different channels, maximum restoration of image color and optimal signal-to-noise ratio performance can be achieved without color cast issues.
  • the separately processing each frame of initial image in the multiple frames of initial images includes:
  • the fusing of multiple frames of the processed images to obtain a target image includes:
  • each initial image undergoes front-end processing, fusion is then performed in the RAW domain, and first back-end processing is then performed on the fused image to convert it from the RAW domain to the YUV domain, whereby the target image is obtained. Since the initial images undergo a series of processing before fusion in the RAW domain, and fusion in the RAW domain preserves more detail, maximum restoration of image color and optimal signal-to-noise ratio performance can be achieved.
  • the separately processing each frame of initial image in the multiple frames of initial images includes:
  • the fusing of multiple frames of the processed images to obtain a target image includes:
  • fusion is performed in the RGB domain, and second back-end processing is then performed on the fused image to convert it from the RGB domain to the YUV domain, whereby the target image is obtained.
  • Because each initial image has undergone a series of processing and color correction, maximum restoration of image color and optimal signal-to-noise ratio performance can be achieved.
  • the separately processing each frame of initial image in the multiple frames of initial images includes:
  • the fusing of multiple frames of the processed images to obtain a target image includes:
  • the intermediate processed image is fused in the YUV domain to obtain a fused image in the YUV domain, and the fused image is the target image.
  • first back-end processing may also be referred to as first intermediate processing or second intermediate processing.
  • fusion is performed in the YUV domain to obtain the target image.
  • Because each initial image has undergone a series of processing and color correction, maximum restoration of image color and optimal signal-to-noise ratio performance can be achieved.
  • the method further includes:
  • each frame of initial images in the multiple frames of initial images is respectively processed to obtain respective corresponding processed images, and multiple frames of the processed images are fused to obtain the target image.
  • the processing is performed in one image signal processor, which can reduce the cost.
  • the method further includes:
  • each frame of the initial image in the plurality of frames of the initial image is respectively processed to obtain respective corresponding processed images.
  • the first front-end processing and the second front-end processing need to be separately performed in two image signal processors.
  • the method further includes:
  • Preprocessing the image to be preprocessed to obtain the multiple frames of initial images, where the preprocessing is used to convert the channel signals included in the image to be preprocessed.
  • the image to be preprocessed includes multiple channel signals.
  • the preprocessing can be: horizontal binning, vertical binning, v2h2 binning, or remosaic.
  • the multispectral sensor may be used to acquire only one frame of the image to be preprocessed, which contains more channel signals.
  • by preprocessing the image to be preprocessed, that is, by splitting it, multiple images containing different channel signals can be obtained (see the sketch below).
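  • As an illustration of binning-style preprocessing, the following is a minimal sketch that averages each 2×2 block of a single-channel RAW plane; simple average binning in NumPy is an assumption here, not the sensor's actual v2h2 binning or remosaic logic.

```python
import numpy as np

def bin_2x2(raw: np.ndarray) -> np.ndarray:
    """Average every 2x2 block of a single-channel RAW plane,
    halving the resolution in both directions (a simple stand-in
    for binning-style preprocessing)."""
    h, w = raw.shape
    h2, w2 = h - h % 2, w - w % 2                 # crop to even size
    blocks = raw[:h2, :w2].reshape(h2 // 2, 2, w2 // 2, 2)
    return blocks.mean(axis=(1, 3))

# A 4x4 plane becomes a 2x2 plane after binning.
plane = np.arange(16, dtype=np.float64).reshape(4, 4)
print(bin_2x2(plane))
```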
  • a multi-spectral sensor is used to acquire multiple image signals
  • the multiple frames of initial images are determined, and front-end processing is performed on each frame of the multiple frames of initial images to obtain front-end processed images in the RAW domain;
  • the front-end processed image is subjected to RAW domain fusion processing to obtain the fusion image in the RAW domain.
  • the effective pixel area in the multi-spectral sensor can be utilized to acquire multiple image signals.
  • a typical sensor contains one or several rows of pixels that do not participate in light sensing. To avoid degrading subsequent color restoration, these can be excluded, so that only the effective pixels in the light-sensitive effective pixel area of the multispectral sensor are used to acquire image signals, thereby improving the color reproduction effect.
  • the method further includes:
  • the multi-frame initial images include a first initial image and a second initial image with different channel signals
  • the following formula is used for fusion:
  • I_f(i,j) = W_ij × I_c(i,j) + (1 - W_ij) × I_r(i,j)
  • (i, j) are the pixel coordinates; I_c(i, j) is the processed image corresponding to the first initial image, I_r(i, j) is the processed image corresponding to the second initial image, W_ij is the weight assigned to the processed image corresponding to the first initial image, 1 - W_ij is the weight assigned to the processed image corresponding to the second initial image, and I_f(i, j) is the fused image.
  • the method further includes:
  • the fusion effect can be improved comprehensively from multiple aspects.
  • the method further includes:
  • Wc_ij = (GA_standard - GA_ij) / GA_standard;
  • GA_ij refers to the grayscale value corresponding to the pixel whose pixel coordinates are (i, j), and GA_standard is a preset standard grayscale value.
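  • The two formulas above can be combined into a short sketch of the weighted fusion; taking GA_ij as the per-pixel channel mean and GA_standard = 128, and clipping the weight to [0, 1], are illustrative assumptions rather than values fixed by this application.

```python
import numpy as np

def fuse(img_c: np.ndarray, img_r: np.ndarray,
         ga_standard: float = 128.0) -> np.ndarray:
    """I_f = W * I_c + (1 - W) * I_r, with the weight derived per pixel
    from grayscale values as Wc = (GA_standard - GA) / GA_standard."""
    # GA: grayscale value per pixel (channel mean is an assumption here)
    ga = img_c.mean(axis=-1) if img_c.ndim == 3 else img_c
    w = np.clip((ga_standard - ga) / ga_standard, 0.0, 1.0)
    if img_c.ndim == 3:
        w = w[..., None]              # broadcast the weight over channels
    return w * img_c + (1.0 - w) * img_r
```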
  • the front-end processing includes: at least one of dynamic dead pixel compensation, noise reduction, lens shading correction, and wide dynamic range adjustment.
  • the first back-end processing includes: color correction and conversion from RGB domain to YUV domain.
  • the second back-end processing includes: converting RGB domain to YUV domain.
  • both the first back-end processing and the second back-end processing further include: at least one of gamma correction and style transformation.
  • an electronic device including a module/unit for performing the first aspect or any method in the first aspect.
  • an electronic device including a multispectral sensor, a processor, and a memory;
  • the multi-spectral sensor is used to acquire multi-frame initial images, and the multi-frame initial images contain different channel signals;
  • said memory for storing a computer program executable on said processor
  • the processor is configured to execute the processing steps in the first aspect or any one of the methods in the first aspect.
  • the processor includes an image signal processor, and the image signal processor is configured to separately process each frame of initial image in the multiple frames of initial images to obtain respective corresponding processed images, and to fuse multiple frames of the processed images to obtain the target image.
  • the processor includes a plurality of image signal processors, and different image signal processors of the plurality of image signal processors are configured to process different initial images to obtain the corresponding processed images.
  • the multispectral sensor is also used to acquire images to be preprocessed
  • the preprocessing is used to convert the channel signals included in the image to be preprocessed.
  • the multispectral sensor is also used to acquire multiple image signals
  • the multispectral sensor is further configured to perform RAW domain fusion processing on the front-end processed image to obtain the fusion image in the RAW domain.
  • a chip including: a processor, configured to call and run a computer program from a memory, so that a device equipped with the chip executes the processing steps in the first aspect or any one of the methods in the first aspect.
  • a computer-readable storage medium stores a computer program, the computer program includes program instructions, and when the program instructions are executed by a processor, the processor is caused to perform the processing steps in the first aspect or any one of the methods in the first aspect.
  • a computer program product comprising: computer program code, which, when run by an electronic device, causes the electronic device to execute the processing steps in the first aspect or any one of the methods in the first aspect.
  • maximum restoration of image color and optimal signal-to-noise ratio performance can be achieved by acquiring multiple frames of initial images containing information of different channels, and then processing and fusing the initial images of different channels, thereby avoiding color cast problems.
  • Figure 1 is a schematic diagram of imaging of a traditional RGB CMOS sensor
  • Fig. 2 is a set of RGBY spectral response curves
  • FIG. 3 is a schematic diagram of an application scenario
  • FIG. 4 is a schematic flow diagram of an image processing method provided in an embodiment of the present application.
  • FIG. 5 is a schematic diagram of an acquired 2-frame initial image provided by an embodiment of the present application.
  • FIG. 6 is a schematic diagram of another acquired 2-frame initial image provided by the embodiment of the present application.
  • FIG. 7 is a schematic diagram of an acquired 3-frame initial image provided by an embodiment of the present application.
  • FIG. 8 is a schematic flow chart of an image processing method provided in an embodiment of the present application.
  • FIG. 9 is a schematic flowchart of another image processing method provided in the embodiment of the present application.
  • FIG. 10 is a schematic flowchart of another image processing method provided in the embodiment of the present application.
  • FIG. 11 is a schematic diagram of a first front-end processing or a second front-end processing provided by an embodiment of the present application.
  • FIG. 12 is a schematic diagram of a first back-end processing provided by an embodiment of the present application.
  • FIG. 13 is a schematic diagram of a second back-end processing provided by the embodiment of the present application.
  • FIG. 14 is a schematic flowchart of another image processing method provided in the embodiment of the present application.
  • FIG. 15 is a schematic flowchart of another image processing method provided by the embodiment of the present application.
  • FIG. 16 is a schematic flowchart of another image processing method provided by the embodiment of the present application.
  • FIG. 17 is a schematic flowchart of another image processing method provided by the embodiment of the present application.
  • FIG. 18 is a schematic flowchart of another image processing method provided by the embodiment of the present application.
  • FIG. 19 is a schematic flowchart of another image processing method provided by the embodiment of the present application.
  • FIG. 20 is a schematic flowchart of another image processing method provided in the embodiment of the present application.
  • FIG. 21 is a schematic diagram of a display interface of an electronic device provided in an embodiment of the present application.
  • FIG. 22 is a schematic diagram of a display interface of another electronic device provided by an embodiment of the present application.
  • FIG. 23 is a schematic diagram of a color restoration error provided by an embodiment of the present application.
  • Fig. 24 is a schematic diagram of the signal-to-noise ratio under the color temperature light source D65 provided by the embodiment of the present application;
  • FIG. 25 is a schematic diagram of a hardware system applicable to an electronic device of the present application.
  • Fig. 26 is a schematic diagram of a software system applicable to the electronic device of the present application.
  • FIG. 27 is a schematic structural diagram of an image processing device provided by an embodiment of the present application.
  • FIG. 28 is a schematic structural diagram of a chip provided by an embodiment of the present application.
  • "and/or" describes three possible relationships; for example, A and/or B means: A exists alone, A and B exist simultaneously, or B exists alone.
  • "plural" refers to two or more.
  • "first" and "second" are used for descriptive purposes only, and cannot be understood as indicating or implying relative importance or implicitly specifying the quantity of indicated technical features. Thus, a feature defined with "first" or "second" may explicitly or implicitly include one or more of these features. In the description of this embodiment, unless otherwise specified, "plurality" means two or more.
  • the RGB (red, green, blue) color space or RGB domain refers to a color model related to the structure of the human visual system. According to the structure of the human eye, all colors are seen as different combinations of red, green and blue.
  • YUV color space or YUV domain refers to a color coding method, Y represents brightness, U and V represent chroma.
  • RGB color space focuses on the human eye's perception of color, while the YUV color space focuses on the sensitivity of vision to brightness.
  • RGB color space and YUV color space can be converted to each other.
  • the pixel value refers to a group of color components corresponding to each pixel in the color image located in the RGB color space.
  • each pixel corresponds to a group of three primary color components, wherein the three primary color components are red component R, green component G and blue component B respectively.
  • Bayer pattern color filter array: when an image is converted from the actual scene into image data, the image sensor usually receives the information of three channel signals, namely the red channel signal, the green channel signal and the blue channel signal, and then synthesizes the information of the three channel signals into a color image; however, this scheme requires three filters at each pixel position, which is expensive and difficult to manufacture. Therefore, as shown in Figure 1, a layer of color filter array can be covered on the surface of the image sensor to obtain the information of the three channel signals.
  • a Bayer pattern color filter array refers to filters arranged in a checkerboard format. For example, the minimum repeating unit in the Bayer-format color filter array is: one filter for obtaining the red channel signal, two filters for obtaining the green channel signal, and one filter for obtaining the blue channel signal, arranged in a 2 × 2 pattern.
  • a Bayer image, that is, an image output by an image sensor based on a Bayer-format color filter array.
  • the pixels of multiple colors in this image are arranged in a Bayer pattern.
  • each pixel in the Bayer format image only corresponds to a channel signal of one color.
  • green pixels: pixels corresponding to the green channel signal
  • blue pixels: pixels corresponding to the blue channel signal
  • red pixels: pixels corresponding to the red channel signal
  • the minimum repeating unit of the Bayer format image is: one red pixel, two green pixels and one blue pixel are arranged in a 2 ⁇ 2 manner.
  • the RAW domain is a RAW color space
  • an image in the Bayer format may be referred to as an image in the RAW domain.
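  • The RGGB arrangement described above can be illustrated by simulating a RAW Bayer image from an RGB image; this minimal sketch shows the mosaic layout only, not any particular sensor's readout.

```python
import numpy as np

def rggb_mosaic(rgb: np.ndarray) -> np.ndarray:
    """Keep one channel per pixel following the 2x2 RGGB minimal
    repeating unit (R G / G B), producing a single-channel RAW image."""
    h, w, _ = rgb.shape
    raw = np.zeros((h, w), dtype=rgb.dtype)
    raw[0::2, 0::2] = rgb[0::2, 0::2, 0]   # red pixels
    raw[0::2, 1::2] = rgb[0::2, 1::2, 1]   # green pixels
    raw[1::2, 0::2] = rgb[1::2, 0::2, 1]   # green pixels
    raw[1::2, 1::2] = rgb[1::2, 1::2, 2]   # blue pixels
    return raw
```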
  • Grayscale image: a grayscale image is a single-channel image used to represent different brightness levels, where the brightest is all white and the darkest is all black. That is, each pixel in a grayscale image corresponds to a different degree of brightness between black and white.
  • For example, there may be 256 gray levels (gray level 0 to gray level 255).
  • Spectral response, also known as spectral sensitivity, represents the ability of the image sensor to convert incident light energy of different wavelengths into electrical energy. If the light energy incident on the image sensor at a certain wavelength is expressed as a number of photons, and the current generated by the image sensor and delivered to the external circuit is expressed as a number of electrons, then the ability of each incident photon to be converted into an electron delivered to the external circuit is called quantum efficiency (QE), expressed as a percentage.
  • the spectral responsivity of the image sensor depends on the quantum efficiency, as well as parameters such as wavelength and integration time.
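  • Expressed as a formula (a standard textbook definition, not one specific to this application): QE = (N_electrons / N_photons) × 100%, where N_photons is the number of photons incident at a given wavelength and N_electrons is the number of electrons delivered to the external circuit.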
  • the half-peak width refers to the peak width at half the peak height of the spectrum, also known as the half-width.
  • CMOS image sensors currently used for visible light imaging are traditional RGB sensors. Due to hardware limitations, this image sensor can only receive red channel signals, green channel signals and blue channel signals. In this way, compared with the human eye, the spectral response range of the image sensor is very narrow, and the narrow spectral response range will limit the color reproduction capability of the image sensor and affect information such as the color of the restored image.
  • the narrow spectral response range has an even more significant influence on the signal-to-noise ratio in dark environments, which reduces the signal-to-noise ratio of the restored image, so signal-to-noise ratio performance in dark environments is very poor.
  • multispectral means that the spectral bands used for imaging include 2 or more bands. By this definition, since an RGB sensor utilizes the three bands of red, green and blue, strictly speaking it also has a multispectral response. Therefore, the visible-light CMOS sensor with multispectral response referred to in this application actually refers to a multispectral sensor whose spectral response range is wider than that of an RGB sensor.
  • the multispectral sensor may be an RYYB sensor, an RGBW sensor, and the like.
  • the RYYB sensor receives a red channel signal, a yellow channel signal and a blue channel signal.
  • the RGBW sensor receives a red channel signal, a green channel signal, a blue channel signal and a white channel signal.
  • FIG. 2 provides a schematic diagram of an RGBY spectral response curve.
  • the horizontal axis represents the wavelength
  • the vertical axis represents the spectral responsivity corresponding to different spectra.
  • the spectral response curve indicated by Y indicates the different spectral responsivity corresponding to different wavelengths of yellow light
  • the spectral response curve indicated by R indicates the different spectral responsivity of red light corresponding to different wavelengths
  • the spectral response curve indicated by G represents the different spectral responsivity of green light at different wavelengths
  • the spectral response curve indicated by B represents the different spectral responsivity of blue light at different wavelengths.
  • the received yellow channel signal is equivalent to the superposition of the red channel signal and the green channel signal.
  • performance in dark light and the signal-to-noise ratio can be improved; however, as shown in Figure 2, because the half-peak width of the spectral response curve indicated by Y is wider than the half-peak widths of the curves indicated by R and G, some color information is lost when the image is restored, which leads to problems such as color cast or overexposure in specific scenes.
  • the received white channel signal is equivalent to the superposition of all color channel signals, which has better light transmittance and can improve the signal-to-noise ratio in dark light; likewise, because the half-peak width corresponding to white light in the spectral response curve (not shown in Figure 2) is very wide, some color information is also lost during image restoration, which leads to color cast or overexposure in specific scenes.
  • the embodiment of the present application provides an image processing method.
  • the differences in channel information are used to perform dynamic image fusion, thereby achieving maximum restoration of image color and optimal signal-to-noise ratio performance.
  • the image processing method provided in the embodiment of the present application can be applied to the field of photographing.
  • it can be applied to capture images or record videos in a dark environment.
  • FIG. 3 shows a schematic diagram of an application scenario provided by an embodiment of the present application.
  • an electronic device is used as an example for illustration, and the mobile phone includes a multispectral sensor other than an RGB sensor.
  • the electronic device may start the camera application and display a graphical user interface (GUI) as shown in FIG. 3.
  • the GUI interface may be called a preview interface.
  • the preview interface includes various shooting mode options and first controls.
  • the multiple shooting modes include, for example, a photographing mode, a video recording mode, etc.
  • the first control is, for example, a shooting key 11 , which is used to indicate that the current shooting mode is one of the multiple shooting modes.
  • when the user starts the camera application and wants to photograph outdoor grassland and trees at night, the user clicks the shooting key 11 on the preview interface; after the electronic device detects the user's click operation on the shooting key 11, in response to the click operation, it runs the program corresponding to the image processing method to acquire an image.
  • the multispectral sensor included in the electronic device is not an RGB sensor, such as an RYYB sensor
  • the spectral response range of the electronic device is expanded compared with the prior art, that is, both the color reproduction ability and the signal-to-noise ratio performance are improved;
  • however, the color of the captured image may still have a color cast relative to the color in the actual scene, resulting in color distortion of the captured image.
  • after the electronic device applies the image processing method provided in the embodiment of the present application, the color can be corrected, the visual effect of the captured image improved, and the image quality enhanced.
  • FIG. 4 shows a schematic flowchart of an image processing method provided by an embodiment of the present application.
  • the embodiment of the present application provides an image processing method 1, and the image processing method 1 includes the following S11 to S15.
  • the first control is, for example, the shooting key 11 shown in FIG. 3
  • the first operation is, for example, a click operation.
  • the first operation may also be other operations, which are not limited in this embodiment of the present application.
  • the multiple frames of initial images are all Bayer format images, or in other words, all are located in the RAW domain.
  • the channel signals included in the multi-frame initial images are different, which means that the multiple colors corresponding to the pixels arranged in the Bayer format in each frame of the multi-frame initial images are different.
  • multi-frame initial images may be collected by a multi-spectral sensor included in the electronic device itself or obtained from other devices, and may be specifically set according to needs, which is not limited in this embodiment of the present application.
  • the multispectral sensor may simultaneously output multiple frames of initial images containing different channel information, or may serially output multiple frames of initial images containing different channel information, which may be selected and set as required; this embodiment of the present application imposes no limitation on this.
  • when multiple frames of initial images are output from the multispectral sensor, whether simultaneously or serially, the multiple frames of initial images are actually generated by the same shooting of the scene to be shot.
  • FIG. 5 shows a schematic diagram of two acquired initial images.
  • two frames of initial images are acquired; as shown in (a) in FIG. 5, one frame of initial image P1 includes channel signals of 3 colors (such as T1, T2 and T3);
  • as shown in (b) in Figure 5, another frame of initial image P2 includes channel signals of 2 colors (such as T1 and T2), or, as shown in (c) in Figure 5, the initial image P2 can also include channel signals of 3 colors (such as T1, T2 and T4), or, as shown in (d) in Figure 5, the initial image P2 can also include channel signals of 4 colors (such as T1, T2, T3 and T4); of course, the initial image P2 may also include channel signals of more colors, which is not limited in this embodiment of the present application.
  • the channel signals of the three colors included in the initial image P1 are red channel signal (R), green channel signal (G) and blue channel signal (B), respectively, and The three colors are repeated in an arrangement of RGGB.
  • the channel signals of 3 colors included in the initial image P2 may be a red channel signal (R), a yellow channel signal (Y) and a blue channel signal (B), And the 3 colors can be repeated in the arrangement of RYYB.
  • the channel signals of the three colors included in the initial image P2 may be the red channel signal (R), the green channel signal (G) and the cyan channel signal (C), and the three colors may be repeated in the arrangement of RGGC .
  • the channel signals of the three colors included in the initial image P2 may be a red channel signal (R), a yellow channel signal (Y) and a cyan channel signal (C), and the three colors are repeated in an arrangement of RYYC.
  • the channel signals of the three colors included in the initial image P2 may be a red channel signal (R), a white channel signal (W) and a blue channel signal (B), and the three colors are repeated in an arrangement of RWWB .
  • the three color channel signals included in the initial image P2 can be the cyan channel signal (C), the yellow channel signal (Y) and the magenta channel signal (M), and the three colors can be repeated in a CYYM arrangement.
  • the 4 color channel signals included in the initial image P2 can be red channel signal (R), green channel signal (G), blue channel signal (B) and white channel signal (W), and the four colors can be repeated in an RGBW arrangement.
  • the channel signals of the four colors included in the initial image P2 may be a red channel signal (R), a green channel signal (G), a blue channel signal (B) and a near-infrared channel signal (NIR), and the four colors It can be repeated in the arrangement of RGB-NIR.
  • FIG. 6 shows another schematic diagram of acquired 2 frames of initial images.
  • two frames of initial images are obtained, as shown in (a), (b) and (c) in Fig. 6, where one frame of initial image P2 is the same as the P2 shown in (b), (c) and (d) in Fig. 5; the other frame of initial image may include channel signals of 2 colors (such as T1 and T3), channel signals of 4 colors (such as T1, T2, T3 and T5), or channel signals of more colors, which is not limited in this embodiment of the present application.
  • FIG. 7 shows a schematic diagram of three acquired initial images.
  • the initial image P2 also includes channel signals of 3 colors (such as T1, T2 and T4), while, as shown in (c) in Figure 7, the initial image P3 includes channel signals of 4 colors (such as T1, T2, T3 and T5).
  • each frame of initial images includes channel signals of different colors, which is not limited in this embodiment of the present application.
  • the image processing method provided by the embodiment of the present application obtains multiple frames of initial images containing information of different channels, and then fuses the initial images of different channels after processing, so as to achieve maximum restoration of image color and optimal signal-to-noise ratio performance and avoid color cast problems.
  • FIG. 8 shows a schematic flowchart of an image processing method provided by an embodiment of the present application.
  • the embodiment of the present application provides an image processing method 2, and the image processing method 2 includes the following S21 to S26.
  • the first initial image contains 3 channel signals, and each pixel corresponds to a color channel signal, which are red channel signal, green channel signal and blue channel signal, such as T1, T2 and T3.
  • the second initial image also contains 3 channel signals, and each pixel corresponds to a color channel signal.
  • the 3 channel signals are different from the channel signals of the first initial image, namely the cyan channel signal, the magenta channel signal and the yellow channel signal, such as T4, T5 and T6 shown in Figure 8.
  • the subsequent descriptions all take this first initial image and second initial image as an example, and details are not repeated here.
  • both the first initial image and the second initial image are Bayer format images, or in other words, both are images in the RAW domain. Since the channel signals included in the first initial image and the second initial image are different, the color restoration ability in subsequent processing is better.
  • the number of channel signals that can be acquired by the multispectral sensor should be greater than or equal to the sum of the number of channel signals corresponding to the first initial image and the second initial image.
  • the multispectral sensor can acquire channel signals of at least 6 different colors, namely the red channel signal, green channel signal, blue channel signal, cyan channel signal, magenta channel signal and yellow channel signal; thus, a first initial image and a second initial image each containing 3 different color channel signals can be generated.
  • the number of initial images acquired by using the multispectral sensor and the channel signal corresponding to each frame of the initial image can be set and changed as required, and the embodiment of the present application does not impose any restrictions on this.
  • the above is just an example.
  • the multispectral sensor may output the first initial image and the second initial image through one data path, or may output the first initial image and the second initial image respectively through two data paths, which may be set as required; this embodiment of the present application imposes no limitation on this.
  • FIG. 8 is a schematic illustration of two data paths for transmission.
  • both the processed first front-end processed image and the second front-end processed image are located in the RAW domain, that is to say, both the first front-end processed image and the second front-end processed image are Bayer format images.
  • both the first front-end processing and the second front-end processing may include at least one of: dynamic dead pixel compensation (defect pixel correction, DPC), noise reduction, lens shading correction (LSC) and wide dynamic range adjustment (wide range compression, WDR).
  • dynamic dead pixel compensation is used to correct defects in the array formed by the light-collecting points on the multispectral sensor, or errors in the process of converting the optical signal; dead pixels are usually eliminated by taking the average value of the surrounding pixels in the brightness domain.
  • noise reduction is used to reduce noise in an image
  • general methods include mean filtering, Gaussian filtering, bilateral filtering, and the like.
  • Lens shading correction is used to eliminate the inconsistency between the color and brightness of the surrounding image and the center of the image caused by the lens optical system.
  • Wide dynamic range adjustment: when high-brightness areas illuminated by strong light sources (sunlight, lamps, reflections, etc.) and relatively low-brightness areas such as shadows and backlit regions exist in the image at the same time, bright areas become white due to overexposure and dark areas become black due to underexposure, seriously affecting image quality. Therefore, the brightest and darkest areas can be adjusted in the same scene, for example making dark areas brighter and bright areas darker, so that the processed image shows more detail in both dark and bright areas.
  • both the first front-end processing and the second front-end processing may include the above-mentioned one or more processing steps, and when the first front-end processing or the second front-end processing includes multiple processing steps, the order of the multiple processing steps may be as required Adjustments are made, and this embodiment of the present application does not impose any limitation on this.
  • both the first front-end processing and the second front-end processing may further include other steps, which may be added as needed, which is not limited in this embodiment of the present application.
  • FIG. 11 shows a schematic diagram of a first front-end processing or a second front-end processing.
  • the first front-end processing or the second front-end processing includes, in order of processing: dynamic dead point compensation, noise reduction, lens shading correction, and wide dynamic range adjustment.
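  • The ordering above can be sketched as a small pipeline; the 3 × 3 neighborhood-mean DPC, Gaussian noise reduction, per-pixel LSC gain map and gamma-style WDR lift below are illustrative stand-ins assumed for the example, not the algorithms actually used by the embodiment.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def defect_pixel_correction(raw: np.ndarray, thresh: float = 0.5) -> np.ndarray:
    """Replace pixels deviating strongly from their 3x3 neighborhood
    mean with that mean (a simple stand-in for DPC)."""
    h, w = raw.shape
    padded = np.pad(raw, 1, mode="reflect")
    neigh = sum(padded[dy:dy + h, dx:dx + w]
                for dy in range(3) for dx in range(3)) / 9.0
    bad = np.abs(raw - neigh) > thresh * np.maximum(neigh, 1e-6)
    return np.where(bad, neigh, raw)

def front_end(raw: np.ndarray, lsc_gain: np.ndarray) -> np.ndarray:
    """Fig. 11 order: DPC -> noise reduction -> lens shading
    correction -> wide dynamic range adjustment."""
    img = defect_pixel_correction(raw)
    img = gaussian_filter(img, sigma=1.0)       # noise reduction
    img = img * lsc_gain                        # per-pixel LSC gain map
    img = np.clip(img, 0.0, 1.0) ** 0.7         # toy WDR: lift shadows
    return img
```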
  • the first front-end processing and the second front-end processing may further include: automatic white balance (auto white balance, AWB).
  • automatic white balance is used to make white objects appear truly white at any color temperature.
  • first front-end processing and the second front-end processing may be the same or different, and may be specifically set and changed as required, which is not limited in this embodiment of the present application.
  • after the above processing, dead pixels in the first initial image and the second initial image are reduced, noise is suppressed, color is balanced, details in bright and dark areas are clearer, the dynamic range is improved, and the quality of the entire image is effectively improved.
  • using the first image signal processor ISP1, perform first fusion processing on the first front-end processed image and the second front-end processed image to obtain a first fused image in the RAW domain.
  • since both the first front-end processed image and the second front-end processed image are in the RAW domain, and the first fused image is also in the RAW domain, it can be seen that the first fusion processing is actually fusion processing in the RAW domain. Fusion processing in the RAW domain can preserve more details of the image.
  • using the first image signal processor ISP1, perform first back-end processing on the first fused image in the RAW domain to obtain a target image in the YUV domain.
  • the first back-end processing may include: color correction (color correction matrix, CCM) and conversion from RGB domain to YUV domain.
  • an image in the RGB domain refers to a color image in which each pixel includes a red channel signal, a green channel signal and a blue channel signal.
  • conversion from the RGB domain to the YUV domain is used to convert an image in the RGB domain into the YUV domain.
  • the first back-end processing may further include: at least one of gamma correction and style transformation (3-dimensional look-up table, 3DLUT).
  • gamma correction is used to adjust the brightness, contrast, dynamic range, etc. of the image by adjusting the gamma curve
  • style transformation indicates the style transformation of the color, that is, using a color filter to change the original image style into another image style
  • Common styles include movie style, Japanese style, eerie style, etc.
  • the first back-end processing may include one or more of the above-mentioned processing steps; when it includes multiple processing steps, their order may be adjusted as required, and this embodiment of the present application imposes no restrictions on this.
  • the first back-end processing may also include other steps, which may be added according to needs, which is not limited in this embodiment of the present application.
  • Fig. 12 shows a schematic diagram of a first back-end processing.
  • the first back-end processing includes, in processing order: color correction, gamma correction, style transformation, and conversion from RGB domain to YUV domain.
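  • A minimal sketch of this ordering (with the style transformation omitted); the CCM values and the BT.601 RGB-to-YUV matrix below are illustrative assumptions, since the application does not specify the actual coefficients.

```python
import numpy as np

# Illustrative white-preserving 3x3 color correction matrix (rows sum
# to 1); a real CCM is calibrated per sensor and illuminant.
CCM = np.array([[ 1.6, -0.4, -0.2],
                [-0.3,  1.5, -0.2],
                [-0.1, -0.5,  1.6]])

RGB2YUV = np.array([[ 0.299,    0.587,    0.114  ],   # Y (BT.601)
                    [-0.14713, -0.28886,  0.436  ],   # U
                    [ 0.615,   -0.51499, -0.10001]])  # V

def first_back_end(rgb: np.ndarray, gamma: float = 1 / 2.2) -> np.ndarray:
    """Fig. 12 order: color correction -> gamma correction -> RGB-to-YUV."""
    img = np.clip(rgb @ CCM.T, 0.0, 1.0)   # color correction
    img = img ** gamma                      # gamma correction
    return img @ RGB2YUV.T                  # convert to the YUV domain
```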
  • the first fused image is converted from the RAW domain to the YUV domain, which can reduce the amount of data transmitted subsequently and save bandwidth.
  • the target image is in the YUV domain.
  • the target image can be displayed on the interface of the electronic device 100 as a captured image, or can only be stored, which can be set according to needs, which is not limited in this embodiment of the present application.
  • in the same image signal processor, after each initial image undergoes front-end processing, fusion is performed in the RAW domain; then the fused first fused image is subjected to first back-end processing so that it is converted from the RAW domain to the YUV domain, and the target image in the YUV domain after the first back-end processing is output from the image signal processor.
  • the initial images contain different channel signals and undergo a series of processing, and fusion in the RAW domain preserves more details, so that maximum restoration of image color and optimal signal-to-noise ratio performance can be achieved.
  • FIG. 9 shows a schematic flowchart of an image processing method provided by an embodiment of the present application.
  • the embodiment of the present application provides an image processing method 3, and the image processing method 3 includes the following S31 to S37.
  • first initial image and the second initial image are the same as the description in S23 above, and will not be repeated here.
  • the first initial image and the second initial image are only examples provided in this embodiment of the present application.
  • using the first image signal processor (ISP1 as shown in FIG. 9), perform first front-end processing on the first initial image to obtain a first front-end processed image in the RAW domain, and perform second front-end processing on the second initial image to obtain a second front-end processed image in the RAW domain.
  • the description of the first front-end processed image and the second front-end processed image is the same as the description in S24 above, and will not be repeated here.
  • an image in the RGB domain refers to a color image in which each pixel includes a red channel signal, a green channel signal and a blue channel signal.
  • the second fusion image is also located in the RGB domain, therefore, it can be known that the second fusion processing is actually fusion processing in the RGB domain.
  • by converting the RAW domain images containing different channel signals into the same standard RGB domain and then fusing them, it is beneficial to process the images in the same color space and obtain the optimal effect of the RGB color space.
  • the second back-end processing may include: converting RGB domain to YUV domain.
  • conversion from the RGB domain to the YUV domain is used to convert an image in the RGB domain into the YUV domain.
  • the second back-end processing may further include: at least one of gamma correction and style transformation.
  • the second back-end processing may include one or more of the above processing steps; when it includes multiple processing steps, their order may be adjusted as required, and this embodiment of the present application imposes no restrictions on this.
  • the second back-end processing may also include other steps, which may be added according to needs, which is not limited in this embodiment of the present application.
  • Fig. 13 shows a schematic diagram of a second back-end processing.
  • the second back-end processing includes, in processing order: gamma correction, style transformation, and conversion from RGB domain to YUV domain.
  • the second fused image is converted from the RGB domain to the YUV domain, which can reduce the amount of data transmitted subsequently and save bandwidth.
  • the target image is in the YUV domain.
  • the target image can be displayed on the interface of the electronic device 100 as a captured image, or can only be stored, which can be set according to needs, which is not limited in this embodiment of the present application.
  • fusion is performed in the RGB domain; then second back-end processing is performed on the fused second fused image so that it is converted from the RGB domain to the YUV domain, and the target image in the YUV domain after the second back-end processing is output from the image signal processor.
  • the initial images contain different channel signals and undergo a series of processing and color correction, so that the maximum restoration of the image color and the best performance of the signal-to-noise ratio can be achieved.
  • FIG. 10 shows a schematic flowchart of an image processing method provided by an embodiment of the present application.
  • the embodiment of the present application provides an image processing method 4, and the image processing method 4 includes the following S41 to S46.
  • first initial image and the second initial image are the same as the description in S23 above, and will not be repeated here.
  • the first initial image and the second initial image are only examples provided in this embodiment of the present application.
  • using the first image signal processor (ISP1 as shown in FIG. 10), perform first front-end processing on the first initial image to obtain a first front-end processed image in the RAW domain, and perform second front-end processing on the second initial image to obtain a second front-end processed image in the RAW domain.
  • the description of the first front-end processed image and the second front-end processed image is the same as the description in S24 above, and will not be repeated here.
  • both the first intermediate processing and the second intermediate processing may include: color correction and conversion from RGB domain to YUV domain.
  • an image in the RGB domain refers to a color image in which each pixel includes a red channel signal, a green channel signal and a blue channel signal.
  • conversion from the RGB domain to the YUV domain is used to convert an image in the RGB domain into the YUV domain.
  • both the first intermediate processing and the second intermediate processing may further include: at least one of gamma correction and style transformation.
  • first intermediate processing and the second intermediate processing may include one or more processing steps described above, and when the first intermediate processing and the second intermediate processing include multiple processing steps, the order of the multiple processing steps may be performed as required Adjustment, the embodiment of the present application does not impose any limitation on this.
  • both the first intermediate processing and the second intermediate processing may further include other steps, which may be added according to needs, which is not limited in this embodiment of the present application.
  • the first intermediate processing and the second intermediate processing are the same as the first back-end processing, and both include color correction, gamma correction, style transformation, and conversion from RGB domain to YUV domain in the order of processing.
  • first intermediate processing and the second intermediate processing may be the same or different, and may be specifically set and changed as required, which is not limited in this embodiment of the present application.
  • using the first image signal processor ISP1, perform third fusion processing on the first intermediate processed image in the YUV domain and the second intermediate processed image in the YUV domain to obtain a third fused image in the YUV domain; the third fused image is the target image.
  • since the third fused image is also in the YUV domain, it can be seen that the third fusion processing is actually fusion processing in the YUV domain. Fusion processing in the YUV domain involves a smaller data volume, so processing is faster; a sketch follows below.
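  • A minimal sketch of YUV-domain blending, assuming H × W × 3 planar YUV arrays and a per-pixel weight map; weighting luma and chroma identically is an illustrative choice, not a requirement of the method.

```python
import numpy as np

def yuv_fuse(yuv_a: np.ndarray, yuv_b: np.ndarray,
             w: np.ndarray) -> np.ndarray:
    """Blend two YUV images with a per-pixel weight map w in [0, 1]."""
    w = np.clip(w, 0.0, 1.0)[..., None]   # broadcast over Y, U, V planes
    return w * yuv_a + (1.0 - w) * yuv_b
```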
  • the target image, that is, the third fused image, can be displayed on the interface of the electronic device 100 as a captured image, or only stored, which can be set according to needs; this embodiment of the present application imposes no limitation on this.
  • fusion is performed in the YUV domain, and the image signal processor then directly outputs the target image in the YUV domain after the YUV-domain fusion.
  • the channel signals contained in each initial image are different, and a series of processing and color correction have been carried out, so that the maximum restoration of image color and the best performance of signal-to-noise ratio can be achieved.
  • FIG. 14 shows a schematic flowchart of an image processing method provided by an embodiment of the present application.
  • the embodiment of the present application provides an image processing method 5, and the image processing method 5 includes the following S51 to S56.
  • the description of the first initial image and the second initial image is the same as that in S23 above, and will not be repeated here.
  • the first initial image and the second initial image are only examples provided in this embodiment of the present application.
  • the description of the first front-end processed image and the second front-end processed image is the same as the description in S24 above, and will not be repeated here.
  • the first front-end processing and the second front-end processing need to be separately performed in two image signal processors.
  • the first fusion processing is actually fusion processing in the RAW domain.
  • the first fusion processing and the first back-end processing may be performed in the second image signal processor ISP2 or in the third image signal processor ISP3, and of course may also be performed in other image signal processors; this embodiment of the present application does not impose any limitation on this.
  • the target image may be displayed on the interface of the electronic device 100 as a captured image, or may only be stored, and may be specifically set as required, which is not limited in this embodiment of the present application.
  • based on the first initial image and the second initial image containing different channel signals, different image signal processors first perform front-end processing respectively; the front-end processed images output from the different image signal processors are then fused in the RAW domain; then, first back-end processing is performed on the fused first fused image to convert it from the RAW domain to the YUV domain.
  • because the initial images contain different channel signals, each undergoes a series of processing, and more detail is preserved by fusing in the RAW domain, maximum restoration of image color and the best signal-to-noise performance can be achieved.
  • FIG. 15 shows a schematic flowchart of an image processing method provided by an embodiment of the present application.
  • the embodiment of the present application provides an image processing method 6, and the image processing method 6 includes the following S61 to S67.
  • the description of the first initial image and the second initial image is the same as that in S23 above, and will not be repeated here.
  • the first initial image and the second initial image are only examples provided in this embodiment of the present application.
  • the description of the first front-end processed image and the second front-end processed image is the same as the description in S24 above, and will not be repeated here.
  • since the second fused image is in the RGB domain, the second fusion processing is in effect fusion in the RGB domain.
  • converting the RAW images containing different channel signals into the same standard RGB domain for fusion makes it possible to process the images in the same color space and obtain the optimal effect of the RGB color space.
  • the second fusion processing and the second back-end processing may be performed in the second image signal processor ISP2 or in the third image signal processor ISP3, and of course may also be performed in other image signal processors; this embodiment of the present application does not impose any limitation on this.
  • the target image may be displayed on the interface of the electronic device 100 as a captured image, or may only be stored, and may be specifically set as required, which is not limited in this embodiment of the present application.
  • the initial images contain different channel signals and undergo a series of processing and color correction, so that the maximum restoration of the image color and the best performance of the signal-to-noise ratio can be achieved.
  • FIG. 16 shows a schematic flowchart of an image processing method provided by an embodiment of the present application. As shown in FIG. 16 , the embodiment of the present application provides an image processing method 7, and the image processing method 7 includes the following S71 to S76.
  • the description of the first initial image and the second initial image is the same as that in S23 above, and will not be repeated here.
  • the first initial image and the second initial image are only examples provided in this embodiment of the present application.
  • the description of the first front-end processed image and the second front-end processed image is the same as the description in S24 above, and will not be repeated here.
  • since the third fused image is in the YUV domain, the third fusion processing is in effect fusion in the YUV domain; fusing in the YUV domain involves a smaller data volume, so processing is faster.
  • the third fusion processing may be performed in the second image signal processor ISP2 or in the third image signal processor ISP3, and of course may also be performed in other image signal processors; there is no restriction here.
  • the target image, that is, the third fused image, may be displayed on the interface of the electronic device 100 as a captured image, or may only be stored; this may be set as required and is not limited in this embodiment of the present application.
  • based on the first initial image and the second initial image containing different channel signals, each is converted into an intermediate processed image in the YUV domain through front-end processing and intermediate processing; the intermediate processed images output from the different image signal processors are then fused in the YUV domain to obtain the target image in the YUV domain.
  • because the initial images contain different channel signals and each undergoes a series of processing and color correction, maximum restoration of image color and the best signal-to-noise performance can be achieved.
  • the present application also provides a schematic flowchart of another image processing method. As shown in FIG. 17 , the embodiment of the present application provides an image processing method 8 .
  • S53 includes the following S531 and S532:
  • S531. Acquire an image to be preprocessed by using the multispectral sensor.
  • S532. Perform preprocessing on the image to be preprocessed to obtain a first initial image and a second initial image.
  • the preprocessing is used to convert the channel signals contained in the image to be preprocessed.
  • the image to be preprocessed includes multiple channel signals, and the number of the channel signals is greater than or equal to the sum of the channel signals of the first initial image and the second initial image.
  • the description of the first initial image and the second initial image is the same as the description in S23 above, and will not be repeated here.
  • here, the case where the number of channel signals contained in the image to be preprocessed equals the sum of the channel signals of the first initial image and the second initial image is taken as an example.
  • for example, the image to be preprocessed acquired by the multispectral sensor includes channel signals of at least 6 different colors, namely the red channel signal, the green channel signal, the blue channel signal, the cyan channel signal, the magenta channel signal, and the yellow channel signal. It should be understood that the above is only an example.
  • the preprocessing can be: horizontal binning, vertical binning, v2h2 binning, or remosaic.
  • binning refers to adding together the charges induced by adjacent pixels in the Bayer pattern array and outputting them as a single pixel.
  • binning in the horizontal direction refers to adding the charges of adjacent rows together for output;
  • binning in the vertical direction refers to adding the charges of adjacent columns together for output;
  • v2h2 binning refers to performing binning in both the horizontal and vertical directions simultaneously.
  • in this way, pixels distributed in a 2×2 arrangement can be combined into one pixel, so that the length and width of the image are halved and the output resolution is reduced to a quarter of the original.
  • remosaic also refers to combining four pixels into one pixel; unlike v2h2 binning, however, remosaic refers to combining four pixels in the Quadra CFA (Quadra color filter array) format into one pixel.
  • the format of the image to be preprocessed is Quadra CFA; in this format, four adjacent pixels in the image to be preprocessed actually correspond to the channel signal of the same color. On this basis, a Bayer-format image can be restored after remosaic processing.
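  • to make the binning description concrete, the following is a minimal NumPy sketch of v2h2 binning on an RGGB Bayer mosaic; it is an illustration only, assuming summed (rather than averaged) charges and dimensions divisible by 4, and is not the sensor's actual implementation.

```python
import numpy as np

def v2h2_binning(bayer: np.ndarray) -> np.ndarray:
    """v2h2 binning sketch: merge each 2x2 group of same-color pixels.

    In an RGGB mosaic each color plane repeats with stride 2, so binning
    2x2 inside each plane halves the width and height (a quarter of the
    original resolution) while keeping a valid Bayer layout.
    """
    h, w = bayer.shape
    assert h % 4 == 0 and w % 4 == 0, "dimensions must be divisible by 4"
    out = np.empty((h // 2, w // 2), dtype=np.uint32)
    for dy in (0, 1):        # row offset of a color plane in the mosaic
        for dx in (0, 1):    # column offset of a color plane
            plane = bayer[dy::2, dx::2].astype(np.uint32)
            out[dy::2, dx::2] = (plane[0::2, 0::2] + plane[0::2, 1::2] +
                                 plane[1::2, 0::2] + plane[1::2, 1::2])
    return out
```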
  • the preprocessing may also be performed in other manners, which are not limited in this embodiment of the present application.
  • the preprocessing can be performed in the second image signal processor ISP2 or in the third image signal processor ISP3, and of course can also be performed in other image signal processors; this embodiment of the present application does not impose any restriction on this.
  • the first front-end processing of the first initial image may be performed in the second image signal processor ISP2 and the second front-end processing of the second initial image in the third image signal processor ISP3; alternatively, the first front-end processing of the first initial image and the second front-end processing of the second initial image may be performed in one image signal processor, which is not limited in this embodiment of the present application.
  • the multispectral sensor may be used to acquire only one frame of an image to be preprocessed that contains more channel signals; by preprocessing, that is, splitting, this image, a first initial image and a second initial image containing different channel signals are split out; then, based on the first initial image and the second initial image containing different channel signals, front-end processing is first performed in different image signal processors; the front-end processed images output from the different image signal processors are fused in the RAW domain; then, the fused first fused image is subjected to first back-end processing to convert it from the RAW domain to the YUV domain.
  • because the initial images contain different channel signals, each undergoes a series of processing, and more detail is preserved by fusing in the RAW domain, maximum restoration of image color and the best signal-to-noise performance can be achieved.
  • the present application also provides a schematic flowchart of another image processing method. As shown in FIG. 18 , the embodiment of the present application provides an image processing method 9 .
  • S63 may include the following S631 and S632:
  • S631. Acquire an image to be preprocessed by using the multispectral sensor.
  • S632. Perform preprocessing on the image to be preprocessed to obtain a first initial image and a second initial image.
  • the preprocessing is used to convert the channel signals contained in the image to be preprocessed.
  • the image to be preprocessed includes multiple channel signals.
  • the quantity of the channel signals is greater than or equal to the sum of the channel signals of the first initial image and the second initial image.
  • the description of the first initial image and the second initial image is the same as the description in S24 above, and will not be repeated here.
  • here, the case where the number of channel signals contained in the image to be preprocessed equals the sum of the channel signals of the first initial image and the second initial image is taken as an example.
  • for example, the image to be preprocessed acquired by the multispectral sensor includes channel signals of at least 6 different colors, namely the red channel signal, the green channel signal, the blue channel signal, the cyan channel signal, the magenta channel signal, and the yellow channel signal. It should be understood that the above is only an example.
  • the preprocessing can be performed in the second image signal processor ISP2 or in the third image signal processor ISP3, and of course can also be performed in other image signal processors; this embodiment of the present application does not impose any restriction on this.
  • the subsequent first front-end processing and color correction for the first initial image may be performed in the second image signal processor ISP2, and the second front-end processing and color correction for the second initial image in the third image signal processor ISP3; alternatively, the first front-end processing and color correction for the first initial image and the second front-end processing and color correction for the second initial image may be performed in one image signal processor, which is not limited in this embodiment of the present application.
  • the multispectral sensor may be used to acquire only one frame of an image to be preprocessed that contains more channel signals; by preprocessing, that is, splitting, this image, a first initial image and a second initial image containing different channel signals are split out; then, in different image signal processors, front-end processing and color correction are performed first; the corrected images output from the different image signal processors are fused in the RGB domain; then, the fused second fused image is subjected to second back-end processing to convert it from the RGB domain to the YUV domain.
  • because the initial images contain different channel signals and each undergoes a series of processing and color correction, maximum restoration of image color and the best signal-to-noise performance can be achieved.
  • the present application also provides a schematic flowchart of another image processing method. As shown in FIG. 19 , the embodiment of the present application provides an image processing method 10 .
  • S73 may include the following S731 and S732:
  • S731. Acquire an image to be preprocessed by using the multispectral sensor.
  • S732. Perform preprocessing on the image to be preprocessed to obtain a first initial image and a second initial image.
  • the preprocessing is used to convert the channel signals contained in the image to be preprocessed.
  • the image to be preprocessed includes multiple channel signals.
  • the quantity of the channel signals is greater than or equal to the sum of the channel signals of the first initial image and the second initial image.
  • the description of the first initial image and the second initial image is the same as the description in S24 above, and will not be repeated here.
  • here, the case where the number of channel signals contained in the image to be preprocessed equals the sum of the channel signals of the first initial image and the second initial image is taken as an example.
  • for example, the image to be preprocessed acquired by the multispectral sensor includes channel signals of at least 6 different colors, namely the red channel signal, the green channel signal, the blue channel signal, the cyan channel signal, the magenta channel signal, and the yellow channel signal. It should be understood that the above is only an example.
  • the first front-end processing and the first intermediate processing on the first initial image may be performed in the second image signal processor ISP2, and the second front-end processing and the second intermediate processing on the second initial image in the third image signal processor ISP3; alternatively, the first front-end processing and first intermediate processing on the first initial image and the second front-end processing and second intermediate processing on the second initial image may be performed in one image signal processor.
  • the multispectral sensor may be used to acquire only one frame of an image to be preprocessed that contains more channel signals; by preprocessing, that is, splitting, this image, a first initial image and a second initial image containing different channel signals are split out; then, in different image signal processors, they first undergo front-end processing and intermediate processing and are converted into intermediate processed images in the YUV domain; the intermediate processed images output from the different image signal processors are then fused in the YUV domain to obtain the target image in the YUV domain.
  • because the initial images contain different channel signals and each undergoes a series of processing and color correction, maximum restoration of image color and the best signal-to-noise performance can be achieved.
  • FIG. 20 shows a schematic flowchart of another image processing method provided by the embodiment of the present application.
  • the embodiment of the present application provides an image processing method 11, and the image processing method 11 includes the following S111 to S116.
  • S111. Display a preview interface, where the preview interface includes a first control.
  • an effective pixel is a pixel in the multispectral sensor capable of sensing light;
  • the effective pixel area is the area composed of all effective pixels in the multispectral sensor.
  • the description of the first initial image and the second initial image is the same as that in S23 above, and will not be repeated here.
  • the first initial image and the second initial image are only examples provided in this embodiment of the present application.
  • the description of the first front-end processed image and the second front-end processed image is the same as the description in S24 above, and will not be repeated here.
  • the first fusion processing is actually fusion processing in the RAW domain.
  • the first fused image is located in the RAW domain, and after the first fused image is output from the multispectral sensor, it may also be referred to as a third initial image relative to other ISPs.
  • the first back-end processing may be performed in the fourth image signal processor ISP4.
  • performing the first front-end processing, the second front-end processing and the first fusion processing in the multispectral sensor can reduce the amount of subsequent computation. For example, when the first back-end processing is carried out in ISP4, if the preceding first front-end processing, second front-end processing and first fusion processing have all been completed in the multispectral sensor, the computation subsequently performed in ISP4 can be reduced, thereby reducing power consumption.
  • the target image may be displayed on the interface of the electronic device 100 as a captured image, or may only be stored, and may be specifically set as required, which is not limited in this embodiment of the present application.
  • in the multispectral sensor, the first initial image and the second initial image containing different channel signals are determined based on the effective pixel area; still within the multispectral sensor, front-end processing is first performed on each, and the front-end processed images are fused in the RAW domain and output; then, first back-end processing is performed on the fused first fused image to convert it from the RAW domain to the YUV domain. Since a series of processing and fusion is performed on the initial images containing different channel signals inside the multispectral sensor, maximum restoration of image color and the best signal-to-noise performance can be achieved; in addition, the amount of subsequent computation can be reduced, thereby reducing power consumption.
  • each embodiment of the present application may further include the following content:
  • the first front-end processed image obtained by performing the first front-end processing on the first initial image and the second front-end processed image obtained by performing the second front-end processing on the second initial image may be fused using the following formula (1):
  • I_f(i,j) = W_ij × I_c(i,j) + (1 - W_ij) × I_r(i,j)   (1)
  • where (i,j) are the pixel coordinates in the image; I_c(i,j) is the first front-end processed image corresponding to the first initial image; I_r(i,j) is the second front-end processed image corresponding to the second initial image; W_ij is the weight assigned to the first front-end processed image; 1 - W_ij is the weight assigned to the second front-end processed image; and I_f(i,j) is the fused image, that is, the first fused image.
  • in this way, pixels at the same position in the first front-end processed image and the second front-end processed image are fused to obtain all the pixels of the first fused image.
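  • as a hedged illustration of formula (1), the sketch below fuses two frames pixel by pixel with a per-pixel weight map; the frame contents and the uniform weight of 0.5 are assumptions for the example, not values given by this application.

```python
import numpy as np

def fuse(front_c: np.ndarray, front_r: np.ndarray,
         weight: np.ndarray) -> np.ndarray:
    """Pixel-wise fusion per formula (1): I_f = W * I_c + (1 - W) * I_r."""
    assert front_c.shape == front_r.shape == weight.shape
    return weight * front_c + (1.0 - weight) * front_r

# Toy usage: two 4x4 "front-end processed" frames, uniform weight 0.5.
i_c = np.full((4, 4), 100.0)  # stands in for the first processed frame
i_r = np.full((4, 4), 140.0)  # stands in for the second processed frame
w = np.full((4, 4), 0.5)      # W_ij, here the same for every pixel
print(fuse(i_c, i_r, w))      # every fused pixel becomes 120.0
```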
  • the weight W_ij can be further refined; for example, the weight W_ij may include a light intensity weight, a color temperature weight and a scene type weight.
  • the weight W_ij can be determined using the following formula (2):
  • W_ij = Wa_ij × para1 + Wb_ij × para2 + Wc_ij × para3   (2)
  • where Wa_ij is the light intensity weight, Wa_ij = E/E_standard; Wb_ij is the color temperature weight, Wb_ij = T/T_standard; Wc_ij is the scene type weight; and para1, para2 and para3 are preset parameters.
  • the illuminance E in the shooting environment can be derived from the automatic exposure (AE) statistics.
  • the standard illuminance E_standard can be preset as required.
  • the standard illuminance can be burned into the OTP (one time programmable) memory before the electronic device leaves the factory.
  • the color temperature T in the shooting environment may be determined by a color temperature estimation algorithm or may be collected by a multispectral color temperature sensor.
  • the standard color temperature T_standard can be preset as required, for example, it can be set to 5000K.
  • scene type weights corresponding to different scene types may also be preset and changed as required, which is not limited in this embodiment of the present application.
  • when the scene type is portrait, a smaller weight value can be defined for the corresponding scene type, so as to improve hue deviation.
  • the scene type is landscape, you can define a larger weight value for the corresponding scene type to improve the signal-to-noise ratio.
  • the above three weights, namely the light intensity weight, the color temperature weight and the scene type weight, may all act as coefficients of a global operation; in that case, the three weights are the same for every pixel, so the operation is global.
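  • a minimal sketch of formula (2) as a global operation follows; the illuminance, color temperature, standard values, scene weight table and para coefficients are all illustrative assumptions, since the application leaves them to be preset as required.

```python
# Global weight per formula (2): W = Wa*para1 + Wb*para2 + Wc*para3.
# All numeric values below are assumptions for illustration only.
E, E_STANDARD = 300.0, 500.0    # measured / standard illuminance (lux)
T, T_STANDARD = 4500.0, 5000.0  # measured / standard color temperature (K)
SCENE_WEIGHT = {"portrait": 0.3, "landscape": 0.7}  # hypothetical table
PARA1, PARA2, PARA3 = 0.4, 0.3, 0.3                 # hypothetical presets

def global_weight(scene: str) -> float:
    wa = E / E_STANDARD       # light intensity weight Wa_ij
    wb = T / T_STANDARD       # color temperature weight Wb_ij
    wc = SCENE_WEIGHT[scene]  # scene type weight Wc_ij
    return wa * PARA1 + wb * PARA2 + wc * PARA3

print(global_weight("landscape"))  # one W shared by all pixels: a global op
```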
  • the image may also be divided into regions for weight assignment, and the segmentation method may be selected as needed, which is not limited in this embodiment of the present application.
  • a region to be processed can be outlined from the image in the form of a box, circle, ellipse, irregular polygon, etc., and used as a region of interest (ROI); weights of different sizes are then assigned to the region of interest and the non-region of interest.
  • as for the scene type weight, when the scene type is HDR, because the image content varies considerably, different scene type weight values can be set for each pixel or sub-region to achieve fine adjustment.
  • the scene type weight corresponding to HDR can be determined by the following formula (3):
  • Wc_ij = (GA_standard - GA_ij) / GA_standard   (3)
  • where GA_ij is the grayscale value corresponding to the pixel at coordinates (i,j), and GA_standard is a preset standard grayscale value.
  • for fusion in the RAW domain, the two frames to be fused are in the RAW domain, and the grayscale value corresponding to each pixel is its pixel value.
  • for fusion in the YUV domain, the two frames to be fused are in the YUV domain, and the Y value corresponding to each pixel is its grayscale value.
  • for fusion in the RGB domain, the two frames to be fused are in the RGB domain; in this case, the grayscale value corresponding to each pixel can be determined from the pixel values of the three primary colors.
  • thus, the grayscale value corresponding to each pixel can be determined, the corresponding scene type weight obtained with formula (3) above, and the result substituted into formula (2) to determine the weight W_ij for each pixel.
  • Example 2: when the scene type is the HDR mode, the segmentation method can be used to first divide the image into a region of interest and a non-region of interest; then, for each region, the scene type weight corresponding to each pixel can be determined by formula (3) above, the scene type weight corresponding to the region determined by averaging or other methods, and the scene type weight of the entire image then determined by weighting or other methods.
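  • the per-pixel HDR branch of formula (3) can be sketched as follows; the sample grayscale values and GA_standard are assumptions, and the clipping to [0, 1] is an added safeguard that the application does not state.

```python
import numpy as np

def hdr_scene_weight(gray: np.ndarray,
                     ga_standard: float = 200.0) -> np.ndarray:
    """Per-pixel scene type weight per formula (3):
    Wc_ij = (GA_standard - GA_ij) / GA_standard.
    ga_standard is a hypothetical preset standard grayscale value."""
    wc = (ga_standard - gray.astype(np.float64)) / ga_standard
    return np.clip(wc, 0.0, 1.0)  # safeguard; not specified by the source

# Darker pixels receive larger weights, brighter pixels smaller ones.
print(hdr_scene_weight(np.array([[50.0, 150.0], [200.0, 250.0]])))
```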
  • FIG. 21 is a schematic diagram of a display interface of an electronic device provided by an embodiment of the present application.
  • the electronic device 100 displays a shooting interface as shown in (a) in FIG. 21 .
  • the user can perform a sliding operation on the interface, so that the shooting key 11 indicates the shooting option "more".
  • the electronic device 100 displays a shooting interface as shown in (b) in FIG. 21; the interface includes multiple shooting mode options, such as HDR mode, time-lapse photography mode, watermark mode, color reproduction mode, etc. It should be understood that the foregoing shooting mode options are only examples, and may be specifically set and modified as required; they are not limited in this embodiment of the present application.
  • the electronic device 100 can start a program related to the image processing method provided by the embodiment of the present application when shooting.
  • FIG. 22 is a schematic diagram of a display interface of another electronic device provided by an embodiment of the present application.
  • the electronic device 100 displays a shooting interface as shown in (a) in FIG. 22, and the interface includes a "Settings" button; the user can tap the "Settings" button on this interface to enter the setting interface and set related functions.
  • the electronic device 100 displays a setting interface as shown in (b) in FIG. 22.
  • the voice-activated camera option is used to set whether shooting is triggered by sound in the photo mode;
  • the video resolution option is used to adjust the video resolution;
  • the video frame rate option is used to adjust the video frame rate.
  • the electronic device 100 can start the program related to the image processing method provided by the embodiment of the present application when shooting.
  • FIG. 23 is a schematic diagram of a color restoration error provided by the embodiments of the present application.
  • the horizontal axis represents different color temperature light sources, and the vertical axis represents the color reproduction error (Delta_E).
  • for example, the color restoration error value of the imaging corresponding to the first initial image alone (such as mean-A) is 7.5, the imaging corresponding to the second initial image alone (such as mean-B) likewise shows a larger error, while the color restoration error value of the acquired target image (such as mean-ABfusion) is 3.
  • therefore, the color restoration error of the image acquired by this application is minimal, and the color reproduction effect is the best.
  • FIG. 24 is a schematic diagram of a signal-to-noise ratio under a color temperature light source D65 provided by an embodiment of the present application.
  • the horizontal axis represents the illuminance (lux), and the vertical axis represents the signal-to-noise ratio (SNR).
  • each set of data in the histogram represents the signal-to-noise ratios of the images under a given illuminance, including the signal-to-noise ratio corresponding to the first initial image (such as A), the signal-to-noise ratio corresponding to the second initial image (such as B), and the signal-to-noise ratio corresponding to the fused target image (such as AB Fusion). It can be seen from the figure that, compared with a single initial image, the fused target image obtained by the image processing method provided by the embodiment of the present application has a relatively higher signal-to-noise ratio, indicating relatively better signal-to-noise performance.
  • for example, under a given illuminance, the signal-to-noise ratios corresponding to the first initial image and the second initial image are both approximately 38, while the signal-to-noise ratio of the fused target image is approximately 45; the signal-to-noise ratio is thus improved.
  • the embodiment of the application provides an image processing method that can effectively improve the signal-to-noise ratio performance of an image.
  • FIG. 25 shows a hardware system applicable to the electronic device of this application.
  • the electronic device 100 may be used to implement the image processing methods described in the foregoing method embodiments.
  • the electronic device 100 may be a mobile phone, a smart screen, a tablet computer, a wearable electronic device, a vehicle-mounted electronic device, an augmented reality (AR) device, a virtual reality (VR) device, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a personal digital assistant (PDA), a projector, etc.
  • the electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone jack 170D, a sensor module 180, a button 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a subscriber identification module (SIM) card interface 195, and the like.
  • the sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, bone conduction sensor 180M, etc.
  • the structure shown in FIG. 25 does not constitute a specific limitation on the electronic device 100 .
  • the electronic device 100 may include more or fewer components than those shown in FIG. 25 , or the electronic device 100 may include a combination of some of the components shown in FIG. 25 , or , the electronic device 100 may include subcomponents of some of the components shown in FIG. 25 .
  • the components shown in FIG. 25 can be realized in hardware, software, or a combination of software and hardware.
  • Processor 110 may include one or more processing units.
  • the processor 110 may include at least one of the following processing units: an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a video codec, a digital signal processor (DSP), a baseband processor, and a neural-network processing unit (NPU).
  • the controller may be the nerve center and command center of the electronic device 100 .
  • the controller can generate an operation control signal according to the instruction opcode and timing signal, and complete the control of fetching and executing the instruction.
  • a memory may also be provided in the processor 110 for storing instructions and data.
  • the memory in processor 110 is a cache memory.
  • the memory may hold instructions or data that the processor 110 has just used or used cyclically. If the processor 110 needs to use the instructions or data again, it can call them directly from the memory; repeated access is avoided and the waiting time of the processor 110 is reduced, thus improving system efficiency.
  • the processor 110 may perform the following: displaying a preview interface, where the preview interface includes a first control; detecting a first operation on the first control; acquiring multiple frames of initial images in response to the first operation, where the channel signals contained in the multiple frames of initial images are different; processing each frame of the multiple frames of initial images separately to obtain corresponding processed images; and fusing the multiple frames of processed images to obtain a target image.
  • connection relationship between the modules shown in FIG. 25 is only a schematic illustration, and does not constitute a limitation on the connection relationship between the modules of the electronic device 100 .
  • each module of the electronic device 100 may also adopt a combination of various connection modes in the foregoing embodiments.
  • the wireless communication function of the electronic device 100 may be realized by components such as the antenna 1 , the antenna 2 , the mobile communication module 150 , the wireless communication module 160 , a modem processor, and a baseband processor.
  • Antenna 1 and Antenna 2 are used to transmit and receive electromagnetic wave signals.
  • Each antenna in electronic device 100 may be used to cover single or multiple communication frequency bands. Different antennas can also be multiplexed to improve the utilization of the antennas.
  • Antenna 1 can be multiplexed as a diversity antenna of a wireless local area network.
  • the antenna may be used in conjunction with a tuning switch.
  • the electronic device 100 can realize the display function through the GPU, the display screen 194 and the application processor.
  • the GPU is a microprocessor for image processing, and is connected to the display screen 194 and the application processor. GPUs are used to perform mathematical and geometric calculations for graphics rendering.
  • Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
  • Display 194 may be used to display images or video.
  • the electronic device 100 can realize the shooting function through the ISP, the camera 193 , the video codec, the GPU, the display screen 194 , and the application processor.
  • the ISP is used for processing the data fed back by the camera 193 .
  • during photographing, light is transmitted through the lens to the photosensitive element of the camera, the optical signal is converted into an electrical signal, and the photosensitive element transmits the electrical signal to the ISP for processing, converting it into an image visible to the naked eye.
  • the ISP can perform algorithm optimization on image noise, brightness and color, and can also optimize parameters such as the exposure and color temperature of the shooting scene.
  • the ISP may be located in the camera 193 .
  • Camera 193 is used to capture still images or video.
  • the object generates an optical image through the lens and projects it to the photosensitive element.
  • the photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
  • the photosensitive element converts the light signal into an electrical signal, and then transmits the electrical signal to the ISP to convert it into a digital image signal.
  • the ISP outputs the digital image signal to the DSP for processing.
  • the DSP converts the digital image signal into an image signal in a standard format such as RGB or YUV.
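  • as a hedged illustration of the RGB-to-YUV conversion mentioned here, the sketch below applies the standard BT.601 full-range transform; which conversion standard the DSP actually uses is not specified in this application.

```python
import numpy as np

# BT.601 full-range RGB -> YUV matrix (an assumption; the application does
# not say which standard the DSP applies).
M = np.array([[ 0.299,     0.587,     0.114   ],
              [-0.168736, -0.331264,  0.5     ],
              [ 0.5,      -0.418688, -0.081312]])

def rgb_to_yuv(rgb: np.ndarray) -> np.ndarray:
    """Convert an H x W x 3 RGB image with values in [0, 1] to YUV."""
    yuv = rgb @ M.T
    yuv[..., 1:] += 0.5  # center the chroma channels around 0.5
    return yuv

print(rgb_to_yuv(np.ones((1, 1, 3))))  # pure white -> Y=1, U=V=0.5
```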
  • the electronic device 100 may include 1 or N cameras 193 , where N is a positive integer greater than 1.
  • Digital signal processors are used to process digital signals. In addition to digital image signals, they can also process other digital signals. For example, when the electronic device 100 selects a frequency point, the digital signal processor is used to perform Fourier transform on the energy of the frequency point.
  • Video codecs are used to compress or decompress digital video.
  • the electronic device 100 may support one or more video codecs.
  • the electronic device 100 can play or record videos in various encoding formats, for example: moving picture experts group (moving picture experts group, MPEG) 1, MPEG2, MPEG3 and MPEG4.
  • the hardware system of the electronic device 100 is described in detail above, and the software system of the electronic device 100 is introduced below.
  • Fig. 26 is a schematic diagram of a software system of an electronic device provided by an embodiment of the present application.
  • the system architecture may include an application layer 210 , an application framework layer 220 , a hardware abstraction layer 230 , a driver layer 240 and a hardware layer 250 .
  • the application layer 210 may include a camera application and other applications; the other applications include, but are not limited to, applications such as gallery.
  • the application framework layer 220 can provide an application programming interface (application programming interface, API) and a programming framework to the application program of the application layer; the application framework layer can include some predefined functions.
  • the application framework layer 220 may include a camera access interface; the camera access interface may include camera management and a camera device, where camera management may be used to provide an access interface for managing the camera, and the camera device may be used to provide an interface for accessing the camera.
  • the hardware abstraction layer 230 is used to abstract hardware.
  • the hardware abstraction layer can include the camera abstraction layer and other hardware device abstraction layers; the camera hardware abstraction layer can call the camera algorithm in the camera algorithm library.
  • the hardware abstraction layer 230 includes a camera hardware abstraction layer 2301 and a camera algorithm library.
  • the camera algorithm library may include software algorithms; for example, algorithm 1, algorithm 2, etc. may be software algorithms for image processing.
  • the driver layer 240 is used to provide drivers for different hardware devices.
  • the driver layer may include camera device drivers, digital signal processor drivers, and graphics processor drivers.
  • the hardware layer 250 may include multiple image sensors (sensors), multiple image signal processors, digital signal processors, graphics processors, and other hardware devices.
  • the hardware layer 250 includes a sensor and an image signal processor; the sensor may include a sensor 1, a sensor 2, a depth sensor (time of flight, TOF), a multispectral sensor, and the like.
  • the image signal processor may include an image signal processor 1, an image signal processor 2, and the like.
  • the hardware abstraction layer 230 connects the application layer 210 and the application framework layer 220 above it with the driver layer 240 and the hardware layer 250 below it.
  • in the camera hardware interface layer of the hardware abstraction layer 230, manufacturers can customize functions according to their requirements. Compared with the hardware abstraction layer interface, the camera hardware interface layer is more efficient, more flexible and lower-latency, and can invoke the ISP and GPU more richly to implement image processing.
  • the image input into the hardware abstraction layer 230 may be from an image sensor, or may be from a stored picture.
  • the scheduling layer in the hardware abstraction layer 230 includes a general functional interface for implementing management and control.
  • the camera service layer in the hardware abstraction layer 230 is used for accessing ISP and other hardware interfaces.
  • the workflow of the software and hardware of the electronic device 100 will be exemplarily described below in conjunction with capturing and photographing scenes.
  • the camera application in the application layer can be displayed on the screen of the electronic device 100 in the form of an icon.
  • the electronic device 100 starts to run the camera application.
  • when the camera application runs on the electronic device 100, it calls the interface corresponding to the camera application in the application framework layer 220, then starts the camera driver by calling the hardware abstraction layer 230, and starts the multispectral sensor of the electronic device 100.
  • the camera 193 collects multi-frame initial images with different channels through a multi-spectral sensor.
  • the multispectral sensor can collect images at a certain working frequency; a collected image can be processed inside the multispectral sensor or transmitted to one or more image signal processors, and the processed target image can then be saved and/or transmitted to the display screen for display.
  • FIG. 27 is a schematic diagram of an image processing apparatus 300 provided by an embodiment of the present application.
  • the image processing device 300 includes a display unit 310 , an acquisition unit 320 and a processing unit 330 .
  • the display unit 310 is configured to display a preview interface, and the preview interface includes a first control.
  • the acquiring unit 320 is configured to detect a first operation on the first control.
  • the processing unit 330 is configured to acquire multiple frames of initial images in response to the first operation, and the multiple frames of initial images contain different channel signals.
  • the processing unit 330 is further configured to separately process each frame of the initial image in the multi-frame initial image to obtain a corresponding processed image; to fuse the multi-frame processed images to obtain the target image.
  • image processing apparatus 300 described above is embodied in the form of functional units.
  • unit here may be implemented in the form of software and/or hardware, which is not specifically limited.
  • a "unit” may be a software program, a hardware circuit or a combination of both to realize the above functions.
  • the hardware circuitry may include application-specific integrated circuits (ASICs), electronic circuits, processors for executing one or more software or firmware programs (such as shared processors, dedicated processors, or group processors), memory, merged logic circuits, and/or other suitable components that support the described functionality.
  • the units of each example described in the embodiments of the present application can be realized by electronic hardware, or a combination of computer software and electronic hardware. Whether these functions are executed by hardware or software depends on the specific application and design constraints of the technical solution. Those skilled in the art may use different methods to implement the described functions for each specific application, but such implementation should not be regarded as exceeding the scope of the present application.
  • the embodiment of the present application also provides a computer-readable storage medium in which computer instructions are stored; when the computer instructions run on the image processing apparatus 300, the image processing apparatus 300 is caused to execute the image processing method shown above.
  • the computer instructions may be stored in or transmitted from one computer-readable storage medium to another computer-readable storage medium, for example, the computer instructions may be transmitted from a website, computer, server, or data center Transmission to another website site, computer, server or data center by wired (such as coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (such as infrared, wireless, microwave, etc.).
  • the computer-readable storage medium may be any available medium accessible to a computer, or a data storage device, such as a server or a data center, that integrates one or more available media.
  • the available medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium, or a semiconductor medium (for example, a solid state disk (solid state disk, SSD)) and the like.
  • the embodiment of the present application also provides a computer program product including computer instructions, which, when run on the image processing apparatus 300 , enables the image processing apparatus 300 to execute the aforementioned image processing method.
  • FIG. 28 is a schematic structural diagram of a chip provided by an embodiment of the present application.
  • the chip shown in FIG. 28 can be a general-purpose processor or a special-purpose processor.
  • the chip includes a processor 401 .
  • the processor 401 is used to support the image processing apparatus 300 to execute the technical solution shown above.
  • the chip further includes a transceiver 402; the transceiver 402 operates under the control of the processor 401 and is used to support the image processing apparatus 300 in executing the aforementioned technical solution.
  • the chip shown in FIG. 28 may further include: a storage medium 403 .
  • the chip shown in FIG. 28 can be implemented using the following circuits or devices: one or more field programmable gate arrays (FPGAs), programmable logic devices (PLDs), controllers, state machines, gate logic, discrete hardware components, any other suitable circuitry, or any combination of circuitry capable of performing the various functions described throughout this application.
  • the electronic device, image processing apparatus 300, computer storage medium, computer program product and chip provided by the above embodiments of the present application are all used to execute the method provided above; therefore, for the beneficial effects they can achieve, reference may be made to the beneficial effects corresponding to the method provided above, and details are not repeated here.
  • sequence numbers of the above processes do not mean the order of execution, and the execution order of each process should be determined by its functions and internal logic, and should not constitute any limitation on the implementation process of the embodiment of the present application.
  • presetting and predefining may be implemented by pre-storing, in a device (for example, an electronic device), corresponding code, tables, or other means usable to indicate related information; the present application does not limit the specific implementation.

Abstract

The present application provides an image processing method and a related device, and relates to the field of image processing. The image processing method includes: displaying a preview interface, where the preview interface includes a first control; detecting a first operation on the first control; in response to the first operation, acquiring multiple frames of initial images, where the channel signals contained in the multiple frames of initial images are different; processing each frame of the multiple frames of initial images separately to obtain corresponding processed images; and fusing the multiple frames of processed images to obtain a target image. By acquiring multiple frames of initial images containing different channel information and using the differences in channel information to complete dynamic image fusion, the present application achieves maximum restoration of image color and the best signal-to-noise performance.

Description

Image processing method and related device
This application claims priority to Chinese Patent Application No. 202111063195.0, filed with the China National Intellectual Property Administration on September 10, 2021 and entitled "Image Processing Method and Related Device", and to Chinese Patent Application No. 202210108251.6, filed with the China National Intellectual Property Administration on January 28, 2022 and entitled "Image Processing Method and Related Device", both of which are incorporated herein by reference in their entireties.
Technical Field
The present application relates to the field of image processing, and in particular to an image processing method and a related device.
Background
Most complementary metal oxide semiconductor (CMOS) image sensors currently used for visible-light imaging are conventional RGB (red, green, blue) sensors; that is, such an image sensor can only receive a red channel signal, a green channel signal and a blue channel signal.
Because its relatively narrow spectral response range constrains the upper limit of color restoration in imaging and leads to poor signal-to-noise performance in dim light, some visible-light CMOS image sensors with multispectral response have appeared on the market in the hope of solving the problems of color restoration and dim-light signal-to-noise ratio. However, there is currently no mature processing solution that makes good use of such sensors and achieves accurate color restoration. A new processing solution is therefore urgently needed.
Summary
The present application provides an image processing method and a related device. Multiple frames of initial images containing different channel information are acquired, and the differences in channel information are used to complete dynamic image fusion, thereby achieving maximum restoration of image color and the best signal-to-noise performance.
To achieve the above objective, the present application adopts the following technical solutions:
According to a first aspect, an image processing method is provided, applied to an electronic device including a multispectral sensor. The method includes: displaying a preview interface, where the preview interface includes a first control; detecting a first operation on the first control; in response to the first operation, acquiring multiple frames of initial images, where the channel signals contained in the multiple frames of initial images are different; processing each frame of the multiple frames of initial images separately to obtain corresponding processed images; and fusing the multiple frames of processed images to obtain a target image.
In the embodiments of the present application, the multispectral sensor refers to a multispectral sensor whose spectral response range is wider than that of an RGB sensor.
The embodiment of the present application provides an image processing method in which multiple frames of initial images containing different channel information are acquired, and the initial images of the different channels are processed and then fused, so that maximum restoration of image color and the best signal-to-noise performance can be achieved and color cast problems can be avoided.
In a possible implementation of the first aspect, processing each frame of the multiple frames of initial images separately includes: performing front-end processing on each frame of the multiple frames of initial images separately to obtain front-end processed images in the RAW domain. Fusing the multiple frames of processed images to obtain the target image includes: performing RAW-domain fusion processing on the front-end processed images to obtain a fused image in the RAW domain; and performing first back-end processing on the fused image in the RAW domain to obtain the target image in the YUV domain.
In this implementation, based on the multiple frames of initial images containing different channel signals, each image first undergoes front-end processing, fusion in the RAW domain is then performed, and the fused image is subjected to the first back-end processing to convert it from the RAW domain to the YUV domain and obtain the target image. Because each initial image undergoes a series of processing before the RAW-domain fusion, and fusing in the RAW domain preserves more detail, maximum restoration of image color and the best signal-to-noise performance can be achieved.
In a possible implementation of the first aspect, processing each frame of the multiple frames of initial images separately includes: performing front-end processing and color correction on each frame of the multiple frames of initial images separately to obtain corrected images in the RGB domain, where the color correction is used to convert an image from the RAW domain to the RGB domain. Fusing the multiple frames of processed images to obtain the target image includes: performing RGB-domain fusion processing on the corrected images to obtain a fused image in the RGB domain; and performing second back-end processing on the fused image in the RGB domain to obtain the target image in the YUV domain.
In this implementation, based on the multiple frames of initial images containing different channel signals, each image first undergoes front-end processing and color correction, fusion in the RGB domain is then performed, and the fused image is subjected to the second back-end processing to convert it from the RGB domain to the YUV domain and obtain the target image. Because each initial image undergoes a series of processing and color correction before the RGB-domain fusion, maximum restoration of image color and the best signal-to-noise performance can be achieved.
In a possible implementation of the first aspect, processing each frame of the multiple frames of initial images separately includes: performing front-end processing and first back-end processing on each frame of the multiple frames of initial images separately to obtain intermediate processed images in the YUV domain, where the intermediate processing is used to convert an image from the RGB domain to the YUV domain. Fusing the multiple frames of processed images to obtain the target image includes: performing YUV-domain fusion processing on the intermediate processed images to obtain a fused image in the YUV domain, where the fused image is the target image.
In this embodiment, the first back-end processing may also be referred to as first intermediate processing or second intermediate processing.
In this implementation, based on the multiple frames of initial images containing different channel signals, each image first undergoes front-end processing and the first back-end processing, and fusion in the YUV domain is then performed to obtain the target image. Because each initial image undergoes a series of processing and color correction before the YUV-domain fusion, maximum restoration of image color and the best signal-to-noise performance can be achieved.
In a possible implementation of the first aspect, the method further includes: in a same image signal processor, processing each frame of the multiple frames of initial images separately to obtain corresponding processed images, and fusing the multiple frames of processed images to obtain the target image.
In this implementation, performing the processing in one image signal processor can reduce cost.
In a possible implementation of the first aspect, the method further includes: in different image signal processors, processing each frame of the multiple frames of initial images separately to obtain corresponding processed images.
In this implementation, so that the subsequent fusion processing in the image processing flow can obtain a better color restoration effect, the first front-end processing and the second front-end processing need to be performed separately in two image signal processors.
In a possible implementation of the first aspect, the method further includes: acquiring an image to be preprocessed by using the multispectral sensor; and preprocessing the image to be preprocessed to obtain the multiple frames of initial images, where the preprocessing is used to convert the channel signals contained in the image to be preprocessed.
It should be understood that the image to be preprocessed contains multiple channel signals.
Optionally, the preprocessing may be horizontal binning, vertical binning, v2h2 binning, or remosaic.
In this implementation, the multispectral sensor may be used to acquire only one frame of an image to be preprocessed that contains a relatively large number of channel signals; by preprocessing, that is, splitting, this image, multiple frames of initial images containing different channel signals are split out; processing and fusion are then performed based on the multiple frames of initial images containing different channel signals to obtain the target image.
In a possible implementation of the first aspect, multiple image signals are acquired by using the multispectral sensor; in the multispectral sensor, the multiple frames of initial images are determined according to the multiple image signals, and front-end processing is performed on each frame of the multiple frames of initial images separately to obtain the front-end processed images in the RAW domain; and, in the multispectral sensor, RAW-domain fusion processing is performed on the front-end processed images to obtain the fused image in the RAW domain.
The multiple image signals may be acquired by using the effective pixel area in the multispectral sensor.
In this implementation, a sensor generally contains one or several rows of pixels that do not participate in light sensing; to avoid affecting the subsequent color restoration effect, these pixels can be excluded, and only the effective pixels in the light-sensitive effective pixel area of the multispectral sensor are used to acquire the image signals, thereby improving the color restoration effect.
In a possible implementation of the first aspect, the method further includes: when the multiple frames of initial images include a first initial image and a second initial image with different channel signals, performing fusion using the following formula:
I_f(i,j)=W_ij×I_c(i,j)+(1-W_ij)×I_r(i,j)
where (i,j) are pixel coordinates; I_c(i,j) is the processed image corresponding to the first initial image, I_r(i,j) is the processed image corresponding to the second initial image, W_ij is the weight assigned to the processed image corresponding to the first initial image, 1-W_ij is the weight assigned to the processed image corresponding to the second initial image, and I_f(i,j) is the fused image.
In this implementation, assigning different weights to the different processed images achieves a better fusion effect.
In a possible implementation of the first aspect, the method further includes: determining W_ij using the formula W_ij=Wa_ij×para1+Wb_ij×para2+Wc_ij×para3,
where Wa_ij is the light intensity weight, Wa_ij=E/E_standard, E is the illuminance in the shooting environment, and E_standard is a preset standard illuminance; Wb_ij is the color temperature weight, Wb_ij=T/T_standard, T is the color temperature in the shooting environment, and T_standard is a preset standard color temperature; Wc_ij is the scene type weight, different scene types correspond to scene type weights of different sizes, and the scene types include at least one of portrait and landscape; para1, para2 and para3 are preset parameters.
In this implementation, refining and classifying the fusion weights improves the fusion effect comprehensively and in multiple respects.
In a possible implementation of the first aspect, the method further includes: when the scene type is HDR, Wc_ij=(GA_standard-GA_ij)/GA_standard,
where GA_ij is the grayscale value corresponding to the pixel at coordinates (i,j), and GA_standard is a preset standard grayscale value.
In this implementation, because the image content in HDR scene types differs considerably, a different scene type weight value can be set for each pixel or each sub-region to achieve fine adjustment.
In a possible implementation of the first aspect, the front-end processing includes at least one of dynamic dead pixel compensation, noise reduction, lens shading correction and wide dynamic range adjustment.
In a possible implementation of the first aspect, the first back-end processing includes color correction and conversion from the RGB domain to the YUV domain.
In a possible implementation of the first aspect, the second back-end processing includes conversion from the RGB domain to the YUV domain.
In a possible implementation of the first aspect, both the first back-end processing and the second back-end processing further include at least one of gamma correction and style transformation.
According to a second aspect, an electronic device is provided, including modules/units for performing the method of the first aspect or any implementation of the first aspect.
According to a third aspect, an electronic device is provided, including a multispectral sensor, a processor and a memory, where the multispectral sensor is configured to acquire multiple frames of initial images, the channel signals contained in the multiple frames of initial images being different; the memory is configured to store a computer program runnable on the processor; and the processor is configured to perform the processing steps in the method of the first aspect or any implementation of the first aspect.
In a possible implementation of the third aspect, the processor includes one image signal processor, and the image signal processor is configured to process each frame of the multiple frames of initial images separately to obtain corresponding processed images, and to fuse the multiple frames of processed images to obtain the target image.
In a possible implementation of the third aspect, the processor includes multiple image signal processors, and different image signal processors among the multiple image signal processors are configured to process different initial images of the multiple frames of initial images to obtain corresponding processed images.
In a possible implementation of the third aspect, the multispectral sensor is further configured to acquire an image to be preprocessed, and to preprocess the image to be preprocessed to obtain the multiple frames of initial images, where the preprocessing is used to convert the channel signals contained in the image to be preprocessed.
In a possible implementation of the third aspect, the multispectral sensor is further configured to acquire multiple image signals, determine the multiple frames of initial images according to the multiple image signals, and perform front-end processing on each frame of the multiple frames of initial images separately to obtain the front-end processed images in the RAW domain; the multispectral sensor is further configured to perform RAW-domain fusion processing on the front-end processed images to obtain the fused image in the RAW domain.
According to a fourth aspect, a chip is provided, including a processor configured to call and run a computer program from a memory, so that a device on which the chip is installed performs the processing steps in the method of the first aspect or any implementation of the first aspect.
According to a fifth aspect, a computer-readable storage medium is provided, storing a computer program that includes program instructions which, when executed by a processor, cause the processor to perform the processing steps in the method of the first aspect or any implementation of the first aspect.
According to a sixth aspect, a computer program product is provided, including computer program code which, when run by an electronic device, causes the electronic device to perform the processing steps in the method of the first aspect or any implementation of the first aspect.
In the embodiments of the present application, multiple frames of initial images containing different channel information are acquired, and the initial images of the different channels are processed and then fused, so that maximum restoration of image color and the best signal-to-noise performance can be achieved and color cast problems can be avoided.
Brief Description of the Drawings
FIG. 1 is a schematic diagram of imaging of a conventional RGB CMOS sensor;
FIG. 2 is an RGBY spectral response curve;
FIG. 3 is a schematic diagram of an application scenario;
FIG. 4 is a schematic flowchart of an image processing method provided by an embodiment of the present application;
FIG. 5 is a schematic diagram of 2 acquired frames of initial images provided by an embodiment of the present application;
FIG. 6 is a schematic diagram of another 2 acquired frames of initial images provided by an embodiment of the present application;
FIG. 7 is a schematic diagram of 3 acquired frames of initial images provided by an embodiment of the present application;
FIG. 8 is a schematic flowchart of an image processing method provided by an embodiment of the present application;
FIG. 9 is a schematic flowchart of another image processing method provided by an embodiment of the present application;
FIG. 10 is a schematic flowchart of yet another image processing method provided by an embodiment of the present application;
FIG. 11 is a schematic diagram of first front-end processing or second front-end processing provided by an embodiment of the present application;
FIG. 12 is a schematic diagram of first back-end processing provided by an embodiment of the present application;
FIG. 13 is a schematic diagram of second back-end processing provided by an embodiment of the present application;
FIG. 14 is a schematic flowchart of yet another image processing method provided by an embodiment of the present application;
FIG. 15 is a schematic flowchart of yet another image processing method provided by an embodiment of the present application;
FIG. 16 is a schematic flowchart of yet another image processing method provided by an embodiment of the present application;
FIG. 17 is a schematic flowchart of yet another image processing method provided by an embodiment of the present application;
FIG. 18 is a schematic flowchart of yet another image processing method provided by an embodiment of the present application;
FIG. 19 is a schematic flowchart of yet another image processing method provided by an embodiment of the present application;
FIG. 20 is a schematic flowchart of yet another image processing method provided by an embodiment of the present application;
FIG. 21 is a schematic diagram of a display interface of an electronic device provided by an embodiment of the present application;
FIG. 22 is a schematic diagram of a display interface of another electronic device provided by an embodiment of the present application;
FIG. 23 is a schematic diagram of a color restoration error provided by an embodiment of the present application;
FIG. 24 is a schematic diagram of the signal-to-noise ratio under a color temperature light source D65 provided by an embodiment of the present application;
FIG. 25 is a schematic diagram of a hardware system of an electronic device applicable to the present application;
FIG. 26 is a schematic diagram of a software system of an electronic device applicable to the present application;
FIG. 27 is a schematic structural diagram of an image processing apparatus provided by an embodiment of the present application;
FIG. 28 is a schematic structural diagram of a chip provided by an embodiment of the present application.
Detailed Description
The technical solutions in the present application are described below with reference to the accompanying drawings.
In the description of the embodiments of the present application, unless otherwise specified, "/" means "or"; for example, A/B may mean A or B. "And/or" herein merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A alone, both A and B, or B alone. In addition, in the description of the embodiments of the present application, "multiple" means two or more than two.
The terms "first" and "second" below are used for descriptive purposes only and shall not be understood as indicating or implying relative importance or implicitly indicating the number of the technical features indicated. Therefore, a feature defined with "first" or "second" may explicitly or implicitly include one or more such features. In the description of the embodiments, unless otherwise specified, "multiple" means two or more.
Some terms used in the embodiments of the present application are explained first to facilitate understanding by those skilled in the art.
1. RGB (red, green, blue) color space, or RGB domain, refers to a color model related to the structure of the human visual system. Based on the structure of the human eye, all colors are regarded as different combinations of red, green and blue.
2. YUV color space, or YUV domain, refers to a color encoding method, where Y represents luminance and U and V represent chrominance. The RGB color space focuses on the eye's perception of color, while the YUV color space focuses on the sensitivity of vision to brightness; the RGB color space and the YUV color space can be converted into each other.
3. Pixel value refers to a group of color components corresponding to each pixel in a color image in the RGB color space. For example, each pixel corresponds to a group of three primary color components, namely a red component R, a green component G and a blue component B.
4. Bayer pattern color filter array (CFA): when an actual scene is converted into image data, the image sensor usually receives the information of the red channel signal, the green channel signal and the blue channel signal separately and then combines the information of the three channel signals into a color image. In such a scheme, however, three filters would be needed at each pixel position, which is expensive and difficult to manufacture. Therefore, as shown in FIG. 1, a color filter array can be laid over the surface of the image sensor to obtain the information of the three channel signals. A Bayer pattern color filter array means that the filters are arranged in a checkerboard pattern. For example, the minimal repeating unit of the Bayer pattern color filter array is: one filter obtaining the red channel signal, two filters obtaining the green channel signal and one filter obtaining the blue channel signal, arranged in a 2×2 layout.
5. Bayer image: an image output by an image sensor based on a Bayer pattern color filter array. Pixels of multiple colors in this image are arranged in the Bayer pattern, and each pixel in a Bayer image corresponds to the channel signal of only one color. For example, since human vision is more sensitive to green, it may be set that green pixels (pixels corresponding to the green channel signal) account for 50% of all pixels, while blue pixels (pixels corresponding to the blue channel signal) and red pixels (pixels corresponding to the red channel signal) each account for 25% of all pixels. The minimal repeating unit of a Bayer image is: one red pixel, two green pixels and one blue pixel arranged in a 2×2 layout. It should be understood that the RAW domain is the RAW color space, and an image in Bayer format may be called an image in the RAW domain.
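As a brief illustration of the Bayer arrangement just described, the following sketch samples an RGB image into a single-channel RGGB mosaic; the RGGB phase and the input layout are assumptions for the example, not a sensor implementation.

```python
import numpy as np

def make_rggb_mosaic(rgb: np.ndarray) -> np.ndarray:
    """Sample an H x W x 3 RGB image into a single-channel Bayer mosaic
    whose minimal repeating unit is the 2x2 block R G / G B."""
    h, w, _ = rgb.shape
    mosaic = np.empty((h, w), dtype=rgb.dtype)
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]  # R at even row, even column
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]  # G at even row, odd column
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]  # G at odd row, even column
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]  # B at odd row, odd column
    return mosaic
```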
6. Grayscale image: a single-channel image used to represent different degrees of brightness, with the brightest being pure white and the darkest pure black; that is, each pixel in a grayscale image corresponds to a different degree of brightness between black and white. To describe the brightness variation from brightest to darkest, it is usually divided, for example into 256 parts representing 256 levels of brightness, called 256 gray levels (gray level 0 to gray level 255).
7. Spectral response, also called spectral sensitivity, represents the ability of an image sensor to convert incident light of different wavelengths into electrical energy. If the light energy of a given wavelength incident on the image sensor is expressed as a number of photons, and the current generated by the sensor and transferred to an external circuit is expressed as a number of electrons, then the ability of each incident photon to be converted into an electron transferred to the external circuit is called the quantum efficiency (QE), expressed as a percentage; the spectral response of the image sensor then depends on the quantum efficiency as well as parameters such as wavelength and integration time.
8. Full width at half maximum: the width of a spectral peak at half of its maximum height, also called the half width.
The above is a brief introduction to the terms used in the embodiments of the present application, and details are not repeated below.
目前用于可见光成像的CMOS图像传感器大部分皆为传统的RGB传感器,由于硬件的限制,导致这种图像传感器只能接收红色通道信号、绿色通道信号和蓝色通道信号。这样,相对于人眼来说,该图像传感器的光谱响应范围是非常窄的,而较窄的光谱响应范围会限制图像传感器的颜色还原能力,影响还原出的图像的颜色等信息。
此外,相对于在强光环境下对信噪比的影响,较窄的光谱响应范围在暗光环境下对信噪比的影响更为显著,这样导致还原出的图像在暗光环境下的信噪比表现非常差。
由于RGB传感器偏窄的光谱响应范围制约着成像的颜色还原的上限以及暗光下较差的信噪比表现,因此,市场上出现了一些多光谱响应的可见光成像CMOS传感器,希望以此解决成像色彩还原和暗光信噪比的问题,但目前并没有成熟的处理方案来利用好这种传感器并实现精准颜色还原这一目标。
应理解,多光谱指的是用于成像的光谱波段包括2个及2个以上数量的波段。根据此定义,由于RGB传感器利用了红色、绿色和蓝色三个波段,所以,RGB传感器严格来说也是属于多光谱响应的,那么,本申请所指的多光谱响应的可见光CMOS传感器,其实指的是比RGB传感器的光谱响应范围宽的其他多光谱传感器。
例如,所述多光谱传感器可以为RYYB传感器、RGBW传感器等。应理解,该RYYB传感器接收的是红色通道信号、黄色通道信号和蓝色通道信号。该RGBW传感器接收的是红色通道信号、绿色通道信号、蓝色通道信号和白色通道信号。
图2提供了一种RGBY的光谱响应曲线的示意图。横轴表示波长,纵轴表示不同光谱所对应的光谱响应度。其中,Y所指示的光谱响应曲线表示黄光在不同波长所对应的不同光谱响应度,R所指示的光谱响应曲线表示红光在不同波长所对应的不同光谱响应度,G所指示的光谱响应曲线表示绿光在不同波长所对应的不同光谱响应度,B所指示的光谱响应曲线表示蓝光在不同波长所对应的不同光谱响应度。
以RYYB传感器为例,相对于RGB传感器来说,接收的黄色通道信号相当于是红色通道信号和绿色通道信号的叠加,这样,通过增加黄光的透光量,可以提升暗光下的表现,改善信噪比,但是,如图2所示,由于在光谱响应曲线图中,Y所指示的光谱响应曲线所对应的半峰宽相对于R、G分别所指示的光谱响应曲线所对应的半峰宽会宽一些,反而又会导致图像还原时丢失部分色彩信息,进而导致出现特定场景偏色或过曝等问题。
以RGBW传感器为例,相对于RGB传感器来说,接收的白色通道信号相当于是所有颜色通道信号的叠加,透光性更好,可改善暗光下的信噪比问题,但是同样的,由于光谱响应曲线图(图2中未示出)中白光对应的半峰宽会非常宽,在图像还原时也会出现丢失部分色彩信息的问题,进而导致出现特定场景偏色或过曝等问题。
由此,亟待一种新的处理方案,能对以上多个问题均进行有效的解决。
有鉴于此,本申请实施例提供了一种图像处理方法,通过获取多帧包含不同通道信息的初始图像,利用通道信息的不同完成图像动态融合,从而实现图像色彩的最大还原和信噪比的最佳表现。
本申请实施例提供的图像处理方法可以应用于拍摄领域。例如,可以应用于在暗光环境下拍摄图像或者录制视频。
图3示出了本申请实施例提供的一种应用场景的示意图。在一个示例中,以电子设备为手机进行举例说明,该手机包括非RGB传感器的多光谱传感器。
如图3所示,响应于用户的操作,电子设备可以启动相机应用,显示如图3中所示的图形用户界面(graphical user interface,GUI),该GUI界面可以称为预览界面。该预览界面包括多种拍摄模式选项和第一控件。该多种拍摄模式例如包括:拍照模式、录像模式等,该第一控件例如为拍摄键11,拍摄键11用于指示当前拍摄模式为多种拍摄模式中的其中一种。
示例性的,如图3所示,当用户启动相机应用,想在夜晚对户外草地、树木进行拍照时,用户点击预览界面上的拍摄键11,电子设备检测到用户对拍摄键11的点击操作后,响应于该点击操作,运行图像处理方法对应的程序,获取图像。
应理解,虽然该电子设备包括的多光谱传感器不是RGB传感器,例如为RYYB传感器,该电子设备的光谱响应范围相对于现有技术有所扩大,也就是说,颜色还原能力以及信噪比表现都有所提高,但是,由于黄光的影响,其拍摄的图像的颜色相对于实际场景中的颜色还是发生了偏色,导致拍摄出的图像颜色失真。对此,若该电子设备采用本申请实施例提供的图像处理方法进行处理,则能够校正颜色,改善拍摄的图像的视觉效果,提高图像质量。
应理解,上述为对应用场景的举例说明,并不对本申请的应用场景作任何限定。
下面结合说明书附图,对本申请实施例提供的图像处理方法进行详细描述。
实施例1
图4示出了本申请实施例提供的一种图像处理方法的流程示意图。如图4所示,本申请实施例提供了一种图像处理方法1,该图像处理方法1包括以下S11至S15。
S11、显示预览界面,预览界面包括第一控件。
S12、检测到对第一控件的第一操作。
第一控件例如为图3中所示的拍摄键11,第一操作例如为点击操作,当然,第一操作也可以为其他操作,本申请实施例对此不进行任何限制。
S13、响应于第一操作,获取多帧初始图像。该多帧初始图像中每帧初始图像所包含的通道信号不同。
应理解,该多帧初始图像均为拜耳格式图像,或者说,均位于RAW域。该多帧初始图像所包含的通道信号不同,是指多帧初始图像中的每帧初始图像以拜耳格式进行排布的像素所对应的多种颜色不相同。
应理解,该多帧初始图像可以是利用电子设备自身包括的多光谱传感器采集的或从其他设备获取的,具体可以根据需要进行设置,本申请实施例对此不进行任何限制。
应理解,当利用自身多光谱传感器获取多帧初始图像时,该多光谱传感器可以同时输出多帧包含不同通道信息的初始图像,或者,也可以串行输出多帧包含不同通道信息的初始图像,具体可以根据需要进行选择和设置,本申请实施例对此不作任何限制。
还应理解,虽然从多光谱传感器输出多帧初始图像时,可以是同时输出或串行输出,但无论如何输出,该多帧初始图像其实都是对待拍摄场景进行同一次拍摄所生成的图像。
图5示出了一种获取到的2帧初始图像的示意图。
示例性的,获取2帧初始图像,如图5中的(a)所示,其中一帧初始图像P1包括3种颜色的通道信号,例如,分别为通道信号T1、通道信号T2和通道信号T3;如图5中的(b)所示,另一帧初始图像P2包括2种颜色的通道信号(例如为T1和T2),或者,如图5中的(c)所示,初始图像P2也可以包括3种颜色的通道信号(例如为T1、T2和T4),或者,如图5中的(d)所示,初始图像P2也可以包括4种颜色的通道信号(例如为T1、T2、T3和T4),当然,初始图像P2也可以包括更多种颜色的通道信号,本申请实施例对此不进行任何限制。
其中,例如,图5获取的2帧初始图像中,初始图像P1包括的3种颜色的通道信号分别为红色通道信号(R)、绿色通道信号(G)和蓝色通道信号(B),并且该3种颜色以RGGB的排布方式进行重复。
例如,当初始图像P2包括3种颜色的通道信号时,初始图像P2包括的3种颜色的通道信号可以为红色通道信号(R)、黄色通道信号(Y)和蓝色通道信号(B),并且该3种颜色可以以RYYB的排布方式进行重复。或者,初始图像P2包括的3种颜色的通道信号可以为红色通道信号(R)、绿色通道信号(G)和青色通道信号(C),并且该3种颜色可以以RGGC的排布方式进行重复。或者,初始图像P2包括的3种颜色的通道信号可以为红色通道信号(R)、黄色通道信号(Y)和青色通道信号(C),并且该3种颜色以RYYC的排布方式进行重复。或者,初始图像P2包括的3种颜色的通道信号可以为红色通道信号(R)、白色通道信号(W)和蓝色通道信号(B),并且该3种颜色以RWWB的排布方式进行重复。或者,初始图像P2包括的3种颜色的通道信号可以为青色通道信号(C)、黄色通道信号(Y)和品红色通道信号(M),并且该3种颜色可以以CYYM的排布方式进行重复。
当初始图像P2包括4种颜色的通道信号时,初始图像P2包括的4种颜色的通道信号可以为红色通道信号(R)、绿色通道信号(G)、蓝色通道信号(B)和白色通道信号(W),并且该4种颜色可以以RGBW的排布方式进行重复。或者,初始图像P2包括的4种颜色的通道信号可以为红色通道信号(R)、绿色通道信号(G)、蓝色通道信号(B)和近红外通道信号(NIR),并且该4种颜色可以以RGB-NIR的排布方式进行重复。
应理解,上述为对通道信号的举例说明,并不对本申请的通道信号作任何限定。
图6示出了另一种获取到的2帧初始图像的示意图。
示例性的,获取2帧初始图像,如图6中的(a)、图6中的(b)和图6中的(c)所示,其中一帧初始图像P2与图5中的(b)、图5中的(c)、图5中的(d)所示的P2情况相同,在此不再赘述;而另一帧初始图像P3则如图6中的(d)所示,可以包括2种颜色的通道信号(例如为T1和T3),或者,如图6中的(e)所示,也可以包括4种颜色的通道信号(例如为T1、T2、T3和T5),当然,也可以包括更多种颜色的通道信号,本申请实施例对此不进行任何限制。
图7示出了一种获取到的3帧初始图像的示意图。
示例性的,获取3帧初始图像,其中,如图7中的(a)所示,初始图像P1包括3种颜色的通道信号(例如为T1、T2和T3),如图7中的(b)所示,初始图像P2也包括3种颜色的通道信号(例如为T1、T2和T4),如图7中的(c)所示,而初始图像P3则包括4种颜色的通道信号(例如为T1、T2、T3和T5)。当然,也可以获取更多帧初始图像,以及每帧初始图像包括不同颜色的通道信号,本申请实施例对此不进行任何限制。
应理解,上述为对获取到的2帧初始图像和3帧初始图像的举例说明,并不对本申请的获取的初始图像的帧数作任何限定。
还应理解,当多帧初始图像分别包括的通道信号不同时,后续处理过程中对颜色还原的能力更好;当多帧初始图像分别包括的通道信号具有部分相同时,后续处理过程中对信噪比的提升效果更好。
S14、分别对多帧初始图像中的每帧初始图像进行处理,得到各自对应的处理图像。
S15、将多帧处理图像进行融合,得到目标图像。
本申请实施例提供的图像处理方法,通过获取多帧包含不同通道信息的初始图像,然后对不同通道的初始图像进行处理之后再进行融合,从而可以实现图像色彩的最大还原和信噪比的最佳表现,避免出现偏色问题。
实施例2a
图8示出了本申请实施例提供的一种图像处理方法的流程示意图。如图8所示,本申请实施例提供了一种图像处理方法2,该图像处理方法2包括以下S21至S26。
S21、显示预览界面,预览界面包括第一控件。
S22、检测到对第一控件的第一操作。
第一控件和第一操作的描述与上述S11、S12中的描述相同,在此不再赘述。
S23、响应于第一操作,利用多光谱传感器,获取第一初始图像和第二初始图像,其中,第一初始图像和第二初始图像分别包含的通道信号不同。
示例性的,第一初始图像包含3个通道信号,每个像素对应一种颜色通道信号,分别为红色通道信号、绿色通道信号和蓝色通道信号,例如图8中所示的T1、T2和T3。第二初始图像也包含3个通道信号,每个像素对应一种颜色通道信号,该3个通道信号与第一初始图像的通道信号均不同,分别为青色通道信号、品红色通道信号和黄色通道信号,例如图8中所示的T4、T5和T6。后续第一初始图像和第二初始图像均以此为例,不再赘述。
此处,第一初始图像和第二初始图像均为拜耳格式图像,或者说,均为位于RAW域的图像。由于第一初始图像和第二初始图像分别包括的通道信号不同,后续处理过程中对颜色还原的能力更好。
应理解,该多光谱传感器可以获取的通道信号的数量应该大于或等于第一初始图像和第二初始图像对应的通道信号数量之和。例如,当第一初始图像包含红色通道信号、绿色通道信号和蓝色通道信号,第二初始图像包含青色通道信号、品红色通道信号和黄色通道信号时,该多光谱传感器至少可以获取6种不同颜色的通道信号,分别为红色通道信号、绿色通道信号、蓝色通道信号、青色通道信号、品红色通道信号和黄色通道信号,由此,才可以生成两个包含3种不同颜色通道信号的第一初始图像和第二初始图像。
应理解,在本申请实施例中,利用多光谱传感器获取的初始图像的数量,以及每帧初始图像对应的通道信号均可以根据需要进行设置和更改,本申请实施例对此不进行任何限制,上述仅为一种示例。
应理解,此处,多光谱传感器可以通过一路数据通路,输出第一初始图像和第二初始图像,或者,也可以通过两路数据通路,分别输出第一初始图像和第二初始图像,具体可以根据需要进行设置,本申请实施例对此不进行任何限制。图8以两路数据通路进行传输进行示意。
S24、在第一图像信号处理器(如图8中所示的ISP1)中,对第一初始图像进行第一前端处理,得到位于RAW域的第一前端处理图像;并在该第一图像信号处理器中,对第二初始图像进行第二前端处理,得到位于RAW域的第二前端处理图像。
应理解,处理后的第一前端处理图像和第二前端处理图像均位于RAW域,也就是说,第一前端处理图像和第二前端处理图像均是拜耳格式图像。
可选地,第一前端处理和第二前端处理均可以包括:动态坏点补偿(defect pixel correction,DPC)、降噪、镜头阴影校正(lens shading correction,LSC)和宽动态范围调整(wide dynamic range,WDR)中的至少一项。
应理解,动态坏点补偿用于解决多光谱传感器上光线采集的点形成的阵列所存在的缺陷,或者光信号进行转化的过程中存在的错误;通常通过在亮度域上取其他周围像素点均值来消除坏点。
应理解,降噪用于减少图像中的噪声,一般方法有均值滤波、高斯滤波、双边滤波等。镜头阴影校正用于消除由于镜头光学系统原因造成的图像四周颜色以及亮度与图像中心不一致的问题。
宽动态范围调整指的是:当在强光源(日光、灯具或反光等)照射下的高亮度区域及阴影、逆光等相对亮度较低的区域在图像中同时存在时,图像会出现明亮区域因曝光过度成为白色,而黑暗区域因曝光不足成为黑色,严重影响图像质量。因此,可以在同一场景中对最亮区域及较暗区域进行调整,例如使暗区在图像中变亮,亮区在图像中变暗,从而使得处理后的图像可以呈现暗区和亮区中的更多细节。
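结合上述四个步骤,下面给出一段示意性的Python代码草图,演示动态坏点补偿、降噪、镜头阴影校正和宽动态范围调整的基本思路。其中的阈值、增益图与伽马值均为假设的示例参数;实际实现通常按拜耳格式分通道处理,此处为简明起见按单通道图像示意,并非本申请方案的限定实现:

```python
import numpy as np
from scipy.ndimage import median_filter, uniform_filter

def defect_pixel_correction(raw: np.ndarray, thresh: float = 0.2) -> np.ndarray:
    """动态坏点补偿(示意):与3×3邻域中值偏差过大的像素视为坏点,用邻域中值替换。"""
    med = median_filter(raw, size=3)
    out = raw.copy()
    bad = np.abs(raw - med) > thresh
    out[bad] = med[bad]
    return out

def denoise(raw: np.ndarray) -> np.ndarray:
    """降噪(示意):3×3均值滤波;实际中也可使用高斯滤波、双边滤波等。"""
    return uniform_filter(raw, size=3)

def lens_shading_correction(raw: np.ndarray, gain_map: np.ndarray) -> np.ndarray:
    """镜头阴影校正(示意):逐像素乘以预先标定的增益图,补偿图像四周的亮度衰减。"""
    return raw * gain_map

def wdr(raw: np.ndarray, gamma: float = 0.6) -> np.ndarray:
    """宽动态范围调整(示意):用幂函数提亮暗区、压缩亮区,raw取值范围[0, 1]。"""
    return np.power(np.clip(raw, 0.0, 1.0), gamma)
```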
应理解,第一前端处理和第二前端处理均可以包括上述一个或多个处理步骤,当第一前端处理或第二前端处理包括多个处理步骤时,该多个处理步骤的顺序可以根据需要进行调整,本申请实施例对此不进行任何限制。此外,第一前端处理和第二前端处理均还可以包括其他步骤,具体可以根据需要进行增加,本申请实施例对此不进行任何限制。
图11示出了一种第一前端处理或第二前端处理的示意图。
例如,如图11所示,第一前端处理或第二前端处理按处理顺序均包括:动态坏点补偿、降噪、镜头阴影校正和宽动态范围调整。
例如,在图11的基础上,第一前端处理和第二前端处理还可以包括:自动白平衡(auto white balance,AWB)。
其中,自动白平衡用于使得白色在任何色温下相机均能呈现出真正的白色。
此处,第一前端处理和第二前端处理可以相同也可以不相同,具体可以根据需要进行设置和更改,本申请实施例对此不进行任何限制。
应理解,第一初始图像经过第一前端处理、第二初始图像经过第二前端处理之后,第一初始图像和第二初始图像中的坏点将减少、噪声降低、颜色均衡、亮区和暗区的细节更清楚,动态范围得以提升,从而整个图像的质量将有效提高。
S25、在第一图像信号处理器ISP1中,将第一前端处理图像和第二前端处理图像进行第一融合处理,得到位于RAW域的第一融合图像。
应理解,由于第一前端处理图像和第二前端处理图像均位于RAW域,第一融合图像也位于RAW域,因此,可以得知第一融合处理实际上为RAW域的融合处理。在RAW域进行融合处理,可以保留图像更多的细节。
S26、在第一图像信号处理器ISP1中,对位于RAW域的第一融合图像进行第一后端处理,得到位于YUV域的目标图像。
可选地,第一后端处理可以包括:颜色校正(color correction matrix,CCM)和RGB域转YUV域。
其中,颜色校正用于校准除白色以外其他颜色的准确度,应理解,在颜色校正的过程中,相当于将位于RAW域的图像转换至了RGB域,而位于RGB域的图像也就是通常所说的每个像素均包括红色通道信号、绿色通道信号和蓝色通道信号的彩色图像。
RGB域转YUV域,用于将位于RGB域的图像转换至YUV域。
可选地,第一后端处理还可以包括:伽马(Gamma)校正和风格变换(3 dimensional look up table,3DLUT)中的至少一项。
其中,伽马校正用于通过调整伽马曲线来调整图像的亮度、对比度、动态范围等;风格变换指示颜色的风格变换,即使用颜色滤镜,使原始的图像风格变成其他的图像风格,常见的风格比如电影风格、日系风格、阴森风格等。
应理解,第一后端处理可以包括上述一个或多个处理步骤,当第一后端处理包括多个处理步骤时,该多个处理步骤的顺序可以根据需要进行调整,本申请实施例对此不进行任何限制。此外,第一后端处理还可以包括其他步骤,具体可以根据需要进行增加,本申请实施例对此不进行任何限制。
图12示出了一种第一后端处理的示意图。
例如,如图12所示,第一后端处理按处理顺序包括:颜色校正、伽马校正、风格变换和RGB域转YUV域。
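按照图12的处理顺序,下面给出一段示意性的Python代码草图。其中3×3颜色校正矩阵与伽马值均为假设参数,实际需按传感器标定;风格变换的3DLUT查找此处从略,代码仅为帮助理解的草图:

```python
import numpy as np

CCM = np.array([[ 1.6, -0.4, -0.2],
                [-0.3,  1.5, -0.2],
                [-0.1, -0.5,  1.6]])   # 假设的颜色校正矩阵,每行之和为1

RGB2YUV = np.array([[ 0.299,    0.587,    0.114  ],
                    [-0.14713, -0.28886,  0.436  ],
                    [ 0.615,   -0.51499, -0.10001]])  # BT.601系数,示例用

def first_backend(rgb: np.ndarray, gamma: float = 1 / 2.2) -> np.ndarray:
    """第一后端处理(示意):颜色校正 -> 伽马校正 -> RGB域转YUV域。输入取值[0, 1]。"""
    rgb = np.clip(rgb @ CCM.T, 0.0, 1.0)   # 颜色校正:校准除白色以外其他颜色的准确度
    rgb = np.power(rgb, gamma)             # 伽马校正:调整图像的亮度与对比度
    return rgb @ RGB2YUV.T                 # 转换到YUV域,减小后续传输的数据量
```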
应理解,经过第一后端处理之后,第一融合图像从RAW域转换至YUV域,可以减小后续传输的数据量,节省带宽。
还应理解,目标图像位于YUV域。目标图像可以被作为拍摄图像在电子设备100的界面上进行显示,或者,仅进行存储,具体可以根据需要进行设置,本申请实施例对此不进行任何限制。
在该实施例中,基于包含通道信号不同的第一初始图像和第二初始图像,在同一图像信号处理器中,先各自经过前端处理之后,再进行RAW域的融合,然后对融合后的第一融合图像进行第一后端处理,使其从RAW域转换至YUV域,接着,从图像信号处理器中输出第一后端处理后的位于YUV域的目标图像。由于在RAW域进行融合之前,初始图像各自包含的通道信号不同且进行了一系列处理,并且在RAW域融合保留了更多的细节,从而可以实现图像色彩的最大还原和信噪比的最佳表现。
还应理解,上述过程仅为一种示例,具体可以根据需要进行顺序上的调整,当然,还可以增加或减少步骤,本申请实施例对此不进行任何限制。
实施例2b
图9示出了本申请实施例提供的一种图像处理方法的流程示意图。如图9所示,本申请实施例提供了一种图像处理方法3,该图像处理方法3包括以下S31至S37。
S31、显示预览界面,预览界面包括第一控件。
S32、检测到对第一控件的第一操作。
其中,对第一控件和第一操作的描述与上述S11、S12中的描述相同,在此不再赘述。
S33、响应于第一操作,利用多光谱传感器,获取第一初始图像和第二初始图像,其中,第一初始图像和第二初始图像分别包含的通道信号不同。
其中,对第一初始图像和第二初始图像的描述与上述S23中的描述相同,在此不再赘述。第一初始图像和第二初始图像仅为本申请实施例提供的一种示例。
S34、在第一图像信号处理器(如图9中所示的ISP1)中,对第一初始图像进行第一前端处理,得到位于RAW域的第一前端处理图像;并在该第一图像信号处理器中,对第二初始图像进行第二前端处理,得到位于RAW域的第二前端处理图像。
对第一前端处理图像和第二前端处理图像的描述与上述S24中的描述相同,在此不再赘述。
S35、在第一图像信号处理器ISP1中,对第一前端处理图像进行颜色校正,得到位于RGB域的第一校正图像;并在该第一图像信号处理器中,对第二前端处理图像进行颜色校正,得到位于RGB域的第二校正图像。
其中,颜色校正用于校准除白色以外其他颜色的准确度,应理解,在颜色校正的过程中,相当于将位于RAW域的图像转换至了RGB域,而位于RGB域的图像也就是通常所说的每个像素均包括红色通道信号、绿色通道信号和蓝色通道信号的彩色图像。
应理解,由于第一前端处理图像和第二前端处理图像均位于RAW域,颜色校正相当于进行RAW域转换至RGB域的处理,由此,得到的第一校正图像和第二校正图像均位于RGB域。
S36、在第一图像信号处理器ISP1中,对位于RGB域的第一校正图像和位于RGB域的第二校正图像进行第二融合处理,得到位于RGB域的第二融合图像。
第二融合图像也位于RGB域,因此,可以得知第二融合处理实际上为RGB域的融合处理。通过将包含不同通道信号的RAW域图像,转换至相同标准的RGB域后再进行融合,有利于在相同颜色空间对图像进行处理,得到该RGB颜色空间的最优效果。
S37、在第一图像信号处理器ISP1中,对位于RGB域的第二融合图像进行第二后端处理,得到位于YUV域的目标图像。
可选地,第二后端处理可以包括:RGB域转YUV域。
RGB域转YUV域,用于将位于RGB域的图像转换至YUV域。
可选地,第二后端处理还可以包括:伽马校正和风格变换中的至少一项。
其中,对伽马校正和风格变换的描述与上述S26中的描述相同,在此不再赘述。
应理解,第二后端处理可以包括上述一个或多个处理步骤,当第二后端处理包括多个处理步骤时,该多个处理步骤的顺序可以根据需要进行调整,本申请实施例对此不进行任何限制。此外,第二后端处理还可以包括其他步骤,具体可以根据需要进行增加,本申请实施例对此不进行任何限制。
图13示出了一种第二后端处理的示意图。
例如,如图13所示,第二后端处理按处理顺序包括:伽马校正、风格变换和RGB域转YUV域。
应理解,经过第二后端处理之后,第二融合图像从RGB域转换至YUV域,可以减小后续传输的数据量,节省带宽。
还应理解,目标图像位于YUV域。目标图像可以被作为拍摄图像在电子设备100的界面上进行显示,或者,仅进行存储,具体可以根据需要进行设置,本申请实施例对此不进行任何限制。
在该实施例中,基于包含通道信号不同的第一初始图像和第二初始图像,在同一图像信号处理器中,先各自经过前端处理和颜色校正之后,再进行RGB域的融合,然后对融合后的第二融合图像进行第二后端处理,使其从RGB域转换至YUV域,接着,从图像信号处理器中输出第二后端处理后的位于YUV域的目标图像。由于在RGB域融合之前,初始图像各自包含的通道信号不同且进行了一系列处理和颜色校正,从而可以实现图像色彩的最大还原和信噪比的最佳表现。
还应理解,上述过程仅为一种示例,具体可以根据需要进行顺序上的调整,当然,还可以增加或减少步骤,本申请实施例对此不进行任何限制。
实施例2c
图10示出了本申请实施例提供的一种图像处理方法的流程示意图。如图10所示,本申请实施例提供了一种图像处理方法4,该图像处理方法4包括以下S41至S46。
S41、显示预览界面,预览界面包括第一控件。
S42、检测到对第一控件的第一操作。
其中,对第一控件和第一操作的描述与上述S11、S12中的描述相同,在此不再赘述。
S43、响应于第一操作,利用多光谱传感器,获取第一初始图像和第二初始图像,其中,第一初始图像和第二初始图像分别包含的通道信号不同。
其中,对第一初始图像和第二初始图像的描述与上述S23中的描述相同,在此不再赘述。第一初始图像和第二初始图像仅为本申请实施例提供的一种示例。
S44、在第一图像信号处理器(如图10中所示的ISP1)中,对第一初始图像进行第一前端处理,得到位于RAW域的第一前端处理图像;并在该第一图像信号处理器中,对第二初始图像进行第二前端处理,得到位于RAW域的第二前端处理图像。
对第一前端处理图像和第二前端处理图像的描述与上述S24中的描述相同,在此不再赘述。
S45、在第一图像信号处理器ISP1中,对第一前端处理图像进行第一中间处理,得到位于YUV域的第一中间处理图像;并在该第一图像信号处理器中,对第二前端处理图像进行第二中间处理,得到位于YUV域的第二中间处理图像。
可选地,第一中间处理和第二中间处理均可以包括:颜色校正和RGB域转YUV域。
其中,颜色校正用于校准除白色以外其他颜色的准确度,应理解,在颜色校正的过程中,相当于将位于RAW域的图像转换至了RGB域,而位于RGB域的图像也就是通常所说的每个像素均包括红色通道信号、绿色通道信号和蓝色通道信号的彩色图像。
RGB域转YUV域,用于将位于RGB域的图像转换至YUV域。
应理解,由于第一前端处理图像和第二前端处理图像均位于RAW域,颜色校正相当于进行RAW域转换至RGB域的处理,然后再进行RGB域转YUV域,相当于将第一前端处理图像从RAW域转换至YUV域,将第二前端处理图像从RAW域转换至YUV域,从而得到的第一中间处理图像和第二中间处理图像均位于YUV域。
可选地,第一中间处理和第二中间处理均还可以包括:伽马校正和风格变换中的至少一项。
其中,对伽马校正和风格变换的描述与上述S26中的描述相同,在此不再赘述。
应理解,第一中间处理和第二中间处理可以包括上述一个或多个处理步骤,当第一中间处理和第二中间处理包括多个处理步骤时,该多个处理步骤的顺序可以根据需要进行调整,本申请实施例对此不进行任何限制。此外,第一中间处理和第二中间处理均还可以包括其他步骤,具体可以根据需要进行增加,本申请实施例对此不进行任何限制。
例如,如图12所示,第一中间处理、第二中间处理与第一后端处理相同,均按处理顺序包括:颜色校正、伽马校正、风格变换和RGB域转YUV域。
此处,第一中间处理和第二中间处理可以相同也可以不相同,具体可以根据需要进行设置和更改,本申请实施例对此不进行任何限制。
S46、在第一图像信号处理器ISP1中,对位于YUV域的第一中间处理图像和位于YUV域的第二中间处理图像进行第三融合处理,得到位于YUV域的第三融合图像,第三融合图像也就是目标图像。
第三融合图像也位于YUV域,因此,可以得知第三融合处理实际上为YUV域的融合处理。在YUV域进行融合处理,数据量小,处理速度更快。
还应理解,目标图像,也即第三融合图像可以被作为拍摄图像在电子设备100的界面上进行显示,或者,仅进行存储,具体可以根据需要进行设置,本申请实施例对此不进行任何限制。
在该实施例中,基于包含通道信号不同的第一初始图像和第二初始图像,在同一图像信号处理器中,先各自经过前端处理和中间处理之后,再进行YUV域的融合,接着,从图像信号处理器中直接输出YUV域融合之后位于YUV域的目标图像。由于在YUV域融合之前,初始图像各自包含的通道信号不同且进行了一系列处理和颜色校正,从而可以实现图像色彩的最大还原和信噪比的最佳表现。
还应理解,上述过程仅为一种示例,具体可以根据需要进行顺序上的调整,当然,还可以增加或减少步骤,本申请实施例对此不进行任何限制。
实施例3a
图14示出了本申请实施例提供的一种图像处理方法的流程示意图。如图14所示,本申请实施例提供了一种图像处理方法5,该图像处理方法5包括以下S51至S56。
S51、显示预览界面,预览界面包括第一控件。
S52、检测到对第一控件的第一操作。
第一控件和第一操作的描述与上述S11、S12中的描述相同,在此不再赘述。
S53、响应于第一操作,利用多光谱传感器,获取第一初始图像和第二初始图像,其中,第一初始图像和第二初始图像分别包含的通道信号不同。
其中,对第一初始图像和第二初始图像的描述与上述S23中的描述相同,在此不再赘述。第一初始图像和第二初始图像仅为本申请实施例提供的一种示例。
S54、在第二图像信号处理器(如图14中所示的ISP2)中,对第一初始图像进行第一前端处理,得到位于RAW域的第一前端处理图像并输出;在第三图像信号处理器(如图14中所示的ISP3)中,对第二初始图像进行第二前端处理,得到位于RAW域的第二前端处理图像并输出。
其中,对第一前端处理图像和第二前端处理图像的描述与上述S24中的描述相同,在此不再赘述。为了图像处理流程中后续的融合处理能得到更好的颜色还原效果,第一前端处理和第二前端处理需分开在两个图像信号处理器中进行。
S55、将第一前端处理图像和第二前端处理图像进行第一融合处理,得到位于RAW域的第一融合图像。
第一融合处理实际上为RAW域的融合处理。
S56、对位于RAW域的第一融合图像进行第一后端处理,得到位于YUV域的目标图像。
其中,对第一后端处理的描述与上述S26中的描述相同,在此不再赘述。
应理解,第一融合处理和第一后端处理可以在第二图像信号处理器ISP2中进行,也可以在第三图像信号处理器ISP3中进行,当然,还可以在其他图像信号处理器中进行,本申请实施例对此不进行任何限制。
还应理解,目标图像,可以被作为拍摄图像在电子设备100的界面上进行显示,或者,仅进行存储,具体可以根据需要进行设置,本申请实施例对此不进行任何限制。
在该实施例中,基于包含通道信号不同的第一初始图像和第二初始图像,在不同图像信号处理器中,先各自进行前端处理;然后,再将从不同图像信号处理器输出的前端处理图像进行RAW域的融合;接着,对融合后的第一融合图像进行第一后端处理,使其从RAW域转换至YUV域。由于在RAW域融合之前,初始图像各自包含的通道信号不同且进行了一系列处理,并且在RAW域融合保留了更多的细节,从而可以实现图像色彩的最大还原和信噪比的最佳表现。
还应理解,上述过程仅为一种示例,具体可以根据需要进行顺序上的调整,当然,还可以增加或减少步骤,本申请实施例对此不进行任何限制。
实施例3b
图15示出了本申请实施例提供的一种图像处理方法的流程示意图。如图15所示,本申请实施例提供了一种图像处理方法6,该图像处理方法6包括以下S61至S67。
S61、显示预览界面,预览界面包括第一控件。
S62、检测到对第一控件的第一操作。
其中,对第一控件和第一操作的描述与上述S11、S12中的描述相同,在此不再赘述。
S63、响应于第一操作,利用多光谱传感器,获取第一初始图像和第二初始图像,其中,第一初始图像和第二初始图像分别包含的通道信号不同。
其中,对第一初始图像和第二初始图像的描述与上述S23中的描述相同,在此不再赘述。第一初始图像和第二初始图像仅为本申请实施例提供的一种示例。
S64、在第二图像信号处理器(如图15中所示的ISP2)中,对第一初始图像进行第一前端处理,得到位于RAW域的第一前端处理图像;在第三图像信号处理器(如图15中所示的ISP3)中,对第二初始图像进行第二前端处理,得到位于RAW域的第二前端处理图像。
其中,对第一前端处理图像和第二前端处理图像的描述与上述S24中的描述相同,在此不再赘述。
S65、在第二图像信号处理器ISP2中,对第一前端处理图像进行颜色校正,得到位于RGB域的第一校正图像并输出;以及在第三图像信号处理器ISP3中,对第二前端处理图像进行颜色校正,得到位于RGB域的第二校正图像并输出。
其中,对颜色校正的描述,与对图9中的颜色校正的描述相同,在此不再赘述。
S66、对位于RGB域的第一校正图像和位于RGB域的第二校正图像进行第二融合处理,得到位于RGB域的第二融合图像。
第二融合图像也位于RGB域,因此,可以得知第二融合处理实际上为RGB域的融合处理。通过将包含不同通道信号的RAW域图像,转换至相同标准的RGB域进行融合,有利于在相同颜色空间对图像进行处理,得到该RGB颜色空间的最优效果。
S67、对位于RGB域的第二融合图像进行第二后端处理,得到位于YUV域的目标图像。
其中,对第二后端处理的描述与上述S37中的描述相同,在此不再赘述。
应理解,第二融合处理和第二后端处理可以在第二图像信号处理器ISP2中进行,也可以在第三图像信号处理器ISP3中进行,当然,还可以在其他图像信号处理器中进行,本申请实施例对此不进行任何限制。
还应理解,目标图像,可以被作为拍摄图像在电子设备100的界面上进行显示,或者,仅进行存储,具体可以根据需要进行设置,本申请实施例对此不进行任何限制。
在该实施例中,基于包含通道信号不同的第一初始图像和第二初始图像,在不同图像信号处理器中,先各自经过前端处理和颜色校正;然后,再将从不同图像信号处理器输出的校正图像进行RGB域的融合;接着,对融合后第二融合图像进行第二后端处理,使其从RGB域转换至YUV域。由于在RGB域融合之前,初始图像各自包含的通道信号不同且进行了一系列处理和颜色校正,从而可以实现图像色彩的最大还原和信噪比的最佳表现。
还应理解,上述过程仅为一种示例,具体可以根据需要进行顺序上的调整,当然,还可以增加或减少步骤,本申请实施例对此不进行任何限制。
实施例3c
图16示出了本申请实施例提供的一种图像处理方法的流程示意图。如图16所示,本申请实施例提供了一种图像处理方法7,该图像处理方法7包括以下S71至S76。
S71、显示预览界面,预览界面包括第一控件。
S72、检测到对第一控件的第一操作。
其中,对第一控件和第一操作的描述与上述S11、S12中的描述相同,在此不再赘述。
S73、响应于第一操作,利用多光谱传感器,获取第一初始图像和第二初始图像,其中,第一初始图像和第二初始图像分别包含的通道信号不同。
其中,对第一初始图像和第二初始图像的描述与上述S23中的描述相同,在此不再赘述。第一初始图像和第二初始图像仅为本申请实施例提供的一种示例。
S74、在第二图像信号处理器(如图16中所示的ISP2)中,对第一初始图像进行第一前端处理,得到位于RAW域的第一前端处理图像;在第三图像信号处理器(如图16中所示的ISP3)中,对第二初始图像进行第二前端处理,得到位于RAW域的第二前端处理图像。
其中,对第一前端处理图像和第二前端处理图像的描述与上述S24中的描述相同,在此不再赘述。
S75、在第二图像信号处理器ISP2中,对第一前端处理图像进行第一中间处理,得到位于YUV域的第一中间处理图像并输出;以及在第三图像信号处理器ISP3中,对第二前端处理图像进行第二中间处理,得到位于YUV域的第二中间处理图像并输出。
其中,对第一中间处理和第二中间处理的描述与上述S45中的描述相同,在此不再赘述。
S76、对位于YUV域的第一中间处理图像和位于YUV域的第二中间处理图像进行第三融合处理,得到位于YUV域的第三融合图像,第三融合图像也就是目标图像。
第三融合图像也位于YUV域,因此,可以得知第三融合处理实际上为YUV域的融合处理。在YUV域进行融合处理,数据量小,处理速度更快。
应理解,第三融合处理可以在第二图像信号处理器ISP2中进行,也可以在第三图像信号处理器ISP3中进行,当然,还可以在其他图像信号处理器中进行,本申请实施例对此不进行任何限制。
还应理解,目标图像,也即第三融合图像可以被作为拍摄图像在电子设备100的界面上进行显示,或者,仅进行存储,具体可以根据需要进行设置,本申请实施例对此不进行任何限制。
在该实施例中,基于包含通道信号不同的第一初始图像和第二初始图像,在不同图像信号处理器中,先各自经过前端处理和中间处理,转换成位于YUV域的中间处理图像;然后,再将从不同图像信号处理器输出的中间处理图像进行YUV域的融合,得到位于YUV域的目标图像。由于在YUV域融合之前,初始图像各自包含的通道信号不同且进行了一系列处理和颜色校正,从而实现图像色彩的最大还原和信噪比的最佳表现。
还应理解,上述过程仅为一种示例,具体可以根据需要进行顺序上的调整,当然,还可以增加或减少步骤,本申请实施例对此不进行任何限制。
实施例4a
结合上述实施例3a,本申请还提供了另一种图像处理方法的流程示意图。如图17所示,本申请实施例提供了一种图像处理方法8。
在该图像处理方法8中,除了S53,其他处理步骤均与图14所示的图像处理方法5中的步骤相同,在此不再进行赘述。
在该图像处理方法8中,S53包括以下S531和S532:
S531、响应于第一操作,利用多光谱传感器,获取待预处理图像。
S532、对待预处理图像进行预处理,得到第一初始图像和第二初始图像。该预处理用于转换待预处理图像包含的通道信号。
应理解,该待预处理图像包含多个通道信号,该通道信号的数量大于或等于第一初始图像和第二初始图像的通道信号的总和。
其中,对第一初始图像和第二初始图像的描述与上述S23中的描述相同,在此不再赘述。此处以待预处理图像包含的通道信号等于第一初始图像和第二初始图像的总和为例,当第一初始图像包含红色通道信号、绿色通道信号和蓝色通道信号,第二初始图像包含青色通道信号、品红色通道信号和黄色通道信号时,该多光谱传感器获取的待预处理图像至少包括6种不同颜色的通道信号,分别为红色通道信号、绿色通道信号、蓝色通道信号、青色通道信号、品红色通道信号和黄色通道信号。应理解,上述仅为一种示例。
可选地,预处理可以为:水平方向binning、垂直方向binning、v2h2binning,或者remosaic。
应理解,binning指的是将拜耳格式阵列中相邻像素感应的电荷加在一起,以一个像素的模式输出。例如,水平方向binning指的是将相邻的行的电荷加在一起输出;垂直方向binning指的是将相邻的列的电荷加在一起输出;v2h2binning指的是水平方向和垂直方向同时都进行相加,这样可以将2×2分布的像素合成1×1,由此图像的长宽均缩短为原来的一半,输出分辨率降低为原本的四分之一。
应理解,remosaic也指的是将四个像素合并为一个像素,但与v2h2binning不同的是,remosaic指的是将Quadra CFA(Quadra Color Filter array)格式中的四个像素合并为一个像素。此时,待预处理图像的格式为Quadra CFA,该格式的待预处理图像中相邻的四个像素实际上对应同一种颜色的通道信号,基于此,经过remosaic处理之后,可以还原出拜耳格式图像。
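以Quadra CFA格式为例,下面用一段示意性的Python代码说明v2h2binning的像素合并过程(remosaic需按插值算法还原拜耳排布,较为复杂,此处从略;代码仅为帮助理解的草图):

```python
import numpy as np

def v2h2_binning(quadra: np.ndarray) -> np.ndarray:
    """对Quadra CFA图像做v2h2binning(示意):相邻2×2的同色像素
    电荷相加后以一个像素输出,图像的长宽各缩短为原来的一半,
    输出分辨率降低为原本的四分之一。"""
    return (quadra[0::2, 0::2] + quadra[0::2, 1::2] +
            quadra[1::2, 0::2] + quadra[1::2, 1::2])
```

仅做水平方向binning时,只需将相邻两列相加(即quadra[:, 0::2] + quadra[:, 1::2]);垂直方向binning同理,将相邻两行相加即可。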
当然,预处理还可以为其他方式,本申请实施例对此不进行任何限制。
应理解,预处理可以在第二图像信号处理器ISP2中进行,也可以在第三图像信号处理器ISP3中进行,当然,还可以在其他图像信号处理器中进行,本申请实施例对此不进行任何限制。
此外,后续对第一初始图像进行第一前端处理可以在第二图像信号处理器ISP2中进行,对第二初始图像进行第二前端处理可以在第三图像信号处理器ISP3中进行,或者,对第一初始图像进行第一前端处理,对第二初始图像进行第二前端处理也可以在同一个图像信号处理器中进行,本申请实施例对此不进行任何限制。
在该实施例中,多光谱传感器可以仅用于获取一帧包含通道信号比较多的预处理图像,通过对预处理图像进行预处理,也即进行拆分,从而拆分出包含不同通道信号的第一初始图像和第二初始图像;然后,再基于包含通道信号不同的第一初始图像和第二初始图像,在不同图像信号处理器中,先各自进行前端处理;再将从不同图像信号处理器输出的前端处理图像进行RAW域的融合;接着,对融合后的第一融合图像进行第一后端处理,使其从RAW域转换至YUV域。由于在RAW域进行融合之前,初始图像包含的通道信号不同且各自进行了一系列处理,并且在RAW域融合保留了更多的细节,从而可以实现图像色彩的最大还原和信噪比的最佳表现。
实施例4b
结合上述实施例3b,本申请还提供了另一种图像处理方法的流程示意图。如图18所示,本申请实施例提供了一种图像处理方法9。
在该图像处理方法9中,除了S63,其他处理步骤均与图15所示的图像处理方法6中的步骤相同,在此不再进行赘述。
在该图像处理方法9中,S63可以包括以下S631和S632:
S631、响应于第一操作,利用多光谱传感器,获取待预处理图像。
S632、对待预处理图像进行预处理,得到第一初始图像和第二初始图像。该预处理用于转换待预处理图像包含的通道信号。
应理解,该待预处理图像包含多个通道信号。该通道信号的数量大于或等于第一初始图像和第二初始图像的通道信号的总和。
其中,对第一初始图像和第二初始图像的描述与上述S24中的描述相同,在此不再赘述。此处以待预处理图像包含的通道信号等于第一初始图像和第二初始图像的总和为例,当第一初始图像包含红色通道信号、绿色通道信号和蓝色通道信号,第二初始图像包含青色通道信号、品红色通道信号和黄色通道信号时,该多光谱传感器获取的待预处理图像至少包括6种不同颜色的通道信号,分别为红色通道信号、绿色通道信号、蓝色通道信号、青色通道信号、品红色通道信号和黄色通道信号。应理解,上述仅为一种示例。
对预处理的描述与上述实施例4a中的描述相同,在此不再赘述。
应理解,预处理可以在第二图像信号处理器ISP2中进行,也可以在第三图像信号处理器ISP3中进行,当然,还可以在其他图像信号处理器中进行,本申请实施例对此不进行任何限制。
此外,后续对第一初始图像进行第一前端处理和颜色校正可以在第二图像信号处理器ISP2中进行,对第二初始图像进行第二前端处理和颜色校正可以在第三图像信号处理器ISP3中进行,或者,对第一初始图像进行第一前端处理和颜色校正,对第二初始图像进行第二前端处理和颜色校正也可以在同一个图像信号处理器中进行,本申请实施例对此不进行任何限制。
在该实施例中,多光谱传感器可以仅用于获取一帧包含通道信号比较多的预处理图像,通过对预处理图像进行预处理,也即进行拆分,从而拆分出包含不同通道信号的第一初始图像和第二初始图像;然后,在不同图像信号处理器中,先各自经过前端处理和颜色校正;再将从不同图像信号处理器输出的校正图像进行RGB域的融合;接着,对融合后第二融合图像进行第二后端处理,使其从RGB域转换至YUV域。由于在RGB域融合之前,初始图像各自包含的通道信号不同且进行了一系列处理和颜色校正,从而可以实现图像色彩的最大还原和信噪比的最佳表现。
实施例4c
结合上述实施例3c,本申请还提供了另一种图像处理方法的流程示意图。如图19所示,本申请实施例提供了一种图像处理方法10。
在该图像处理方法10中,除了S73,其他处理步骤均与图16所示的图像处理方法7中的步骤相同,在此不再进行赘述。
在该图像处理方法10中,S73可以包括以下S731和S732:
S731、响应于第一操作,利用多光谱传感器,获取待预处理图像。
S732、对待预处理图像进行预处理,得到第一初始图像和第二初始图像。该预处理用于转换待预处理图像包含的通道信号。
应理解,该待预处理图像包含多个通道信号。该通道信号的数量大于或等于第一初始图像和第二初始图像的通道信号的总和。
其中,对第一初始图像和第二初始图像的描述与上述S24中的描述相同,在此不再赘述。此处以待预处理图像包含的通道信号等于第一初始图像和第二初始图像的总和为例,当第一初始图像包含红色通道信号、绿色通道信号和蓝色通道信号,第二初始图像包含青色通道信号、品红色通道信号和黄色通道信号时,该多光谱传感器获取的待预处理图像至少包括6种不同颜色的通道信号,分别为红色通道信号、绿色通道信号、蓝色通道信号、青色通道信号、品红色通道信号和黄色通道信号。应理解,上述仅为一种示例。
对预处理的描述与上述实施例4a中的描述相同,在此不再赘述。
此外,后续对第一初始图像进行第一前端处理和第一中间处理可以在第二图像信号处理器ISP2中进行,对第二初始图像进行第二前端处理和第二中间处理可以在第三图像信号处理器ISP3中进行,或者,对第一初始图像进行第一前端处理和第一中间处理,对第二初始图像进行第二前端处理和第二中间处理也可以在同一个图像信号处理器中进行,本申请实施例对此不进行任何限制。
在该实施例中,多光谱传感器可以仅用于获取一帧包含通道信号比较多的预处理图像,通过对预处理图像进行预处理,也即进行拆分,从而拆分出包含不同通道信号的第一初始图像和第二初始图像;然后,在不同图像信号处理器中,先各自经过前端处理和中间处理,转换成位于YUV域的中间处理图像;然后,再将从不同图像信号处理器输出的中间处理图像进行YUV域的融合,得到位于YUV域的目标图像。由于在YUV域融合之前,初始图像各自包含的通道信号不同且进行了一系列处理和颜色校正,从而可以实现图像色彩的最大还原和信噪比的最佳表现。
实施例5
图20示出了本申请实施例提供的又一种图像处理方法的流程示意图。如图20所示,本申请实施例提供了一种图像处理方法11,该图像处理方法11包括以下S111至S116。
S111、显示预览界面,预览界面包括第一控件。
S112、检测到对第一控件的第一操作。
第一控件和第一操作的描述与上述S11、S12中的描述相同,在此不再赘述。
S113、响应于第一操作,利用多光谱传感器中的有效像素区,获取第一初始图像和第二初始图像,其中,第一初始图像和第二初始图像分别包含的通道信号不同。
应理解,一般传感器会包含有一行或者几行不参与感光的像素,为了避免影响后续颜色还原的效果,可以将其排除,仅利用多光谱传感器中可以感光的有效像素区中的有效像素,来获取图像信号。其中,有效像素,即为多光谱传感器中可以感光的像素,有效像素区即为多光谱传感器中全部有效像素组成的区域。
其中,对第一初始图像和第二初始图像的描述与上述S23中的描述相同,在此不再赘述。第一初始图像和第二初始图像仅为本申请实施例提供的一种示例。
S114、在多光谱传感器中,对第一初始图像进行第一前端处理,得到位于RAW域的第一前端处理图像;并在多光谱传感器中,对第二初始图像进行第二前端处理,得到位于RAW域的第二前端处理图像。
其中,对第一前端处理图像和第二前端处理图像的描述与上述S24中的描述相同,在此不再赘述。
S115、在多光谱传感器中,将第一前端处理图像和第二前端处理图像进行第一融合处理,得到位于RAW域的第一融合图像并输出。
第一融合处理实际上为RAW域的融合处理。
应理解,第一融合图像位于RAW域,第一融合图像从多光谱传感器输出后,相对于其他ISP来说,也可以称为第三初始图像。
S116、对位于RAW域的第一融合图像进行第一后端处理,得到位于YUV域的目标图像。
其中,对第一后端处理的描述与上述S26中的描述相同,在此不再赘述。
应理解,第一后端处理可以在第四图像信号处理器ISP4中进行。
应理解,在多光谱传感器中进行第一前端处理和第二前端处理,以及进行第一融合处理,可以降低后续处理的计算量。例如,当第一后端处理在ISP4中进行时,若此前的第一前端处理和第二前端处理,以及进行第一融合处理均在多光谱传感器中已经进行了处理,则可以降低后续ISP4的计算量,进而降低功耗。
还应理解,目标图像,可以被作为拍摄图像在电子设备100的界面上进行显示,或者,仅进行存储,具体可以根据需要进行设置,本申请实施例对此不进行任何限制。
在该实施例中,在多光谱传感器中,基于有效像素区,确定包含通道信号不同的第一初始图像和第二初始图像;继续在多光谱传感器中,对第一初始图像和第二初始图像先各自进行前端处理,然后,再将前端处理后的图像进行RAW域的融合并输出;接着,对融合后的第一融合图像进行第一后端处理,使其从RAW域转换至YUV域。由于在多光谱传感器中对包含不同通道信号的初始图像进行了一系列处理和融合,从而可以实现图像色彩的最大还原和信噪比的最佳表现。此外,还可以降低后续处理的计算量,进而降低功耗。
还应理解,上述过程仅为一种示例,具体可以根据需要进行顺序上的调整,当然,还可以增加或减少步骤,本申请实施例对此不进行任何限制。
结合上述实施例1至实施例5,在进行第一融合处理、第二融合处理或第三融合处理时,为了达到更好的融合效果,本申请的各个实施例还可以包括以下内容:
例如,在图像处理方法2中,当所述多帧初始图像包括通道信号不同的第一初始图像和第二初始图像时,针对第一初始图像进行第一前端处理后的第一前端处理图像,以及第二初始图像进行第二前端处理后的第二前端处理图像,利用以下公式(1)进行融合:
I_f(i,j)=W_ij×I_c(i,j)+(1-W_ij)×I_r(i,j)  (1)
其中,(i,j)为图像中的像素坐标;I_c(i,j)为第一初始图像对应的第一前端处理图像,I_r(i,j)为第二初始图像对应的第二前端处理图像,W_ij为第一初始图像对应的第一前端处理图像所分配的权重,1-W_ij为第二初始图像对应的第二前端处理图像所分配的权重,I_f(i,j)为融合后的图像,也即第一融合图像。
这样,利用上述公式,将第一前端处理图像和第二前端处理图像中对应同一位置的像素两两进行融合,即可得到第一融合图像中全部像素的内容。
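公式(1)的逐像素融合可以用如下示意性的Python代码实现(草图;权重既可以是全局标量,也可以是与图像同尺寸的权重图,分别对应下文所述的全局操作与局部操作):

```python
import numpy as np

def fuse(i_c: np.ndarray, i_r: np.ndarray, w) -> np.ndarray:
    """按公式(1)融合:I_f(i,j) = W_ij × I_c(i,j) + (1 - W_ij) × I_r(i,j)。
    w为标量时对全图统一加权;w为与i_c同尺寸的数组时逐像素加权。"""
    return w * i_c + (1.0 - w) * i_r
```

例如,fuse(front1, front2, 0.6)表示第一前端处理图像在融合结果中占60%的权重。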
在此基础上,为了进一步提高融合效果,可以将权重W_ij进一步细化,例如,权重W_ij可以包括光照强度权重、色温权重和场景类型权重。
可选地,可以利用以下公式(2),确定权重W_ij:
W_ij=Wa_ij×para1+Wb_ij×para2+Wc_ij×para3  (2)
其中,Wa_ij为光照强度权重,Wa_ij=E/E_standard,E为拍摄环境中的照度,E_standard为预设的标准照度;Wb_ij为色温权重,Wb_ij=T/T_standard,T为拍摄环境中的色温,T_standard为预设的标准色温;Wc_ij为场景类型权重,不同场景类型对应的场景类型权重的大小不同,场景类型包括:人像、风景中的至少一项;para1、para2和para3为预设参数。
应理解,拍摄环境中的照度E可以通过自动曝光的统计值(auto exposure value,AE)进行换算。标准照度E_standard可以根据需要进行预设。例如,标准照度可以在电子设备出厂之前通过烧录的方式固化在OTP(one time programmable)存储器中。
拍摄环境中的色温T可以通过色温估计算法确定或者可以通过多光谱色温传感器采集得到。标准色温T_standard可以根据需要进行预设,例如,可以设定为5000K。
应理解,场景类型的种类可以根据需要进行设定,不同场景类型对应的场景类型权重也可以根据需要进行预设和更改,本申请实施例对此不进行任何限制。例如,当场景类型为人像时,可以定义对应的场景类型权重值小一些,以此来改善色相偏差。当场景类型为风景时,可以定义对应的场景类型权重值大一些,以此来提升信噪比。
应理解,上述光照强度权重、色温权重以及场景类型权重三个权重可以均为全局操作的系数,此时,针对每个像素,由于三个权重分别保持一致,从而实现全局操作。
当然,也可以将图像分割区域之后,给不同区域分配不同大小的光照强度权重、色温权重或场景类型权重,以实现局部操作。其中,分割方法可以根据需要进行选择,本申请实施例对此不进行任何限制。例如,可以从被处理的图像中以方框、圆、椭圆、不规则多边形等方式勾勒出需要处理的区域,将其作为感兴趣区域(region of interest,ROI),然后针对感兴趣区域和非感兴趣区域分配不同大小的权重。
此外,针对场景类型权重,当场景类型为HDR时,由于图像内容差异比较大,可以针对每个像素或分区域设定不同的场景类型权重值,以实现精细化调节。
其中,HDR对应的场景类型权重可以通过以下公式(3)进行确定:
Wc_ij=(GA_standard-GA_ij)/GA_standard  (3)
其中,GA_ij是指像素坐标为(i,j)的像素对应的灰度值,GA_standard为预设的标准灰度值。
应理解,如果进行的是RAW域融合,说明融合的两帧图像均位于RAW域,此时每个像素对应的单通道像素值即可作为灰度值。如果进行的是YUV域融合,说明进行融合的两帧图像均位于YUV域,此时每个像素对应的Y值即为灰度值。如果进行的是RGB域的融合,说明进行融合的两帧图像均位于RGB域,此时,每个像素对应的灰度值可以根据对应的三基色像素值进行确定。
示例性一,当场景类型为HDR模式时,可以确定每个像素对应的灰度值,然后利用上述公式(3)确定出对应的场景类型权重,然后,带入到上述公式(2)中,确定出每个像素对应的权重W_ij。
示例性二,当场景类型为HDR模式时,可以利用分割方法先划分出感兴趣区域和非感兴趣区域,然后,针对每个区域,可以通过上述公式(3)确定出每个像素对应的场景类型权重,然后,通过求平均值或其他方式确定出该区域对应的场景类型权重,再通过加权的方式或其他方式确定出整张图像的场景类型权重。
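上述权重计算可以用如下示意性的Python代码草图表达(其中标准照度、标准色温以及para1~para3的具体取值均为假设的示例值,实际应用中按需预设):

```python
import numpy as np

def scene_weight_hdr(gray: np.ndarray, ga_standard: float = 0.5) -> np.ndarray:
    """HDR场景下按公式(3)逐像素计算场景类型权重:
    Wc_ij = (GA_standard - GA_ij) / GA_standard,灰度值已归一化到[0, 1]。"""
    return (ga_standard - gray) / ga_standard

def fusion_weight(e, t, wc, e_standard=500.0, t_standard=5000.0,
                  para1=0.4, para2=0.3, para3=0.3):
    """按公式(2)合成总权重:W_ij = Wa_ij×para1 + Wb_ij×para2 + Wc_ij×para3。
    wc为标量时得到全局权重,为权重图时得到逐像素的权重图。"""
    wa = e / e_standard  # 光照强度权重,E可由自动曝光统计值换算
    wb = t / t_standard  # 色温权重,T可由色温估计算法或多光谱色温传感器得到
    return wa * para1 + wb * para2 + wc * para3
```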
应理解,上述仅为几种确定权重的示例,具体可以根据需要进行设计和调整,本申请实施例对此不进行任何限制。
上述对本申请实施例提供的图像处理方法进行了详细介绍,下面结合电子设备的显示界面介绍一下用户如何启用本申请实施例提供的图像处理方法。
图21为本申请实施例提供的一种电子设备的显示界面的示意图。
示例性的,响应于用户的点击操作,当电子设备100运行相机应用时,电子设备100显示如图21中的(a)所示的拍摄界面。用户可以在该界面上进行滑动操作,使得拍摄键11指示在拍摄选项“更多”上。
响应于用户针对拍摄选项“更多”的点击操作,电子设备100显示如图21中的(b)所示的拍摄界面,在该界面上显示有多个拍摄模式选项,例如:专业模式、全景模式、HDR模式、延时摄影模式、水印模式、色彩还原模式等。应理解,上述拍摄模式选项仅为示例,具体可以根据需要进行设定和修改,本申请实施例对此不进行任何限制。
响应于用户针对“色彩还原”模式的点击操作,电子设备100可以在拍摄时启用本申请实施例提供的图像处理方法相关的程序。
图22为本申请实施例提供的另一种电子设备的显示界面的示意图。
示例性的,响应于用户的点击操作,当电子设备100运行相机应用时,电子设备100显示如图22中的(a)所示的拍摄界面,在该拍摄界面的右上角显示有“设置”按钮。用户可以在该界面上点击“设置”按钮,进入设置界面进行相关功能的设置。
响应于用户针对“设置”按钮的点击操作,电子设备100显示如图22中的(b)所示的设置界面,在该界面上显示有多个功能,例如,照片比例用于实现拍照模式下对照片比例的设定,声控拍照用于实现拍照模式下是否通过声音进行触发的设定,视频分辨率用于实现对视频分辨率的调整,视频帧率用于实现对视频帧率的调整,此外还有通用的参考线、水平仪、色彩还原等。
响应于用户针对“色彩还原”对应的开关按钮的拖动操作,电子设备100可以在拍摄时启用本申请实施例提供的图像处理方法相关的程序。
应理解,上述仅为用户从电子设备的显示界面启用本申请实施例提供的图像处理方法的两种示例,当然也可以通过其他方式来启用本申请实施例提供的图像处理方法,或者,也可以在拍摄过程默认直接使用本申请实施例提供的图像处理方法,本申请实施例对此不进行任何限制。
结合上述实施例,图23为本申请实施例提供的一种色彩还原误差示意图。横轴表示不同的色温光源,纵轴表示色彩还原误差(Delta_E)。
如图23所示,在大多数色温光源(例如D65/D50/CWF/A/Tungsten)的照射下,利用本申请实施例提供的图像处理方法处理后,获取的目标图像的色彩还原误差更小,色彩还原更准确。
比如,在色温光源D65下,第一初始图像单独对应的成像(如mean-A)的色彩还原误差值为7.5,第二初始图像单独对应的成像(如mean-B)的色彩还原误差值为4,而利用本申请实施例提供的图像处理方法处理后,获取的目标图像(如mean-ABfusion)的色彩还原误差值为3,相对于其他两者来说,本申请获取的图像的色彩还原误差最小,色彩还原效果最好。
图24为本申请实施例提供的一种在色温光源D65下关于信噪比的示意图。横轴表示照度(lux),纵轴为信噪比(SNR)。
如图24所示,在色温光源D65下,直方图中的每组数据代表不同照度下图像对应的信噪比大小,其中,每组数据均包含第一初始图像(如A)对应的信噪比、第二初始图像(如B)对应的信噪比以及融合后的目标图像(如AB Fusion)对应的信噪比。由图可知,相对于单独的初始图像,利用本申请实施例提供的图像处理方法获取的融合后的目标图像的信噪比相对要高一些,说明其信噪比表现相对要好一些。
例如,在照度为300时,第一初始图像和第二初始图像对应的信噪比均近似为38,而融合后的目标图像的信噪比近似为45,信噪比得以提升,说明利用本申请实施例提供的图像处理方法可以有效改善图像的信噪比表现。
上文结合图1至图24详细描述了本申请实施例提供的图像处理方法以及相关的显示界面和效果图;下面将结合图25至图28详细描述本申请实施例提供的电子设备、装置和芯片。应理解,本申请实施例中的电子设备、装置和芯片可以执行前述本申请实施例的各种图像处理方法,即以下各种产品的具体工作过程,可以参考前述方法实施例中的对应过程。
图25示出了一种适用于本申请的电子设备的硬件系统。电子设备100可用于实现上述方法实施例中描述的图像处理方法。
电子设备100可以是手机、智慧屏、平板电脑、可穿戴电子设备、车载电子设备、增强现实(augmented reality,AR)设备、虚拟现实(virtual reality,VR)设备、笔记本电脑、超级移动个人计算机(ultra-mobile personal computer,UMPC)、上网本、个人数字助理(personal digital assistant,PDA)、投影仪等等,本申请实施例对电子设备100的具体类型不作任何限制。
电子设备100可以包括处理器110,外部存储器接口120,内部存储器121,通用串行总线(universal serial bus,USB)接口130,充电管理模块140,电源管理模块141,电池142,天线1,天线2,移动通信模块150,无线通信模块160,音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,传感器模块180,按键190,马达191,指示器192,摄像头193,显示屏194,以及用户标识模块(subscriber identification module,SIM)卡接口195等。其中传感器模块180可以包括压力传感器180A,陀螺仪传感器180B,气压传感器180C,磁传感器180D,加速度传感器180E,距离传感器180F,接近光传感器180G,指纹传感器180H,温度传感器180J,触摸传感器180K,环境光传感器180L,骨传导传感器180M等。
需要说明的是,图25所示的结构并不构成对电子设备100的具体限定。在本申请另一些实施例中,电子设备100可以包括比图25所示的部件更多或更少的部件,或者,电子设备100可以包括图25所示的部件中某些部件的组合,或者,电子设备100可以包括图25所示的部件中某些部件的子部件。图25所示的部件可以以硬件、软件、或软件和硬件的组合实现。
处理器110可以包括一个或多个处理单元。例如,处理器110可以包括以下处理单元中的至少一个:应用处理器(application processor,AP)、调制解调处理器、图形处理器(graphics processing unit,GPU)、图像信号处理器(image signal processor,ISP)、控制器、视频编解码器、数字信号处理器(digital signal processor,DSP)、基带处理器、神经网络处理器(neural-network processing unit,NPU)。其中,不同的处理单元可以是独立的器件,也可以是集成的器件。
其中,控制器可以是电子设备100的神经中枢和指挥中心。控制器可以根据指令操作码和时序信号,产生操作控制信号,完成取指令和执行指令的控制。
处理器110中还可以设置存储器,用于存储指令和数据。在一些实施例中,处理器110中的存储器为高速缓冲存储器。该存储器可以保存处理器110刚用过或循环使用的指令或数据。如果处理器110需要再次使用该指令或数据,可从所述存储器中直接调用。避免了重复存取,减少了处理器110的等待时间,因而提高了系统的效率。
在本申请实施例中,处理器110可以执行显示预览界面,预览界面包括第一控件;检测到对第一控件的第一操作;响应于第一操作,获取多帧初始图像,多帧初始图像包含的通道信号不同;分别对多帧初始图像中的每帧初始图像进行处理,得到对应的处理图像;将多帧处理图像进行融合,得到目标图像。
图25所示的各模块间的连接关系只是示意性说明,并不构成对电子设备100的各模块间的连接关系的限定。可选地,电子设备100的各模块也可以采用上述实施例中多种连接方式的组合。
电子设备100的无线通信功能可以通过天线1、天线2、移动通信模块150、无线通信模块160、调制解调处理器以及基带处理器等器件实现。
天线1和天线2用于发射和接收电磁波信号。电子设备100中的每个天线可用于覆盖单个或多个通信频带。不同的天线还可以复用,以提高天线的利用率。例如:可以将天线1复用为无线局域网的分集天线。在另外一些实施例中,天线可以和调谐开关结合使用。
电子设备100可以通过GPU、显示屏194以及应用处理器实现显示功能。GPU为图像处理的微处理器,连接显示屏194和应用处理器。GPU用于执行数学和几何计算,用于图形渲染。处理器110可包括一个或多个GPU,其执行程序指令以生成或改变显示信息。
显示屏194可以用于显示图像或视频。
电子设备100可以通过ISP、摄像头193、视频编解码器、GPU、显示屏194以及应用处理器等实现拍摄功能。
ISP用于处理摄像头193反馈的数据。例如,拍照时,打开快门,光线通过镜头被传递到摄像头感光元件上,光信号转换为电信号,摄像头感光元件将所述电信号传递给ISP处理,转化为肉眼可见的图像。ISP可以对图像的噪点、亮度和色彩进行算法优化,ISP还可以优化拍摄场景的曝光和色温等参数。在一些实施例中,ISP可以设置在摄像头193中。
摄像头193用于捕获静态图像或视频。物体通过镜头生成光学图像投射到感光元件。感光元件可以是电荷耦合器件(charge coupled device,CCD)或互补金属氧化物半导体(complementary metal-oxide-semiconductor,CMOS)光电晶体管。感光元件把光信号转换成电信号,之后将电信号传递给ISP转换成数字图像信号。ISP将数字图像信号输出到DSP加工处理。DSP将数字图像信号转换成标准的红绿蓝(red green blue,RGB),YUV等格式的图像信号。在一些实施例中,电子设备100可以包括1个或N个摄像头193,N为大于1的正整数。
数字信号处理器用于处理数字信号,除了可以处理数字图像信号,还可以处理其他数字信号。例如,当电子设备100在频点选择时,数字信号处理器用于对频点能量进行傅里叶变换等。
视频编解码器用于对数字视频压缩或解压缩。电子设备100可以支持一种或多种视频编解码器。这样,电子设备100可以播放或录制多种编码格式的视频,例如:动态图像专家组(moving picture experts group,MPEG)1、MPEG2、MPEG3和MPEG4。
上文详细描述了电子设备100的硬件系统,下面介绍电子设备100的软件系统。
图26是本申请实施例提供的电子设备的软件系统的示意图。
如图26所示,系统架构中可以包括应用层210、应用框架层220、硬件抽象层230、驱动层240以及硬件层250。
应用层210可以包括相机应用程序或者其他应用程序,其他应用程序包括但不限于:相机、图库等应用程序。
应用框架层220可以向应用层的应用程序提供应用程序编程接口(application programming interface,API)和编程框架;应用框架层可以包括一些预定义的函数。
例如,应用框架层220可以包括相机访问接口;相机访问接口中可以包括相机管理与相机设备;其中,相机管理可以用于提供管理相机的访问接口;相机设备可以用于提供访问相机的接口。
硬件抽象层230用于将硬件抽象化。比如,硬件抽象层可以包括相机硬件抽象层以及其他硬件设备抽象层;相机硬件抽象层可以调用相机算法库中的相机算法。
例如,硬件抽象层230中包括相机硬件抽象层2301与相机算法库;相机算法库中可以包括软件算法;比如,算法1、算法2等可以是用于图像处理的软件算法。
驱动层240用于为不同硬件设备提供驱动。例如,驱动层可以包括相机设备驱动、数字信号处理器驱动和图形处理器驱动。
硬件层250可以包括多个图像传感器(sensor)、多个图像信号处理器、数字信号处理器、图形处理器以及其他硬件设备。
例如,硬件层250包括传感器和图像信号处理器;传感器中可以包括传感器1、传感器2、深度传感器(time of flight,TOF)、多光谱传感器等。图像信号处理器中可以包括图像信号处理器1、图像信号处理器2等。
在本申请中,通过调用硬件抽象层230中的硬件抽象层接口,可以实现硬件抽象层230上方的应用程序层210、应用程序框架层220与下方的驱动层240、硬件层250的连接,实现摄像头数据传输及功能控制。
其中,在硬件抽象层230中的摄像头硬件接口层中,厂商可以根据需求在此定制功能。摄像头硬件接口层相比硬件抽象层接口,更加高效、灵活、低延迟,也能更加丰富地调用ISP和GPU,来实现图像处理。其中,输入硬件抽象层230中的图像可以来自图像传感器,也可以来自存储的图片。
硬件抽象层230中的调度层,包含了通用功能性接口,用于实现管理和控制。
硬件抽象层230中的摄像头服务层,用于访问ISP和其他硬件的接口。
下面结合捕获拍照场景,示例性说明电子设备100软件以及硬件的工作流程。
应用程序层中的相机应用可以以图标的方式显示在电子设备100的屏幕上。当相机应用的图标被用户点击以进行触发时,电子设备100开始运行相机应用。当相机应用运行在电子设备100上时,相机应用调用应用框架层220中的相机应用对应的接口,然后,通过调用硬件抽象层230启动摄像头驱动,开启电子设备100上的包含多光谱传感器的摄像头193,并通过多光谱传感器采集通道不同的多帧初始图像。此时,多光谱传感器可按一定工作频率进行采集,并将采集的图像在多光谱传感器内部或传输至1路或多路图像信号处理器中进行处理,然后,再将处理后的目标图像进行保存和/或传输至显示屏进行显示。
下面介绍本申请实施例提供的一种用于实现上述图像处理方法的图像处理装置300。图27是本申请实施例提供的图像处理装置300的示意图。
如图27所示,图像处理装置300包括显示单元310、获取单元320和处理单元330。
其中,显示单元310用于显示预览界面,预览界面包括第一控件。
获取单元320用于检测到对第一控件的第一操作。
处理单元330用于响应于第一操作,获取多帧初始图像,多帧初始图像包含的通道信号不同。
处理单元330还用于分别对多帧初始图像中的每帧初始图像进行处理,得到对应的处理图像;将多帧处理图像进行融合,得到目标图像。
需要说明的是,上述图像处理装置300以功能单元的形式体现。这里的术语“单元”可以通过软件和/或硬件形式实现,对此不作具体限定。
例如,“单元”可以是实现上述功能的软件程序、硬件电路或二者结合。所述硬件电路可能包括应用特有集成电路(application specific integrated circuit,ASIC)、电子电路、用于执行一个或多个软件或固件程序的处理器(例如共享处理器、专有处理器或组处理器等)和存储器、合并逻辑电路和/或其它支持所描述的功能的合适组件。
因此,在本申请的实施例中描述的各示例的单元,能够以电子硬件、或者计算机软件和电子硬件的结合来实现。这些功能究竟以硬件还是软件方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请的范围。
本申请实施例还提供一种计算机可读存储介质,所述计算机可读存储介质中存储有计算机指令;当所述计算机可读存储介质在图像处理装置300上运行时,使得该图像处理装置300执行前述所示的图像处理方法。
所述计算机指令可以存储在计算机可读存储介质中,或者从一个计算机可读存储介质向另一个计算机可读存储介质传输,例如,所述计算机指令可以从一个网站站点、计算机、服务器或者数据中心通过有线(例如同轴电缆、光纤、数字用户线(digital subscriber line,DSL))或无线(例如红外、无线、微波等)方式向另一个网站站点、计算机、服务器或数据中心进行传输。所述计算机可读存储介质可以是计算机能够存取的任何可用介质或者是包含一个或多个可用介质集成的服务器、数据中心等数据存储设备。所述可用介质可以是磁性介质(例如,软盘、硬盘、磁带)、光介质、或者半导体介质(例如固态硬盘(solid state disk,SSD))等。
本申请实施例还提供了一种包含计算机指令的计算机程序产品,当其在图像处理装置300上运行时,使得图像处理装置300可以执行前述所示的图像处理方法。
图28为本申请实施例提供的一种芯片的结构示意图。图28所示的芯片可以为通用处理器,也可以为专用处理器。该芯片包括处理器401。其中,处理器401用于支持图像处理装置300执行前述所示的技术方案。
可选的,该芯片还包括收发器402,收发器402用于接受处理器401的控制,用于支持图像处理装置300执行前述所示的技术方案。
可选的,图28所示的芯片还可以包括:存储介质403。
需要说明的是,图28所示的芯片可以使用下述电路或者器件来实现:一个或多个现场可编程门阵列(field programmable gate array,FPGA)、可编程逻辑器件(programmable logic device,PLD)、控制器、状态机、门逻辑、分立硬件部件、任何其他适合的电路、或者能够执行本申请通篇所描述的各种功能的电路的任意组合。
上述本申请实施例提供的电子设备、图像处理装置300、计算机存储介质、计算机程序产品、芯片均用于执行上文所提供的方法,因此,其所能达到的有益效果可参考上文所提供的方法对应的有益效果,在此不再赘述。
应理解,上述只是为了帮助本领域技术人员更好地理解本申请实施例,而非要限制本申请实施例的范围。本领域技术人员根据所给出的上述示例,显然可以进行各种等价的修改或变化,例如,上述图像处理方法的各个实施例中某些步骤可以是不必须的,或者可以新加入某些步骤等。或者上述任意两种或者任意多种实施例的组合。这样的修改、变化或者组合后的方案也落入本申请实施例的范围内。
还应理解,上文对本申请实施例的描述着重于强调各个实施例之间的不同之处,未提到的相同或相似之处可以互相参考,为了简洁,这里不再赘述。
还应理解,上述各过程的序号的大小并不意味着执行顺序的先后,各过程的执行顺序应以其功能和内在逻辑确定,而不应对本申请实施例的实施过程构成任何限定。
还应理解,本申请实施例中,“预先设定”、“预先定义”可以通过在设备(例如,包括电子设备)中预先保存相应的代码、表格或其他可用于指示相关信息的方式来实现,本申请对于其具体的实现方式不做限定。
还应理解,本申请实施例中的方式、情况、类别以及实施例的划分仅是为了描述的方便,不应构成特别的限定,各种方式、类别、情况以及实施例中的特征在不矛盾的情况下可以相结合。
还应理解,在本申请的各个实施例中,如果没有特殊说明以及逻辑冲突,不同的实施例之间的术语和/或描述具有一致性、且可以相互引用,不同的实施例中的技术特征根据其内在的逻辑关系可以组合形成新的实施例。
最后应说明的是:以上所述,仅为本申请的具体实施方式,但本申请的保护范围并不局限于此,任何在本申请揭露的技术范围内的变化或替换,都应涵盖在本申请的保护范围之内。因此,本申请的保护范围应以所述权利要求的保护范围为准。

Claims (19)

  1. 一种图像处理方法,其特征在于,应用于包括多光谱传感器的电子设备,所述方法包括:
    显示预览界面,所述预览界面包括第一控件;
    检测到对所述第一控件的第一操作;
    响应于所述第一操作,获取多帧初始图像,所述多帧初始图像包含的通道信号不同;
    分别对所述多帧初始图像中的每帧初始图像进行处理,得到各自对应的处理图像;
    将多帧所述处理图像进行融合,得到目标图像。
  2. 根据权利要求1所述的图像处理方法,其特征在于,
    所述分别对所述多帧初始图像中的每帧初始图像进行处理,包括:
    分别对所述多帧初始图像中的每帧初始图像进行前端处理,得到位于RAW域的前端处理图像;
    所述将多帧所述处理图像进行融合,得到目标图像,包括:
    将所述前端处理图像进行RAW域融合处理,得到位于RAW域的融合图像;
    对所述位于RAW域的融合图像进行第一后端处理,得到位于YUV域的所述目标图像。
  3. 根据权利要求1所述的图像处理方法,其特征在于,
    所述分别对所述多帧初始图像中的每帧初始图像进行处理,包括:
    分别对所述多帧初始图像中的每帧初始图像进行前端处理和颜色校正,得到位于RGB域的校正图像;所述颜色校正用于将图像从RAW域转换成RGB域;
    所述将多帧所述处理图像进行融合,得到目标图像,包括:
    将所述校正图像进行RGB域融合处理,得到位于RGB域的融合图像;
    对所述位于RGB域的融合图像进行第二后端处理,得到位于YUV域的所述目标图像。
  4. 根据权利要求1所述的图像处理方法,其特征在于,
    所述分别对所述多帧初始图像中的每帧初始图像进行处理,包括:
    分别对所述多帧初始图像中的每帧初始图像进行前端处理和第一后端处理,得到位于YUV域的中间处理图像;所述第一后端处理用于将图像转换至YUV域;
    所述将多帧所述处理图像进行融合,得到目标图像,包括:
    将所述中间处理图像进行YUV域融合处理,得到位于YUV域的融合图像,所述融合图像为所述目标图像。
  5. 根据权利要求1至4中任一项所述的图像处理方法,其特征在于,所述方法还包括:
    在同一图像信号处理器中,分别对所述多帧初始图像中的每帧初始图像进行处理,得到各自对应的处理图像,以及将多帧所述处理图像进行融合,得到所述目标图像。
  6. 根据权利要求1至4中任一项所述的图像处理方法,其特征在于,所述方法还包括:
    在不同图像信号处理器中,分别对所述多帧初始图像中不同的初始图像进行处理,得到各自对应的处理图像。
  7. 根据权利要求6所述的图像处理方法,其特征在于,所述方法还包括:
    利用多光谱传感器,获取待预处理图像;
    对所述待预处理图像进行预处理,得到所述多帧初始图像;所述预处理用于转换所述待预处理图像包含的通道信号。
  8. 根据权利要求2所述的图像处理方法,其特征在于,所述方法还包括:
    利用多光谱传感器,获取多个图像信号;
    在所述多光谱传感器中,根据所述多个图像信号,确定所述多帧初始图像,以及分别对所述多帧初始图像中的每帧初始图像进行前端处理,得到所述位于RAW域的前端处理图像;
    在所述多光谱传感器中,将所述前端处理图像进行RAW域融合处理,得到所述位于RAW域的融合图像。
  9. 根据权利要求1至8中任一项所述的图像处理方法,其特征在于,所述方法还包括:
    当所述多帧初始图像包括通道信号不同的第一初始图像和第二初始图像时,利用以下公式进行融合:
    I_f(i,j)=W_ij×I_c(i,j)+(1-W_ij)×I_r(i,j)
    其中,(i,j)为像素坐标;I_c(i,j)为所述第一初始图像对应的处理图像,I_r(i,j)为所述第二初始图像对应的处理图像,W_ij为所述第一初始图像对应的处理图像所分配的权重,1-W_ij为所述第二初始图像对应的处理图像所分配的权重,I_f(i,j)为融合后的图像。
  10. 根据权利要求9所述的图像处理方法,其特征在于,所述方法还包括:
    利用公式W_ij=Wa_ij×para1+Wb_ij×para2+Wc_ij×para3,确定W_ij;
    其中,Wa_ij为光照强度权重,Wa_ij=E/E_standard,E为拍摄环境中的照度,E_standard为预设的标准照度;Wb_ij为色温权重,Wb_ij=T/T_standard,T为拍摄环境中的色温,T_standard为预设的标准色温;Wc_ij为场景类型权重,不同场景类型对应的所述场景类型权重的大小不同,所述场景类型包括:人像、风景中的至少一项;para1、para2和para3为预设参数。
  11. 根据权利要求10所述的图像处理方法,其特征在于,所述方法还包括:
    当所述场景类型为HDR时,Wc_ij=(GA_standard-GA_ij)/GA_standard;
    其中,GA_ij是指像素坐标为(i,j)的像素对应的灰度值,GA_standard为预设的标准灰度值。
  12. 根据权利要求2至4、8中任一项所述的图像处理方法,其特征在于,所述前端处理包括:动态坏点补偿、降噪、镜头阴影校正和宽动态范围调整中的至少一项。
  13. 根据权利要求2或4所述的图像处理方法,其特征在于,所述第一后端处理包括:颜色校正和RGB域转YUV域。
  14. 根据权利要求3所述的图像处理方法,其特征在于,所述第二后端处理包括:RGB域转YUV域。
  15. 根据权利要求13或14所述的图像处理方法,其特征在于,第一后端处理、第二后端处理均还包括:伽马校正和风格变换中的至少一项。
  16. 一种电子设备,其特征在于,包括多光谱传感器、处理器和存储器;
    所述多光谱传感器,用于获取多帧初始图像,所述多帧初始图像包含的通道信号不同;
    所述存储器,用于存储可在所述处理器上运行的计算机程序;
    所述处理器,用于执行如权利要求1至15中任一项所述的图像处理方法。
  17. 一种芯片,其特征在于,包括:处理器,用于从存储器中调用并运行计算机程序,使得安装有所述芯片的设备执行如权利要求1至15中任一项所述的图像处理方法。
  18. 一种计算机可读存储介质,其特征在于,所述计算机可读存储介质存储有计算机程序,所述计算机程序包括程序指令,所述程序指令当被处理器执行时,使所述处理器执行如权利要求1至15中任一项所述的图像处理方法。
  19. 一种计算机程序产品,其特征在于,所述计算机程序产品包括计算机程序代码,当所述计算机程序代码被处理器执行时,使得处理器执行权利要求1至15中任一项所述的图像处理方法。
PCT/CN2022/116201 2021-09-10 2022-08-31 图像处理方法及其相关设备 WO2023036034A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP22862315.3A EP4195643A4 (en) 2021-09-10 2022-08-31 IMAGE PROCESSING METHOD AND ASSOCIATED DEVICE THEREFOR
US18/026,679 US20230342895A1 (en) 2021-09-10 2022-08-31 Image processing method and related device thereof

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN202111063195 2021-09-10
CN202111063195.0 2021-09-10
CN202210108251.6 2022-01-28
CN202210108251.6A CN115802183B (zh) 2021-09-10 2022-01-28 图像处理方法及其相关设备

Publications (1)

Publication Number Publication Date
WO2023036034A1 (zh)

Family

ID=85431068

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/116201 WO2023036034A1 (zh) 2021-09-10 2022-08-31 图像处理方法及其相关设备

Country Status (4)

Country Link
US (1) US20230342895A1 (zh)
EP (1) EP4195643A4 (zh)
CN (1) CN115802183B (zh)
WO (1) WO2023036034A1 (zh)


Also Published As

Publication number Publication date
EP4195643A4 (en) 2024-05-01
EP4195643A1 (en) 2023-06-14
US20230342895A1 (en) 2023-10-26
CN115802183A (zh) 2023-03-14
CN115802183B (zh) 2023-10-20


Legal Events

ENP Entry into the national phase: Ref document number: 2022862315; Country of ref document: EP; Effective date: 20230306
NENP Non-entry into the national phase: Ref country code: DE