WO2022036817A1 - 图像处理方法、图像处理系统、电子设备及可读存储介质 - Google Patents

图像处理方法、图像处理系统、电子设备及可读存储介质 Download PDF

Info

Publication number
WO2022036817A1
WO2022036817A1 (application PCT/CN2020/120025, also referenced as CN2020120025W)
Authority
WO
WIPO (PCT)
Prior art keywords: pixel, image, color, converted, pixels
Application number: PCT/CN2020/120025
Other languages: English (en), French (fr)
Inventors: 杨鑫, 李小涛
Original Assignee: Oppo广东移动通信有限公司
Application filed by Oppo广东移动通信有限公司
Priority to EP20950021.4A (EP4202822A4)
Publication of WO2022036817A1
Priority to US18/146,949 (US11758289B2)

Classifications

    • G06T 3/4023: Scaling of whole images or parts thereof, e.g. expanding or contracting, based on decimating or inserting pixels or lines of pixels
    • G06T 3/4015: Image demosaicing, e.g. colour filter arrays [CFA] or Bayer patterns
    • G06T 5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T 7/90: Determination of colour characteristics
    • H04N 23/84: Camera processing pipelines; components thereof for processing colour signals
    • H04N 23/843: Demosaicing, e.g. interpolating colour pixel values
    • H04N 25/133: Arrangement of colour filter arrays [CFA] characterised by the spectral characteristics of the filter elements, including elements passing panchromatic light, e.g. filters passing white light
    • H04N 25/702: SSIS architectures characterised by non-identical, non-equidistant or non-planar pixel layout
    • G06T 2207/10024: Image acquisition modality, color image
    • G06T 2207/20028: Bilateral filtering
    • H04N 25/60: Noise processing, e.g. detecting, correcting, reducing or removing noise

Definitions

  • the present application relates to the technical field of image processing, and in particular, to an image processing method, an image processing system, an electronic device, and a readable storage medium.
  • a camera may be provided in an electronic device such as a mobile phone to realize a photographing function.
  • An image sensor for receiving light can be provided inside the camera.
  • a filter array may be provided in the image sensor.
  • Embodiments of the present application provide an image processing method, an image processing system, an electronic device, and a computer-readable storage medium.
  • Embodiments of the present application provide an image processing method for an image sensor.
  • the image sensor includes a pixel array including a plurality of panchromatic photosensitive pixels and a plurality of color photosensitive pixels.
  • the color photosensitive pixel includes a first color photosensitive pixel, a second color photosensitive pixel, and a third color photosensitive pixel with different spectral responses; the color photosensitive pixel has a narrower spectral response than the panchromatic photosensitive pixel, and both the second color photosensitive pixel and the third color photosensitive pixel have a narrower spectral response than the first color photosensitive pixel.
  • the image processing method includes: exposing the pixel array to obtain a first image, the first image including full-color image pixels generated by the full-color photosensitive pixels, first color image pixels generated by the first color photosensitive pixels, second color image pixels generated by the second color photosensitive pixels, and third color image pixels generated by the third color photosensitive pixels; processing the full-color image pixels in the first image to convert them into first color image pixels to obtain a second image; processing the second color image pixels and the third color image pixels in the second image to convert them into first color image pixels to obtain a third image; processing the third image according to the first image to obtain a second color intermediate image and a third color intermediate image, the second color intermediate image including the second color image pixels and the third color intermediate image including the third color image pixels; and fusing the third image, the second color intermediate image, and the third color intermediate image to obtain a target image, the target image including a plurality of color image pixels arranged in a Bayer array.
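For orientation, the five claimed steps can be read as the following processing skeleton. This is a minimal sketch, not the patented implementation: the callables `w_to_a`, `bc_to_a`, `restore_bc` and `fuse` are hypothetical stand-ins for steps 02 to 05, and the single-plane `first_image` layout is an assumption.

```python
import numpy as np

def rgbw_to_bayer(first_image: np.ndarray, w_to_a, bc_to_a, restore_bc, fuse) -> np.ndarray:
    """Illustrative skeleton of the claimed flow (steps 01 to 05)."""
    # Step 02: convert every full-color (panchromatic) image pixel into a first-color pixel.
    second_image = w_to_a(first_image)
    # Step 03: convert the second- and third-color image pixels into first-color pixels.
    third_image = bc_to_a(second_image)
    # Step 04: recover second- and third-color intermediate images from the third image,
    # guided by the first image (e.g. by bilateral filtering, see step 041 below).
    second_color_mid, third_color_mid = restore_bc(third_image, first_image)
    # Step 05: fuse the three planes into a target image whose pixels form a Bayer array.
    return fuse(third_image, second_color_mid, third_color_mid)
```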
  • the embodiments of the present application provide an image processing system.
  • the image processing system includes an image sensor and a processor.
  • the image sensor includes a pixel array including a plurality of panchromatic photosensitive pixels and a plurality of color photosensitive pixels.
  • the color photosensitive pixel includes a first color photosensitive pixel, a second color photosensitive pixel, and a third color photosensitive pixel with different spectral responses; the color photosensitive pixel has a narrower spectral response than the panchromatic photosensitive pixel, and both the second color photosensitive pixel and the third color photosensitive pixel have a narrower spectral response than the first color photosensitive pixel.
  • the pixel array is exposed to obtain a first image, and the first image includes full-color image pixels generated by the full-color photosensitive pixels, first color image pixels generated by the first color photosensitive pixels, second color image pixels generated by the second color photosensitive pixels, and third color image pixels generated by the third color photosensitive pixels.
  • the processor is configured to: process the full-color image pixels in the first image to convert them into first color image pixels to obtain a second image; process the second color image pixels and the third color image pixels in the second image to convert them into first color image pixels to obtain a third image; process the third image according to the first image to obtain a second color intermediate image and a third color intermediate image, the second color intermediate image including the second color image pixels and the third color intermediate image including the third color image pixels; and fuse the third image, the second color intermediate image, and the third color intermediate image to obtain a target image, the target image including a plurality of color image pixels arranged in a Bayer array.
  • Embodiments of the present application provide an electronic device.
  • the electronic device includes a lens, a casing and the above-mentioned image processing system.
  • the lens and the image processing system are combined with the casing, and the lens cooperates with the image sensor of the image processing system to form an image.
  • Embodiments of the present application provide a non-volatile computer-readable storage medium containing a computer program.
  • when executed by a processor, the computer program causes the processor to execute the above-mentioned image processing method.
  • FIG. 1 is a schematic flowchart of an image processing method according to an embodiment of the present application.
  • FIG. 2 is a schematic structural diagram of an image processing system according to an embodiment of the present application.
  • FIG. 3 is a schematic diagram of a pixel array according to an embodiment of the present application.
  • FIG. 4 is a schematic cross-sectional view of a photosensitive pixel according to an embodiment of the present application.
  • FIG. 5 is a pixel circuit diagram of a photosensitive pixel according to an embodiment of the present application.
  • FIG. 6 is a schematic diagram of the arrangement of minimum repeating units in a pixel array according to an embodiment of the present application.
  • FIG. 7 is a schematic diagram of the arrangement of the minimum repeating unit in the pixel array according to an embodiment of the present application.
  • FIG. 8 is a schematic diagram of the arrangement of minimum repeating units in another pixel array according to an embodiment of the present application.
  • FIG. 9 is a schematic diagram of the arrangement of minimum repeating units in another pixel array according to an embodiment of the present application.
  • FIG. 10 is a schematic diagram of the arrangement of minimum repeating units in yet another pixel array according to an embodiment of the present application.
  • FIG. 11 is a schematic diagram of the arrangement of minimum repeating units in another pixel array according to an embodiment of the present application.
  • FIG. 12 is a schematic flowchart of another image processing method according to an embodiment of the present application.
  • FIG. 13 is a schematic diagram of converting a full-color image pixel into a first-color image pixel according to an embodiment of the present application.
  • FIG. 14 is another schematic diagram of converting a full-color image pixel into a first-color image pixel according to an embodiment of the present application.
  • FIG. 15 is a schematic flowchart of another image processing method according to an embodiment of the present application.
  • FIG. 16 is a schematic flowchart of another image processing method according to an embodiment of the present application.
  • FIG. 17 is a schematic diagram of acquiring feature directions according to an embodiment of the present application.
  • FIG. 18 is another schematic diagram of converting a full-color image pixel into a first-color image pixel according to an embodiment of the present application.
  • FIG. 19 is another schematic diagram of converting a full-color image pixel into a first-color image pixel according to an embodiment of the present application.
  • FIG. 21 is another schematic diagram of converting a full-color image pixel into a first-color image pixel according to an embodiment of the present application.
  • FIG. 22 is another schematic diagram of converting a full-color image pixel into a first-color image pixel according to an embodiment of the present application.
  • FIG. 23 is a schematic flowchart of another image processing method according to an embodiment of the present application.
  • FIG. 24 is another schematic diagram of converting a full-color image pixel into a first-color image pixel according to an embodiment of the present application.
  • FIG. 25 is another schematic diagram of converting a full-color image pixel into a first-color image pixel according to an embodiment of the present application.
  • FIG. 26 is a schematic flowchart of another image processing method according to an embodiment of the present application.
  • FIG. 27 is another schematic diagram of converting a full-color image pixel into a first-color image pixel according to an embodiment of the present application.
  • FIG. 29 is a schematic diagram of converting a first image into a third image according to an embodiment of the present application.
  • FIG. 30 is a schematic flowchart of another image processing method according to an embodiment of the present application.
  • FIG. 31 is another schematic diagram of converting a second color image pixel into a first color image pixel according to an embodiment of the present application.
  • FIG. 35 is a schematic diagram of acquiring a second color intermediate image and a third color intermediate image according to the first image and the third image according to an embodiment of the present application.
  • FIGS. 37 to 38 are further schematic diagrams of acquiring the second color intermediate image and the third color intermediate image according to the first image and the third image according to an embodiment of the present application.
  • FIG. 39 is a schematic diagram of fusion of a third image, a second color intermediate image, and a third color intermediate image according to an embodiment of the present application.
  • FIG. 40 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
  • FIG. 41 is a schematic diagram of interaction between a non-volatile computer-readable storage medium and a processor according to an embodiment of the present application.
  • the present application provides an image processing method for an image sensor 10 .
  • the image sensor 10 includes a pixel array 11 including a plurality of panchromatic photosensitive pixels W and a plurality of color photosensitive pixels.
  • the color photosensitive pixel includes a first color photosensitive pixel A, a second color photosensitive pixel B, and a third color photosensitive pixel C with different spectral responses; the color photosensitive pixel has a narrower spectral response than the panchromatic photosensitive pixel W, and both the second color photosensitive pixel B and the third color photosensitive pixel C have a narrower spectral response than the first color photosensitive pixel A.
  • the image processing method includes:
  • the pixel array 11 is exposed to obtain a first image.
  • the first image includes full-color image pixels generated by the full-color photosensitive pixels, first color image pixels generated by the first color photosensitive pixels, second color image pixels generated by the second color photosensitive pixels, and third color image pixels generated by the third color photosensitive pixels;
  • step 02, processing the full-color image pixels in the first image to convert them into first color image pixels to obtain a second image, includes:
  • obtaining the feature direction of the full-color image pixel to be converted when the full-color image pixel is in a non-flat area;
  • when the feature direction is the first direction and the first color image pixel closest in the feature direction is on the first side of the full-color image pixel to be converted, obtaining a first deviation according to the pixel value of the full-color image pixel to be converted and the pixel value of one full-color image pixel adjacent to it on the first side, and obtaining a second deviation according to the pixel value of the full-color image pixel to be converted and the pixel values of two full-color image pixels adjacent to it on the second side, the first side being opposite to the second side;
  • obtaining a first weight according to the first deviation and a preset weight function, and obtaining a second weight according to the second deviation and the weight function; and
  • obtaining the pixel value of the full-color image pixel to be converted after conversion into a first color image pixel according to the first weight, the second weight, the pixel value of the first color image pixel closest to the full-color image pixel to be converted on the first side, and the pixel value of the first color image pixel adjacent to the full-color image pixel to be converted on the second side.
  • in some embodiments, when the feature direction is the first direction and the first color image pixel closest in the feature direction is on the second side of the full-color image pixel to be converted, step 02 includes: obtaining a third deviation according to the pixel value of the full-color image pixel to be converted and the pixel value of one full-color image pixel adjacent to it on the second side, and obtaining a fourth deviation according to the pixel value of the full-color image pixel to be converted and the pixel values of two full-color image pixels adjacent to it on the first side, the first side being opposite to the second side;
  • obtaining a third weight according to the third deviation and the preset weight function, and obtaining a fourth weight according to the fourth deviation and the weight function; and obtaining the pixel value of the full-color image pixel to be converted after conversion into a first color image pixel according to the third weight, the fourth weight, the pixel value of the first color image pixel adjacent to the full-color image pixel to be converted on the first side, and the pixel value of the first color image pixel closest to the full-color image pixel to be converted on the second side.
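The directional case just described can be illustrated with the following sketch. It assumes a horizontal first direction, a Gaussian as the preset weight function, and that the second deviation is taken against the mean of the two panchromatic neighbours on the opposite side; the text leaves these exact formulas open, so the constants and the blending rule here are assumptions.

```python
import math

def convert_w_pixel_directional(w0, w_side1, w_side2_pair, a_side1, a_side2, sigma=16.0):
    """Blend the two nearest first-color neighbours of the panchromatic pixel W0,
    weighting each side by how consistent the panchromatic values are toward it."""
    d1 = abs(w0 - w_side1)                              # first deviation: one W neighbour on side 1
    d2 = abs(w0 - sum(w_side2_pair) / 2.0)              # second deviation: two W neighbours on side 2
    k1 = math.exp(-(d1 * d1) / (2.0 * sigma * sigma))   # first weight from the weight function
    k2 = math.exp(-(d2 * d2) / (2.0 * sigma * sigma))   # second weight
    return (k1 * a_side1 + k2 * a_side2) / (k1 + k2)    # converted first-color value for W0
```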
  • in some embodiments, step 02, processing the full-color image pixels in the first image to convert them into first color image pixels, includes:
  • the step of obtaining the feature direction of the full-color image pixel to be converted includes: obtaining gradient values in multiple directions of the full-color image pixel to be converted, and selecting the direction corresponding to the smallest gradient value as the feature direction of the full-color image pixel.
  • the step of processing the second color image pixels and the third color image pixels in the second image to convert them into first color image pixels to obtain the third image includes: when the second color image pixel is in a flat area, obtaining the pixel value of the second color image pixel to be converted after conversion into a first color image pixel according to the pixel values of the adjacent first color image pixels in multiple directions of the second color image pixel to be converted; and/or when the third color image pixel is in a flat area, obtaining the pixel value of the third color image pixel to be converted after conversion into a first color image pixel according to the pixel values of the adjacent first color image pixels in multiple directions of the third color image pixel to be converted.
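A minimal sketch of this flat-area branch follows. It assumes the "adjacent first color image pixels in multiple directions" are the four horizontal and vertical neighbours (which, in the second image, are all first-color pixels) and that they are simply averaged; the exact weighting is not spelled out in this passage.

```python
import numpy as np

def convert_color_pixel_flat(second_image: np.ndarray, y: int, x: int) -> float:
    """Replace the second- or third-color pixel at (y, x) of the second image by a
    value interpolated from its nearest first-color neighbours in several directions."""
    neighbours = [second_image[y - 1, x], second_image[y + 1, x],
                  second_image[y, x - 1], second_image[y, x + 1]]
    return float(np.mean(neighbours))
```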
  • step 03, processing the second color image pixels and the third color image pixels in the second image to convert them into first color image pixels to obtain the third image, includes:
  • the step of obtaining the feature direction of the second color image pixel to be converted when the second color image pixel is in a non-flat area includes obtaining gradient values in multiple directions of the second color image pixel to be converted, and selecting the direction corresponding to the smallest gradient value as the feature direction of the second color image pixel.
  • acquiring the feature direction of the third color image pixel to be converted includes acquiring gradient values in multiple directions of the third color image pixel to be converted, and selecting the direction corresponding to the smallest gradient value as the feature direction of the third color image pixel.
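The feature-direction selection can be sketched as below. The four candidate directions, the 5x5 window, and the sum-of-absolute-differences gradient are assumptions; the text only requires computing a gradient value per direction and keeping the direction with the smallest one.

```python
import numpy as np

def feature_direction(window: np.ndarray) -> str:
    """Pick the feature direction of the centre pixel of a square (at least 5x5)
    window as the direction with the smallest gradient value."""
    win = window.astype(float)
    c = win.shape[0] // 2
    grads = {
        "horizontal": abs(win[c, c - 2] - win[c, c]) + abs(win[c, c + 2] - win[c, c]),
        "vertical":   abs(win[c - 2, c] - win[c, c]) + abs(win[c + 2, c] - win[c, c]),
        "diag_down":  abs(win[c - 1, c - 1] - win[c, c]) + abs(win[c + 1, c + 1] - win[c, c]),
        "diag_up":    abs(win[c - 1, c + 1] - win[c, c]) + abs(win[c + 1, c - 1] - win[c, c]),
    }
    return min(grads, key=grads.get)  # direction whose gradient value is smallest
```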
  • step 04: processing the third image according to the first image to obtain the second color intermediate image and the third color intermediate image includes:
  • step 041: perform bilateral filtering on the third image according to the first image to obtain a second color intermediate image and a third color intermediate image.
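Step 041 names bilateral filtering of the third image "according to the first image". One common reading is a joint (cross) bilateral filter whose range term is computed on the first image used as a guide; the sketch below follows that reading, and the kernel radius and sigmas are assumptions.

```python
import numpy as np

def joint_bilateral_pixel(third_image, guide, y, x, radius=2, sigma_s=1.5, sigma_r=12.0):
    """Filter third_image at (y, x) with spatial Gaussian weights plus range Gaussian
    weights taken from the guide (the first image)."""
    acc, norm = 0.0, 0.0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            ws = np.exp(-(dy * dy + dx * dx) / (2.0 * sigma_s ** 2))   # spatial weight
            dr = float(guide[y + dy, x + dx]) - float(guide[y, x])     # guide-image difference
            wr = np.exp(-(dr * dr) / (2.0 * sigma_r ** 2))             # range weight
            acc += ws * wr * float(third_image[y + dy, x + dx])
            norm += ws * wr
    return acc / norm
```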
  • the present application further provides an image processing system 100 .
  • the image processing system 100 includes an image sensor 10 and a processor 20 .
  • the image sensor 10 includes a pixel array 11 (shown in FIG. 3 ), and the pixel array 11 includes a plurality of full-color photosensitive pixels W and a plurality of color photosensitive pixels.
  • the color photosensitive pixel includes a first color photosensitive pixel A, a second color photosensitive pixel B, and a third color photosensitive pixel C with different spectral responses; the color photosensitive pixel has a narrower spectral response than the panchromatic photosensitive pixel W, and both the second color photosensitive pixel B and the third color photosensitive pixel C have a narrower spectral response than the first color photosensitive pixel A.
  • the pixel array 11 is exposed to obtain a first image.
  • the first image includes full-color image pixels generated by the full-color photosensitive pixels, first color image pixels generated by the first color photosensitive pixels, second color image pixels generated by the second color photosensitive pixels, and third color image pixels generated by the third color photosensitive pixels.
  • the processor 20 is configured to: process the full-color image pixels in the first image to convert them into first color image pixels to obtain a second image; process the second color image pixels and the third color image pixels in the second image to convert them into first color image pixels to obtain a third image; process the third image according to the first image to obtain a second color intermediate image and a third color intermediate image, the second color intermediate image including the second color image pixels and the third color intermediate image including the third color image pixels; and fuse the third image, the second color intermediate image, and the third color intermediate image to obtain a target image, the target image including a plurality of color image pixels arranged in a Bayer array.
  • the processor 20 is further configured to: preset a first calculation window centered on the full-color image pixel to be converted when the full-color image pixel is in a flat area; obtain the pixel values of all pixels in the first calculation window; and obtain the pixel value of the full-color image pixel to be converted after conversion into a first color image pixel according to the pixel values of all pixels in the first calculation window, the pixel value of the full-color image pixel to be converted, a preset first weight matrix, and a preset second weight matrix.
  • the processor 20 is further configured to: obtain the feature direction of the full-color image pixel to be converted when the full-color image pixel is in a non-flat area; when the feature direction is the first direction and the first color image pixel closest in the feature direction is on the first side of the full-color image pixel to be converted, obtain a first deviation according to the pixel value of the full-color image pixel to be converted and the pixel value of one full-color image pixel adjacent to it on the first side, and obtain a second deviation according to the pixel value of the full-color image pixel to be converted and the pixel values of two full-color image pixels adjacent to it on the second side, the first side being opposite to the second side; obtain a first weight according to the first deviation and a preset weight function, and obtain a second weight according to the second deviation and the weight function; and obtain the pixel value of the full-color image pixel to be converted after conversion into a first color image pixel according to the first weight, the second weight, the pixel value of the first color image pixel closest to the full-color image pixel to be converted on the first side, and the pixel value of the first color image pixel adjacent to the full-color image pixel to be converted on the second side.
  • the processor 20 is further configured to: obtain the feature direction of the full-color image pixel to be converted when the full-color image pixel is in a non-flat area; when the feature direction is the first direction and the first color image pixel closest in the feature direction is on the second side of the full-color image pixel to be converted, obtain a third deviation according to the pixel value of the full-color image pixel to be converted and the pixel value of one full-color image pixel adjacent to it on the second side, and obtain a fourth deviation according to the pixel value of the full-color image pixel to be converted and the pixel values of two full-color image pixels adjacent to it on the first side, the first side being opposite to the second side; obtain a third weight according to the third deviation and a preset weight function, and obtain a fourth weight according to the fourth deviation and the weight function; and obtain the pixel value of the full-color image pixel to be converted after conversion into a first color image pixel according to the third weight, the fourth weight, the pixel value of the first color image pixel adjacent to the full-color image pixel to be converted on the first side, and the pixel value of the first color image pixel closest to the full-color image pixel to be converted on the second side.
  • the processor 20 is further configured to: obtain the feature direction of the full-color image pixel to be converted when the full-color image pixel is in a non-flat area; when the feature direction is the second direction, preset a second calculation window centered on the full-color image pixel to be converted; obtain the pixel values of all pixels in the second calculation window; and obtain the pixel value of the full-color image pixel to be converted after conversion into a first color image pixel according to the pixel values of all pixels in the second calculation window, the pixel value of the full-color image pixel to be converted, a preset third weight matrix, and a preset fourth weight matrix.
  • the processor 20 is further configured to: obtain the feature direction of the full-color image pixel to be converted when the full-color image pixel is in a non-flat area; when the feature direction is the third direction, preset a third calculation window centered on the full-color image pixel to be converted; obtain the pixel values of all pixels in the third calculation window, and obtain the converted pixel values of all the first color image pixels in the third calculation window according to the pixel values of a plurality of full-color image pixels around each first color image pixel; obtain a fifth weight matrix; and obtain the pixel value of the full-color image pixel to be converted after conversion into a first color image pixel according to the converted pixel values of the first color image pixels, the fifth weight matrix, and the distance weight.
  • the processor 20 is further configured to obtain gradient values in multiple directions of the full-color image pixel to be converted, and select the direction corresponding to the smallest gradient value as the feature direction of the full-color image pixel.
  • the processor 20 is further configured to: when the second color image pixel is in a flat area, obtain the pixel value of the second color image pixel to be converted after conversion into a first color image pixel according to the pixel values of the adjacent first color image pixels in multiple directions of the second color image pixel to be converted; and/or when the third color image pixel is in a flat area, obtain the pixel value of the third color image pixel to be converted after conversion into a first color image pixel according to the pixel values of the adjacent first color image pixels in multiple directions of the third color image pixel to be converted.
  • the processor 20 is further configured to: obtain the feature direction of the second color image pixel to be converted when the second color image pixel is in a non-flat area; and obtain the pixel value of the second color image pixel to be converted after conversion into a first color image pixel according to the feature direction and the pixel values of the two first color image pixels adjacent to the second color image pixel to be converted in that direction.
  • the processor 20 is further configured to: obtain the feature direction of the third color image pixel to be converted when the third color image pixel is in a non-flat area; and obtain the pixel value of the third color image pixel to be converted after conversion into a first color image pixel according to the feature direction and the pixel values of the two first color image pixels adjacent to the third color image pixel to be converted in that direction.
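For this non-flat branch, a simple sketch is to interpolate along the feature direction from the two adjacent first-color pixels. Averaging the two neighbours and the offset table below are assumptions; the text only states that the feature direction and those two pixel values are used.

```python
def convert_color_pixel_directional(second_image, y, x, direction):
    """Interpolate the first-color value at a second- or third-color pixel of the
    second image from its two first-color neighbours along the feature direction."""
    offsets = {
        "horizontal": ((0, -1), (0, 1)),
        "vertical": ((-1, 0), (1, 0)),
    }
    (dy1, dx1), (dy2, dx2) = offsets[direction]
    return (float(second_image[y + dy1, x + dx1]) + float(second_image[y + dy2, x + dx2])) / 2.0
```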
  • the processor 20 is further configured to obtain gradient values in multiple directions of the second color image pixel to be converted, and select the direction corresponding to the smallest gradient value as the characteristic direction of the second color image pixel; obtaining For the gradient values in multiple directions of the image pixels of the third color to be converted, the direction corresponding to the smallest gradient value is selected as the characteristic direction of the image pixels of the third color.
  • the processor 20 is further configured to perform bilateral filtering processing on the third image according to the first image, so as to obtain the second color intermediate image and the third color intermediate image.
  • the present application further provides an electronic device 1000 .
  • the electronic device 1000 according to the embodiment of the present application includes a lens 300 , a housing 200 , and the image processing system 100 according to any one of the foregoing embodiments.
  • the lens 300 and the image processing system 100 are combined with the casing 200 .
  • the lens 300 cooperates with the image sensor 10 of the image processing system 100 for imaging.
  • the present application also provides a non-volatile computer-readable storage medium 400 containing a computer program.
  • when the computer program is executed by the processor 60, the processor 60 executes the image processing method of any one of the above embodiments.
  • the present application provides an image processing method for an image sensor 10 .
  • the image sensor 10 includes a pixel array 11 including a plurality of panchromatic photosensitive pixels W and a plurality of color photosensitive pixels.
  • the color photosensitive pixel includes a first color photosensitive pixel A, a second color photosensitive pixel B, and a third color photosensitive pixel C with different spectral responses; the color photosensitive pixel has a narrower spectral response than the panchromatic photosensitive pixel W, and both the second color photosensitive pixel B and the third color photosensitive pixel C have a narrower spectral response than the first color photosensitive pixel A.
  • the image processing method includes:
  • the pixel array 11 is exposed to obtain a first image.
  • the first image includes full-color image pixels generated by the full-color photosensitive pixels, first color image pixels generated by the first color photosensitive pixels, second color image pixels generated by the second color photosensitive pixels, and third color image pixels generated by the third color photosensitive pixels;
  • the present application further provides an image processing system 100 .
  • the image processing system 100 includes an image sensor 10 and a processor 20 .
  • the image sensor 10 includes a pixel array 11 (shown in FIG. 3 ), and the pixel array 11 includes a plurality of full-color photosensitive pixels W and a plurality of color photosensitive pixels.
  • the color photosensitive pixel includes a first color photosensitive pixel A, a second color photosensitive pixel B, and a third color photosensitive pixel C with different spectral responses; the color photosensitive pixel has a narrower spectral response than the panchromatic photosensitive pixel W, and both the second color photosensitive pixel B and the third color photosensitive pixel C have a narrower spectral response than the first color photosensitive pixel A.
  • Step 01 can be implemented by the image sensor 10, and steps 02, 03, 04 and 05 can be implemented by the processor 20.
  • the image sensor 10 is configured to control the exposure of the pixel array 11 to obtain a first image, and the first image includes the full-color image pixels W generated by the full-color photosensitive pixels W, the first color image pixels A generated by the first color photosensitive pixels A, the second color image pixels B generated by the second color photosensitive pixels B, and the third color image pixels C generated by the third color photosensitive pixels C.
  • the processor 20 is configured to: process the full-color image pixels W in the first image to convert them into first color image pixels A to obtain a second image; process the second color image pixels B and the third color image pixels C in the second image to convert them into first color image pixels A to obtain a third image; process the third image according to the first image to obtain a second color intermediate image and a third color intermediate image, the second color intermediate image including the second color image pixels B and the third color intermediate image including the third color image pixels C; and fuse the third image, the second color intermediate image, and the third color intermediate image to obtain a target image, the target image including a plurality of color image pixels arranged in a Bayer array.
  • in the image processing method, the image processing system 100, the electronic device 1000, and the computer-readable storage medium 400 of the present application, full-color photosensitive pixels W are added to the pixel array 11, and the full-color image pixels W are converted into the color image pixels with the wider spectral response (the first color image pixels) to obtain a second image; the second image is then further processed to obtain a target image arranged in a Bayer array.
  • this solves the problem that an image processor cannot directly process an image in which the image pixels are arranged in a non-Bayer array; at the same time, since full-color photosensitive pixels W are added to the pixel array 11, the resolving power and the signal-to-noise ratio of the finally obtained image can be improved, which improves the photographing effect at night.
  • FIG. 3 is a schematic diagram of the image sensor 10 in the embodiment of the present application.
  • the image sensor 10 includes a pixel array 11 , a vertical driving unit 12 , a control unit 13 , a column processing unit 14 and a horizontal driving unit 15 .
  • the image sensor 10 may use a complementary metal oxide semiconductor (CMOS, Complementary Metal Oxide Semiconductor) photosensitive element or a charge-coupled device (CCD, Charge-coupled Device) photosensitive element.
  • the pixel array 11 includes a plurality of photosensitive pixels 110 (shown in FIG. 4 ) arranged two-dimensionally in an array form (ie, arranged in a two-dimensional matrix form), and each photosensitive pixel 110 includes a photoelectric conversion element 1111 (shown in FIG. 5 ) .
  • Each photosensitive pixel 110 converts light into electric charge according to the intensity of light incident thereon.
  • the vertical driving unit 12 includes a shift register and an address decoder.
  • the vertical driving unit 12 includes readout scan and reset scan functions.
  • the readout scan refers to sequentially scanning the unit photosensitive pixels 110 row by row, and reading signals from these unit photosensitive pixels 110 row by row. For example, the signal output by each photosensitive pixel 110 in the selected and scanned photosensitive pixel row is transmitted to the column processing unit 14 .
  • the reset scan is used to reset the charges, and the photocharges of the photoelectric conversion element are discarded, so that the accumulation of new photocharges can be started.
  • the signal processing performed by the column processing unit 14 is correlated double sampling (CDS) processing.
  • the reset level and the signal level output from each photosensitive pixel 110 in the selected photosensitive pixel row are taken out, and the level difference is calculated.
  • the signals of the photosensitive pixels 110 in one row are obtained.
  • the column processing unit 14 may have an analog-to-digital (A/D) conversion function for converting an analog pixel signal into a digital format.
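The column processing described above (correlated double sampling followed by A/D conversion) reduces to simple per-pixel arithmetic; the sketch below models it, with the reference voltage and bit depth as assumptions.

```python
def cds_and_quantize(reset_level: float, signal_level: float, vref: float = 1.0, bits: int = 10) -> int:
    """Model of the column processing unit 14: take the level difference between the
    sampled reset level and the sampled signal level (CDS), then convert it to a
    digital code (A/D)."""
    diff = abs(signal_level - reset_level)            # magnitude of the level difference (CDS);
                                                      # the sign convention depends on the readout chain
    diff = max(0.0, min(diff, vref))                  # clamp to the conversion range
    return int(round(diff / vref * (2 ** bits - 1)))  # quantize to a 'bits'-bit code
```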
  • the horizontal driving unit 15 includes a shift register and an address decoder.
  • the horizontal driving unit 15 sequentially scans the pixel array 11 column by column. Through the selective scanning operation performed by the horizontal driving unit 15, each photosensitive pixel column is sequentially processed by the column processing unit 14 and sequentially output.
  • control unit 13 configures timing signals according to the operation mode, and uses various timing signals to control the vertical driving unit 12 , the column processing unit 14 and the horizontal driving unit 15 to work together.
  • FIG. 4 is a schematic diagram of a photosensitive pixel 110 in an embodiment of the present application.
  • the photosensitive pixel 110 includes a pixel circuit 111 , a filter 112 , and a microlens 113 .
  • the microlens 113 is used for condensing light
  • the filter 112 is used for passing light of a certain wavelength band and filtering out the light of other wavelength bands.
  • the pixel circuit 111 is used to convert the received light into electrical signals, and provide the generated electrical signals to the column processing unit 14 shown in FIG. 3 .
  • FIG. 5 is a schematic diagram of a pixel circuit 111 of a photosensitive pixel 110 in an embodiment of the present application.
  • the pixel circuit 111 in FIG. 5 can be applied to each photosensitive pixel 110 (shown in FIG. 4 ) in the pixel array 11 shown in FIG. 3 .
  • the working principle of the pixel circuit 111 will be described below with reference to FIGS. 3 to 5 .
  • the pixel circuit 111 includes a photoelectric conversion element 1111 (e.g., a photodiode), an exposure control circuit (e.g., a transfer transistor 1112), a reset circuit (e.g., a reset transistor 1113), an amplification circuit (e.g., an amplification transistor 1114), and a selection circuit (e.g., a selection transistor 1115).
  • the transfer transistor 1112 , the reset transistor 1113 , the amplifying transistor 1114 and the selection transistor 1115 are, for example, MOS transistors, but are not limited thereto.
  • the photoelectric conversion element 1111 includes a photodiode, and the anode of the photodiode is connected to the ground, for example.
  • Photodiodes convert received light into electrical charges.
  • the cathode of the photodiode is connected to the floating diffusion unit FD via an exposure control circuit (eg, transfer transistor 1112).
  • the floating diffusion unit FD is connected to the gate of the amplifier transistor 1114 and the source of the reset transistor 1113 .
  • the exposure control circuit is the transfer transistor 1112
  • the control terminal TG of the exposure control circuit is the gate of the transfer transistor 1112 .
  • when a pulse of an active level (e.g., a VPIX level) is transmitted to the control terminal TG (the gate of the transfer transistor 1112), the transfer transistor 1112 is turned on.
  • the transfer transistor 1112 transfers the charges photoelectrically converted by the photodiode to the floating diffusion unit FD.
  • the drain of the reset transistor 1113 is connected to the pixel power supply VPIX.
  • the source of the reset transistor 1113 is connected to the floating diffusion unit FD.
  • a pulse of an effective reset level is transmitted to the gate of the reset transistor 1113 via the reset line, and the reset transistor 1113 is turned on.
  • the reset transistor 1113 resets the floating diffusion unit FD to the pixel power supply VPIX.
  • the gate of the amplification transistor 1114 is connected to the floating diffusion unit FD.
  • the drain of the amplification transistor 1114 is connected to the pixel power supply VPIX.
  • the amplifying transistor 1114 outputs the reset level through the output terminal OUT via the selection transistor 1115 .
  • the amplifying transistor 1114 outputs the signal level through the output terminal OUT via the selection transistor 1115 .
  • the drain of the selection transistor 1115 is connected to the source of the amplification transistor 1114 .
  • the source of the select transistor 1115 is connected to the column processing unit 14 in FIG. 3 through the output terminal OUT.
  • the selection transistor 1115 is turned on.
  • the signal output by the amplification transistor 1114 is transmitted to the column processing unit 14 through the selection transistor 1115 .
  • the pixel structure of the pixel circuit 111 in the embodiment of the present application is not limited to the structure shown in FIG. 5 .
  • the pixel circuit 111 may also have a three-transistor pixel structure, in which the functions of the amplification transistor 1114 and the selection transistor 1115 are performed by one transistor.
  • the exposure control circuit is not limited to the mode of a single transfer transistor 1112, and other electronic devices or structures with the function of controlling the conduction of the control terminal can be used as the exposure control circuit in the embodiments of the present application.
  • the implementation using a single transfer transistor 1112 is simple, low in cost, and easy to control.
  • FIG. 6 is a schematic diagram of the arrangement of the photosensitive pixels 110 (shown in FIG. 4 ) in the minimum repeating unit according to an embodiment of the present application.
  • the minimum repeating unit is 16 photosensitive pixels 110 in 4 rows and 4 columns, and the subunit is 4 photosensitive pixels 110 in 2 rows and 2 columns.
  • the arrangement is as follows:
  • W represents a full-color photosensitive pixel
  • A represents the first color photosensitive pixel in the plurality of color photosensitive pixels
  • B represents the second color photosensitive pixel in the plurality of color photosensitive pixels
  • C represents the third color photosensitive pixel in the plurality of color photosensitive pixels.
  • the panchromatic pixels W are arranged in the first diagonal direction D1 (that is, the direction connecting the upper left corner and the lower right corner in FIG. 6), the color pixels are arranged in the second diagonal direction D2 (for example, the direction connecting the upper right corner and the lower left corner in FIG. 6), and the first diagonal direction D1 is different from the second diagonal direction D2.
  • the first diagonal direction D1 and the second diagonal direction D2 are not limited to the diagonals themselves, and also include directions parallel to the diagonals.
  • the explanation of the first diagonal direction D1 and the second diagonal direction D2 in the following embodiments is the same as here.
  • the "direction" here is not a single pointing, but can be understood as the concept of a "straight line" indicating the arrangement, with pointings toward both ends of the straight line.
  • FIG. 7 is a schematic diagram of the arrangement of the photosensitive pixels 110 (shown in FIG. 4 ) in the minimum repeating unit according to an embodiment of the present application.
  • the minimum repeating unit is 16 photosensitive pixels 110 in 4 rows and 4 columns, and the subunit is 4 photosensitive pixels 110 in 2 rows and 2 columns.
  • the arrangement is as follows:
  • W represents a full-color photosensitive pixel
  • A represents the first color photosensitive pixel in the plurality of color photosensitive pixels
  • B represents the second color photosensitive pixel in the plurality of color photosensitive pixels
  • C represents the third color photosensitive pixel in the plurality of color photosensitive pixels.
  • the panchromatic pixels W are arranged in the second diagonal direction D2 (that is, the direction connecting the upper right corner and the lower left corner in FIG. 7), and the color pixels are arranged in the first diagonal direction D1 (for example, the direction connecting the upper left corner and the lower right corner in FIG. 7).
  • the first diagonal and the second diagonal are perpendicular.
  • FIG. 8 is a schematic diagram of the arrangement of photosensitive pixels 110 (shown in FIG. 4 ) in the minimum repeating unit according to an embodiment of the present application.
  • the minimum repeating unit is 36 photosensitive pixels 110 in 6 rows and 6 columns, and the subunit is 4 photosensitive pixels 110 in 2 rows and 2 columns.
  • the arrangement is as follows:
  • W represents a full-color photosensitive pixel
  • A represents the first color photosensitive pixel in the plurality of color photosensitive pixels
  • B represents the second color photosensitive pixel in the plurality of color photosensitive pixels
  • C represents the third color photosensitive pixel in the plurality of color photosensitive pixels.
  • the panchromatic pixels W are arranged in the first diagonal direction D1 (that is, the direction connecting the upper left corner and the lower right corner in FIG. 8), the color pixels are arranged in the second diagonal direction D2 (for example, the direction connecting the upper right corner and the lower left corner in FIG. 8), and the first diagonal direction D1 is different from the second diagonal direction D2.
  • FIG. 9 is a schematic diagram of the arrangement of the photosensitive pixels 110 (shown in FIG. 4 ) in the minimum repeating unit according to an embodiment of the present application.
  • the minimum repeating unit is 36 photosensitive pixels 110 in 6 rows and 6 columns, and the subunit is 4 photosensitive pixels 110 in 2 rows and 2 columns.
  • the arrangement is as follows:
  • W represents a full-color photosensitive pixel
  • A represents the first color photosensitive pixel in the plurality of color photosensitive pixels
  • B represents the second color photosensitive pixel in the plurality of color photosensitive pixels
  • C represents the third color photosensitive pixel in the plurality of color photosensitive pixels.
  • the panchromatic pixels W are arranged in the second diagonal direction D2 (that is, the direction connecting the upper right corner and the lower left corner in FIG. 9), and the color pixels are arranged in the first diagonal direction D1 (for example, the direction connecting the upper left corner and the lower right corner in FIG. 9).
  • the first diagonal and the second diagonal are perpendicular.
  • FIG. 10 is a schematic diagram of the arrangement of photosensitive pixels 110 (shown in FIG. 4 ) in the minimum repeating unit according to an embodiment of the present application.
  • the minimum repeating unit is 64 photosensitive pixels 110 in 8 rows and 8 columns, and the subunit is 4 photosensitive pixels 110 in 2 rows and 2 columns.
  • the arrangement is as follows:
  • W represents a full-color photosensitive pixel
  • A represents the first color photosensitive pixel in the plurality of color photosensitive pixels
  • B represents the second color photosensitive pixel in the plurality of color photosensitive pixels
  • C represents the third color photosensitive pixel in the plurality of color photosensitive pixels.
  • the panchromatic pixels W are arranged in the first diagonal direction D1 (that is, the direction connecting the upper left corner and the lower right corner in FIG. 10), the color pixels are arranged in the second diagonal direction D2 (for example, the direction connecting the upper right corner and the lower left corner in FIG. 10), and the first diagonal direction D1 is different from the second diagonal direction D2.
  • FIG. 11 is a schematic diagram of the arrangement of photosensitive pixels 110 (shown in FIG. 4 ) in the minimum repeating unit according to an embodiment of the present application.
  • the minimum repeating unit is 64 photosensitive pixels 110 in 8 rows and 8 columns, and the subunit is 4 photosensitive pixels 110 in 2 rows and 2 columns.
  • the arrangement is as follows:
  • W represents a full-color photosensitive pixel
  • A represents a first color photosensitive pixel in a plurality of color photosensitive pixels
  • B represents a second color photosensitive pixel in a plurality of color photosensitive pixels
  • C represents the third color photosensitive pixel in the plurality of color photosensitive pixels.
  • the panchromatic pixels W are arranged in the second diagonal direction D2 (that is, the direction connecting the upper right corner and the lower left corner in FIG. 11), and the color pixels are arranged in the first diagonal direction D1 (for example, the direction connecting the upper left corner and the lower right corner in FIG. 11).
  • the first diagonal and the second diagonal are perpendicular.
  • the first color photosensitive pixel A may be a green photosensitive pixel G, the second color photosensitive pixel B may be a red photosensitive pixel R, and the third color photosensitive pixel C may be a blue photosensitive pixel Bu.
  • alternatively, the first color photosensitive pixel A may be a yellow photosensitive pixel Y, the second color photosensitive pixel B may be a red photosensitive pixel R, and the third color photosensitive pixel C may be a blue photosensitive pixel Bu.
  • alternatively, the first color photosensitive pixel A may be a cyan photosensitive pixel Cy, the second color photosensitive pixel B may be a magenta photosensitive pixel M, and the third color photosensitive pixel C may be a yellow photosensitive pixel Y.
  • the response wavelength band of the panchromatic photosensitive pixel W may be the visible light wavelength band (eg, 400 nm-760 nm).
  • the full-color photosensitive pixel W is provided with an infrared filter, so as to realize the filtering of infrared light.
  • the response band of the panchromatic photosensitive pixel W may also be the visible light band and the near-infrared band (for example, 400 nm to 1000 nm), matching the response band of the photoelectric conversion element 1111 (shown in FIG. 5) in the image sensor 10 (shown in FIG. 2).
  • the panchromatic photosensitive pixel W may not be provided with a filter, or may be provided with a filter that allows light of all wavelength bands to pass; in this case, the response band of the panchromatic photosensitive pixel W is determined by the response band of the photoelectric conversion element 1111, that is, the two match.
  • the embodiments of the present application include, but are not limited to, the above-mentioned waveband ranges.
  • the following embodiments are described by taking the first single-color photosensitive pixel A as the green photosensitive pixel G, the second single-color photosensitive pixel B as the red photosensitive pixel R, and the third single-color photosensitive pixel as the blue photosensitive pixel Bu.
  • the control unit 13 controls the exposure of the pixel array 11 (shown in FIG. 3 ) to obtain a first image.
  • the first image includes full-color image pixels W generated by full-color photosensitive pixels W, first-color image pixels A generated by first-color photosensitive pixels A, and second-color image pixels B generated by second-color photosensitive pixels B , and the third color image pixel C generated by the third color photosensitive pixel C.
  • the processor 20 acquires the first image obtained after the exposure, and performs subsequent processing on the full-color image pixels W, the first color image pixels A, the second color image pixels B, and the third color image pixels C in the first image to obtain the target image.
  • step 02 processing the full-color image pixels in the first image to convert them into first-color image pixels to obtain the second image includes:
  • step 0201, step 0202, step 0203 and step 0204 can all be implemented by the processor 20. That is to say, the processor 20 is further configured to: determine whether the full-color image pixel W0 to be converted is in a flat area; when the full-color image pixel W0 to be converted is in a flat area, preset a first calculation window C1 centered on the full-color image pixel W0 to be converted; obtain the pixel values of all pixels in the first calculation window C1; and obtain the pixel value of the full-color image pixel W0 to be converted after conversion into the first color image pixel A0 according to the pixel values of all pixels in the first calculation window C1, the pixel value of the full-color image pixel W0 to be converted, the preset first weight matrix N1, and the preset second weight matrix N2.
  • specifically, whether the full-color image pixel W0 to be converted is in a flat area can be determined by presetting a detection window centered on the full-color image pixel W0 to be converted and calculating the standard deviation of the pixel values of the image pixels in the detection window: if the standard deviation is greater than a preset value, it is determined that the full-color image pixel W0 to be converted is not in a flat area, that is, it is in a non-flat area; if the standard deviation is less than the preset value, it is determined that the full-color image pixel W0 to be converted is in a flat area.
  • of course, other methods can also be used to determine whether the full-color image pixel W0 to be converted is in a flat area, which are not listed here.
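The standard-deviation test just described can be written directly; the window radius and the preset threshold below are assumptions.

```python
import numpy as np

def is_flat(image: np.ndarray, y: int, x: int, radius: int = 2, preset_value: float = 8.0) -> bool:
    """Flat-area test: preset a detection window centred on the pixel to be converted
    and compare the standard deviation of its pixel values against a preset value."""
    window = image[y - radius:y + radius + 1, x - radius:x + radius + 1].astype(float)
    return float(window.std()) < preset_value
```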
  • when the full-color image pixel W0 to be converted is in a flat area, a first calculation window C1 centered on the full-color image pixel W0 to be converted is preset, and the pixel values of all pixels within the first calculation window C1 are obtained.
  • for example, the first calculation window C1 is a 7×7 window: the full-color image pixel W0 to be converted is placed at the center of the first calculation window C1, and the pixel values of all image pixels in the first calculation window C1 are acquired.
  • the first calculation window C1 is a virtual calculation window, not an actual structure; and the size of the first calculation window C1 can be arbitrarily changed according to actual needs, as is the case for all calculation windows mentioned below, It is not repeated here.
  • the processor 20 presets the first calculation window C1 and obtains all pixel values in the first calculation window C1
  • the processor 20 presets the first weight matrix N1 and presets according to all pixel values in the first calculation window C1
  • the second weight matrix N2 can obtain the first conversion value M1 and the second conversion value M2.
  • the processor 20 obtains the pixel value of the panchromatic image pixel W0 to be converted into the first color image pixel A0 according to the panchromatic image pixel W0 to be converted, the first conversion value M1 and the second conversion value M2.
  • the pixel value of the full-color image pixel W0 to be converted is multiplied by the first conversion coefficient to obtain the converted first conversion coefficient.
  • the processor 20 obtains the preset first weight matrix N1 and the preset first weight matrix N1 according to the position information of the first color image pixel A1 closest to the full-color image pixel W0 to be converted. It is assumed that the second weight matrix N2, the first weight matrix N1 and the second weight matrix N2 are all matrices corresponding to the first calculation window C1. And when the first color image pixel A1 closest to the panchromatic image pixel W0 to be converted is at a different position, the preset first weight matrix N1 and the preset second weight matrix N2 are also different.
  • the processor 20 obtains the first weight according to the column coordinates of the first color image pixel A1 that is in the same row as the panchromatic image pixel W0 to be converted and is closest to the panchromatic image pixel W0 to be converted
  • the preset first weight matrix Preset second weight matrix If the column coordinate of the first color image pixel A1 is greater than the column coordinate of the panchromatic image pixel W0 to be converted, as shown in FIG.
  • the panchromatic image pixel W0 to be converted is placed in the 3rd row and 3rd column of the first calculation window , the first color image pixel A1 closest to the panchromatic image pixel W0 to be converted is in the 3rd row and the 4th column of the first calculation window C1, that is, the closest first color image pixel A1 in the panchromatic image pixel W0 to be converted on the right.
  • the processor 20 may also use the row coordinates of the first color image pixel A1 that is in the same column as the panchromatic image pixel W0 to be converted and is closest to the panchromatic image pixel W0 to be converted to
  • the first weight matrix N1 and the second weight matrix N2 are obtained, which are not limited here.
  • step 02 is to process the full-color image pixels in the first image to convert them into first-color image pixels, so as to obtain the second image, further comprising:
  • the characteristic direction is the first direction and the first color image pixel closest to the characteristic direction is on the first side of the panchromatic image pixel to be converted, according to the pixel value of the panchromatic image pixel to be converted, in the first color image pixel to be converted;
  • the first deviation is obtained from the pixel value of a panchromatic image pixel adjacent to the panchromatic image pixel to be converted on one side, and the difference between the pixel value of the panchromatic image pixel to be converted and the panchromatic image pixel to be converted on the second side is obtained.
  • the second weight According to the first weight, the second weight, the pixel value of the first color image pixel that is most adjacent to the panchromatic image pixel to be converted on the first side, and the pixel value of the panchromatic image pixel to be converted on the second side.
  • the pixel value of an adjacent first-color image pixel obtains the pixel value of the to-be-converted full-color image pixel converted into the first-color image pixel.
  • step 0205 , step 0206 , step 0207 and step 0208 can all be implemented by the processor 20 . That is to say, the processor 20 is further configured to: when the panchromatic image pixel W0 is in the non-flat region, obtain the characteristic direction of the panchromatic image pixel W0 to be converted; the characteristic direction is the first direction H, and the characteristic direction is When the most adjacent first color image pixel A1 is on the first side of the panchromatic image pixel A0 to be converted, according to the pixel value of the panchromatic image pixel A0 to be converted, on the first side and the panchromatic image pixel A0 to be converted The pixel value of one adjacent panchromatic image pixel W obtains the first deviation L1, and according to the pixel value of the panchromatic image pixel W0 to be converted, two adjacent panchromatic image pixels W0 on the second side are adjacent to the panchromatic image pixel W0 to be converted.
  • the pixel value of the full-color image pixel W obtains a second deviation L2, and the first side is opposite to the second side; the first weight F(L1) is obtained according to the first deviation L1 and the preset weight function F(x), and according to The second deviation L2 and the weight function F(x) obtain the second weight F(L2); and according to the first weight F(L1), the second weight F(L2), on the first side and the panchromatic image pixel to be converted
  • the pixel value of one first color image pixel A that is most adjacent to W0, and the pixel value of one first color image pixel A adjacent to the panchromatic image pixel W0 to be converted on the second side to obtain the panchromatic image pixel to be converted W0 is converted into the pixel value of the first color image pixel A0.
  • the specific implementation method of judging whether the panchromatic image pixel W0 to be converted is in the flat area is the same as the specific implementation method of judging whether the panchromatic image pixel W0 to be converted is in the flat area shown in FIG. It should be noted that if the pixel W0 of the panchromatic image to be converted is not in the flat area, it means that the pixel W0 of the panchromatic image to be converted is in the non-flat region.
  • step 0205 further includes:
  • step 02051 can be implemented by the processor 20 . That is to say, the processor 20 is further configured to acquire gradient values in multiple directions of the panchromatic image pixel W0 to be converted, and select the direction corresponding to the smallest gradient value as the characteristic direction of the panchromatic image pixel W0.
  • the processor 20 obtains the gradient values of the panchromatic image pixels to be converted along the first direction H, the second direction V and the third direction E respectively, and selects the direction corresponding to the smallest gradient value as panchromatic Feature orientations of image pixels.
  • the first direction H includes the row direction H1 and the column direction H2; there is an included angle between the second direction V and the first direction H, and the second direction V is from the upper left corner of the first image to the lower right corner of the first image ;
  • the third direction E is perpendicular to the second direction V, and the third direction E is along the upper right corner of the first image to the lower left corner of the first image.
  • the processor 20 obtains the first gradient value g1 corresponding to the row direction H1, the second gradient value g2 corresponding to the column direction H2, the third gradient value g3 corresponding to the second direction V, and the third gradient value g3 corresponding to the third direction through calculation.
  • the characteristic direction is the first direction H
  • the first color image pixel A2 closest to the first direction H is the first color image pixel W0 to be converted
  • the pixel value of the first panchromatic image pixel W1 adjacent to the panchromatic image pixel W0 to be converted on the first side and the second panchromatic image pixel W0 adjacent to the panchromatic image pixel W0 to be converted on the second side are obtained.
  • Pixel values of the panchromatic image pixel W2 and the third panchromatic image pixel W3. Obtain the first The deviation L1 and the second deviation L2.
  • the pixel value of the pixel W2, W3' represents the pixel value of the third panchromatic image pixel W3. That is to say, first calculate the pixel value of the second panchromatic image pixel W2 and the pixel value of the third panchromatic image pixel W3, and then subtract the mean value from the pixel value of the panchromatic image pixel W0 to be converted and perform an absolute value operation. , to obtain the second deviation L2.
  • the processor 20 After obtaining the first deviation L1 and the second deviation L2, the processor 20 obtains the first weight F(L1) according to the first deviation L1 and the preset weight function F(x), and obtains the first weight F(L1) according to the second deviation L2 and the preset weight
  • the function F(x) obtains the second weight F(L2).
  • the preset weight function F(x) can be an exponential function, a logarithmic function or a power function, as long as the smaller the input value is, the larger the output weight is.
  • the weight function F mentioned below is sufficient. The same is true for (x), which will not be repeated here. For example, assuming that the first deviation L1 is greater than the second deviation L2, the first weight F(L1) is smaller than the second weight F(L2).
  • the processor 20 After obtaining the first weight F(L1) and the second weight F(L2), the processor 20 obtains the pixel value of the first color image pixel A2 that is most adjacent to the full-color image pixel W0 to be converted on the first side , and the pixel value of one first-color image pixel A3 adjacent to the full-color image pixel W0 to be converted on the second side.
  • the first weight F(L1), the second weight F(L2), the pixel value of the one first color image pixel A2 most adjacent to the panchromatic image pixel W0 to be converted on the first side, and the second side The pixel value of a first color image pixel A3 adjacent to the panchromatic image pixel W0 to be converted is obtained to obtain the pixel value of the panchromatic image pixel W0 to be converted into the first color image pixel A0.
  • the preset coefficient k can be adjusted according to the actual situation, and the preset coefficient k is 4 in this embodiment of the present application.
  • the first direction H includes the row direction H1 and the column direction H2
  • the characteristic direction is the row direction H1 in the first direction H
  • the first side of the full-color image pixel W0 to be converted The left side of the panchromatic image pixel W0 to be converted and the second side of the panchromatic image pixel W0 to be converted represent the right side of the panchromatic image pixel W0 to be converted.
  • the characteristic direction is the column direction H2 in the first direction H, please refer to FIG.
  • the first side of the to-be-converted panchromatic image pixel W0 represents the lower side of the to-be-converted panchromatic image pixel W0, the to-be-converted panchromatic image pixel W0
  • the second side of represents the upper side of the full-color image pixel W0 to be converted.
  • step 02 processes the full-color image pixels in the first image to convert them into first-color image pixels to obtain the second image and further includes:
  • the characteristic direction is the first direction and the pixel of the first color image that is closest to the characteristic direction is on the second side of the full-color image pixel to be converted, according to the pixel value of the full-color image pixel to be converted, The third deviation of the pixel value of a panchromatic image pixel adjacent to the panchromatic image pixel to be converted on the two sides, and according to the pixel value of the panchromatic image pixel to be converted, the difference between the first side and the panchromatic image pixel to be converted The pixel values of two adjacent panchromatic image pixels obtain a fourth deviation, and the first side is opposite to the second side;
  • the fourth weight According to the third weight, the fourth weight, the pixel value of a first color image pixel adjacent to the panchromatic image pixel to be converted on the first side, and the pixel value of the panchromatic image to be converted on the second side most closely
  • the pixel value of an adjacent first-color image pixel obtains the pixel value of the to-be-converted full-color image pixel converted into the first-color image pixel.
  • step 0209 , step 0210 and step 0211 can all be implemented by the processor 20 . That is to say, the processor 20 is further configured to: when the characteristic direction is the first direction H, and the most adjacent first color image pixel A4 in the characteristic direction is on the second side of the full-color image pixel W0 to be converted, according to The pixel value of the panchromatic image pixel W0 to be converted, the third deviation L3 of the pixel value of a panchromatic image pixel W adjacent to the panchromatic image pixel W0 to be converted on the second side, and according to the panchromatic image to be converted
  • the pixel value of the pixel W0, the pixel value of the two panchromatic image pixels W adjacent to the panchromatic image pixel W0 to be converted on the first side obtain a fourth deviation L4, and the first side is opposite to the second side;
  • the specific implementation method of judging whether the full-color image pixel W0 to be converted is in a flat area and acquiring the characteristic direction of the full-color image pixel to be converted is the same as the above-mentioned judging whether the full-color image pixel to be converted W0 is
  • the specific implementation methods for obtaining the characteristic directions of the pixels of the panchromatic image to be converted are the same in the flat area, and will not be repeated here.
  • the characteristic direction is the first direction H
  • the most adjacent first color image pixel A4 in the first direction H is on the second side of the full-color image pixel W0 to be converted
  • the processor 20 After obtaining the third deviation L3 and the fourth deviation L4, the processor 20 obtains the third weight F(L3) according to the third deviation L3 and the preset weight function F(x), and according to the fourth deviation L4 and the preset weight The function F(x) obtains the second weight F(L4). After obtaining the third weight F(L3) and the fourth weight F(L4), the processor 20 obtains the pixel value of a first color image pixel A4 that is most adjacent to the panchromatic image pixel W0 to be converted on the second side , and the pixel value of one first-color image pixel A5 adjacent to the full-color image pixel W0 to be converted on the first side.
  • the pixel value of one first color image pixel A4 most adjacent to the panchromatic image pixel W0 to be converted on the second side, and on the first side The pixel value of a first color image pixel A5 adjacent to the panchromatic image pixel W0 to be converted is obtained to obtain the pixel value of the panchromatic image pixel W0 to be converted into the first color image pixel A0.
  • the preset coefficient k can be adjusted according to the actual situation, and the preset coefficient k is 4 in this embodiment of the present application.
  • the first direction H includes the row direction H1 and the column direction H2
  • the characteristic direction is the row direction H1 in the first direction H
  • the first side of the full-color image pixel W0 to be converted The left side of the panchromatic image pixel W0 to be converted and the second side of the panchromatic image pixel W0 to be converted represent the right side of the panchromatic image pixel W0 to be converted.
  • the characteristic direction is the column direction H2 in the first direction H, please refer to FIG.
  • the first side of the to-be-converted panchromatic image pixel W0 represents the lower side of the to-be-converted panchromatic image pixel W0, the to-be-converted panchromatic image pixel W0
  • the second side of represents the upper side of the full-color image pixel W0 to be converted.
  • step 02 processing the full-color image pixels in the first image to convert them into first-color image pixels also includes:
  • step 0212 , step 0213 and step 0214 can all be implemented by the processor 20 . That is to say, the processor 20 is also used for: the characteristic direction is the second direction V, the second calculation window C2 is preset with the panchromatic image pixel W0 to be converted as the center, and the second direction V is the same as that of the first image.
  • the set third weight matrix N3 and the preset fourth weight matrix N4 are used to obtain the pixel value of the full-color image pixel W0 to be converted into the first color image pixel A0.
  • the specific implementation method of judging whether the full-color image pixel W0 to be converted is in a flat area and acquiring the characteristic direction of the full-color image pixel to be converted is the same as the above-mentioned judging whether the full-color image pixel to be converted W0 is
  • the specific implementation methods for obtaining the characteristic directions of the pixels of the panchromatic image to be converted are the same in the flat area, and will not be repeated here.
  • a second calculation window C2 centered on the panchromatic image pixel W0 to be converted is preset.
  • the specific method of presetting the second calculation window C2 is the same as the specific method of presetting the first calculation window C1, and details are not described here.
  • the processor 20 After the processor 20 presets the second calculation window C2 and obtains all the pixel values in the second calculation window C2, the processor 20 according to all the pixel values in the second calculation window C2, the preset third weight matrix N3 and the preset The fourth weight matrix N4 can obtain the third conversion value M3 and the fourth conversion value M4.
  • the pixel value of each image pixel in the second calculation window C2 is multiplied by the value of the corresponding position in the preset third weight matrix N3 to obtain multiple new pixel values, and the multiple new pixel values are added together. Then, divide by the sum of all the values in the preset third weight matrix N3 to obtain the third conversion value M3.
  • the pixel value of each image pixel in the second calculation window C2 is multiplied by the value of the corresponding position in the preset fourth weight matrix N4 to obtain multiple new pixel values, and the multiple new pixel values are added together. Then, divide by the sum of all the values in the preset fourth weight matrix N4 to obtain the fourth conversion value M4.
  • the processor 20 obtains the pixel value of the panchromatic image pixel W0 to be converted into the first color image pixel A0 according to the panchromatic image pixel W0 to be converted, the third conversion value M3 and the fourth conversion value M4.
  • the processor 20 obtains the preset third weight matrix N3 and the preset third weight matrix N3 according to the position information of the first color image pixel A1 closest to the panchromatic image pixel W0 to be converted. It is assumed that the fourth weight matrix N4, the third weight matrix N3 and the fourth weight matrix N4 are all matrices corresponding to the second calculation window C2. And when the first color image pixel A1 closest to the panchromatic image pixel W0 to be converted is at a different position, the preset third weight matrix N3 and the preset fourth weight matrix N4 are also different.
  • the processor 20 obtains the third weight according to the column coordinates of the first color image pixel A1 that is in the same row as the panchromatic image pixel W0 to be converted and is closest to the panchromatic image pixel W0 to be converted
  • the value matrix N3 and the fourth weight matrix N4 For example, if the column coordinate of the first color image pixel A1 is smaller than the column coordinate of the panchromatic image pixel W0 to be converted, as shown in FIG. 24 , the panchromatic image pixel W0 to be converted is placed in the third row of the first calculation window.
  • the preset third weight matrix Preset fourth weight matrix If the column coordinate of the first color image pixel A1 is greater than the column coordinate of the panchromatic image pixel W0 to be converted, as shown in FIG.
  • the panchromatic image pixel W0 to be converted is placed in the 3rd row and 3rd column of the first calculation window , the first color image pixel A1 closest to the panchromatic image pixel W0 to be converted is in the 3rd row and the 4th column of the first calculation window C1, that is, the closest first color image pixel A1 in the panchromatic image pixel W0 to be converted on the right.
  • the processor 20 may also use the row coordinates of the first color image pixel A1 that is in the same column as the panchromatic image pixel W0 to be converted and is closest to the panchromatic image pixel W0 to be converted to The first weight matrix N1 and the second weight matrix N2 are obtained, which are not limited here.
  • step 02 processing the full-color image pixels in the first image to convert them into first-color image pixels also includes:
  • step 0215 , step 0216 , step 0217 and step 0218 can all be implemented by the processor 20 . That is to say, the processor 20 is further configured to: when the characteristic direction is the third direction E, preset a third calculation window C3 centered on the panchromatic image pixel W0 to be converted, and the third direction E and the first The second direction V of the image is vertical; the pixel values of all pixels in the third calculation window C3 are obtained, and according to the pixel values of a plurality of panchromatic image pixels W around the first color image pixel A, to obtain the pixel values in the third calculation window C3 The converted pixel values of all the first color image pixels A within; the fifth weight is obtained according to the converted pixel values of the first color image pixels A, the pixel value of the panchromatic image pixel W0 to be converted, and the preset weight function F(x). value matrix N5; and obtain the pixel value of the full-color image pixel W0 to be converted into the first color image
  • the specific implementation method of judging whether the full-color image pixel W0 to be converted is in a flat area and acquiring the characteristic direction of the full-color image pixel to be converted is the same as the above-mentioned judging whether the full-color image pixel to be converted W0 is
  • the specific implementation methods for obtaining the characteristic directions of the pixels of the panchromatic image to be converted are the same in the flat area, and will not be repeated here.
  • a third calculation window C3 centered on the panchromatic image pixel W to be converted is preset.
  • the specific method of presetting the third calculation window C3 is the same as the specific method of presetting the first calculation window C1, and details are not described here.
  • the processor 20 After acquiring the pixel values of all the pixels in the third calculation window C3, the processor 20 obtains all the pixel values of all the pixels in the third calculation window C3 according to the pixel values of a plurality of panchromatic image pixels W around the pixel A of the first color image. Converted pixel value of pixel A of a color image. In some embodiments, the processor 20 calculates the average value of a plurality of panchromatic image pixels W around the first color image pixel A as the conversion value of the first color image pixel A. The following describes the calculation of the converted pixel value of the first color image pixel A located in the second row and the first column of the third window C3 as an example, and the calculation methods of other first color image pixels A are the same.
  • the converted pixel value of the first color image pixel A located in the second row and first column of the third window C3 is equal to the average value of the pixel values of the four adjacent panchromatic image pixels W, that is, it is equal to the second pixel value in the third window C3.
  • the panchromatic image pixel W at row 0 and column, the panchromatic image pixel W at the second row and second column of the third window C3, the panchromatic image pixel W at the first row and first column of the third window C3, and the panchromatic image pixel W at the first row and first column of the third window C3 The mean value of the pixel values of the panchromatic image pixels W in the first row and the third column of the three windows C3.
  • the processor 20 obtains the converted pixel values of the plurality of first-color image pixels A in the third window C3, according to the converted pixel values of the plurality of first-color image pixels A, the pixel values of the to-be-converted full-color image pixels W, and the predicted Set the weight function F(x) to obtain the fifth weight matrix N5.
  • the fifth weight matrix N5 is also a matrix with 7 rows and 7 columns.
  • the processor 20 arbitrarily extracts any image pixel in the third window C3, and if the extracted image pixel is the first color image pixel A, then the conversion value of the first color image pixel A is subtracted from the to-be-converted panchromatic image pixel W0. , to obtain the fifth deviation L5. Then obtain the fifth weight F(L5) according to the fifth deviation L5 and the preset weight function F(x), and fill the fifth weight F(L5) into the fifth weight matrix N5 and the extracted first color image The corresponding position of pixel A3.
  • the processor 20 After obtaining the fifth weight matrix N5, the processor 20 obtains the fifth conversion value M5 and the sixth conversion value M6 according to the conversion value of the first color image A, the fifth weight matrix N5 and the preset distance weight R .
  • the distance weight R satisfies that the closer the image pixel is to the center image pixel of the third window C3, the larger the weight value is.
  • a plurality of new pixel values are obtained after multiplying the converted values of the plurality of first color image pixels A in the third calculation window C3 by the numerical values of the corresponding positions in the fifth weight matrix N5, and the plurality of new pixel values are added together.
  • the sum of the values in the fifth weight matrix N5 is multiplied by the distance weight R to obtain the second calculated value, and then dividing the first calculated value by the second calculated value to obtain the fifth calculated value.
  • the pixel values of multiple pixels in the third calculation window C3 are multiplied by the numerical values of the corresponding positions in the fifth weight matrix N5 to obtain multiple new pixel values, and the multiple new pixel values are added and multiplied by the distance weight R.
  • the third calculation value is obtained, the summation of the values in the fifth weight matrix N5 is multiplied by the distance weight R to obtain the fourth calculation value, and the third calculation value is divided by the fourth calculation value to obtain the sixth conversion value M6.
  • the processor 20 obtains, according to the pixel value of the panchromatic image pixel W0 to be converted, the fifth conversion value M5 and the sixth conversion value M6, the pixel value after the panchromatic image pixel W0 to be converted is converted into the first color image pixel A0 .
  • the processor 20 arbitrarily captures an image pixel in the first image, and determines whether the captured image pixel is a full-color image pixel W, and if the captured image pixel is For the panchromatic image pixel W, the processor 20 performs the steps described in FIGS. 12 to 28 to obtain the pixel value of the panchromatic image pixel W0 to be converted into the first color image pixel A0; if the captured image pixel is not The panchromatic image pixel W directly captures another image pixel, and repeats the above steps until all image pixels in the first image are captured, so that the panchromatic image pixels in the first image can be processed to be converted into the first image pixel.
  • a color image pixel A color image pixel.
  • the processor 20 captures images in a certain order, for example, first captures the first image pixel in the upper left corner of the first image for judgment and processing, and then captures the right pixel of the image pixel. After the image pixels in the first row are all captured, the image pixels in the next row are captured, and the above steps are repeated until all the image pixels in the first image are captured.
  • the processor 20 processes all the full-color image pixels W in the first image to convert them into first-color image pixels A to obtain the second image. Only the first color image pixel A, the second color image pixel B and the third color image pixel C are included in the second image. The processor 20 processes the second color image pixels B and the third color image pixels C in the second image to convert them into first color image pixels A to obtain a third image. Only the plurality of first color image pixels A are included in the third image.
  • step 03 processes the second color image pixels and the third color image pixels to convert them into first color image pixels, so as to obtain the third image including:
  • step 031 may be implemented by the processor 20 . That is to say, the processor 20 is also used for: judging whether the second color image pixel B0 to be converted is in the flat area; when the second color image pixel B0 is in the flat area, according to the number of the second color image pixel B0 to be converted The pixel value of the adjacent first color image pixel A in the direction is obtained to obtain the pixel value of the second color image pixel B0 to be converted after being converted into the first color image pixel A0.
  • the specific method of judging whether the second color image pixel B0 to be converted is in the flat area is the same as the specific method of judging whether the full-color image pixel W0 to be converted is in the flat area in the above embodiment, and will not be repeated here.
  • the processor 20 obtains the pixel values of the first color image pixels A around the adjacent second color image pixel B0 to be converted, and compares the obtained pixel values
  • the average value of the pixel values of the plurality of first color image pixels A is taken as the pixel value after the second color image pixel B0 to be converted is converted into the first color image pixel A0.
  • the processor 20 obtains the pixel located in the second image.
  • the average value of the pixel value of the first color image pixel A and the pixel value of the first color image pixel A located in the 3rd row and 2nd column of the second image, and the average value is used as the converted first color image pixel A0. Pixel values.
  • step 03 processes the second color image pixels and the third color image pixels to convert them into first color image pixels to obtain the third image including:
  • both steps 033 and 033 can be implemented by the processor 20 . That is to say, the processor 20 is further configured to: determine whether the second color image pixel B0 to be converted is in a flat area; when the second color image pixel B0 is in a non-flat area, obtain the second color image pixel to be converted The characteristic direction of B0; and according to the pixel values of the two first color image pixels A adjacent to the second color image pixel B0 to be converted in the characteristic direction, to obtain the second color image pixel B0 to be converted and converted into The pixel value after pixel A0 of the first color image.
  • the specific method of judging whether the second color image pixel B0 to be converted is in the flat area is the same as the specific method of judging whether the full color image pixel W0 to be converted is in the flat area in the above-mentioned embodiment; obtaining the second color image to be converted
  • the specific method of the characteristic direction of the pixel B0 is also the same as the specific method of acquiring the characteristic direction of the full-color image pixel W0 to be converted in the above-mentioned embodiment, which is not repeated here.
  • the processor 20 when the pixel B0 of the second color image to be converted is in a flat area, after acquiring the characteristic direction of the pixel B0 of the second color image to be converted, the processor 20 obtains the characteristic direction of the pixel B0 to be converted.
  • the pixel values of the two first color image pixels A adjacent to the second color image pixel B0, and the obtained pixel values of the two first color image pixels A are averaged as the second color image to be converted.
  • Pixel B0 is converted to the pixel value of the first color image pixel A0.
  • the second image is composed of image pixels arranged in 5 rows and 5 columns
  • the second color image pixel B0 to be converted is in the 3rd row and 1st column of the second image
  • the processor 20 is located in the pixel value of the pixel A of the first color image in the 3rd row and 0th column of the second image, and the pixel value of the first color image pixel A located in the 3rd row and 2nd column of the second image.
  • the average value of the values is taken as the pixel value of the converted first color image pixel A0.
  • the processor 20 has the pixel value of the pixel A of the first color image located in the second row and the first column of the second image, and the pixel value of the pixel A located in the fourth row and the first column of the second image.
  • the average value of the pixel values of the first color image pixel A in 1 column is taken as the pixel value of the converted first color image pixel A0.
  • step 03 processes the second color image pixels and the third color image pixels to convert them into first color image pixels to obtain the third image including:
  • both steps 035 and 036 can be implemented by the processor 20 . That is to say, the processor 20 is also used for: judging whether the third color image pixel C0 to be converted is in the flat area; when the third color image pixel C0 is in the flat area, according to the number of the third color image pixel C0 to be converted. The pixel value of the first color image pixel A adjacent in the direction is obtained to obtain the pixel value of the third color image pixel C0 to be converted after being converted into the first color image pixel A.
  • step 03 processes the second color image pixel and the third color image pixel to be converted into the first color image pixel to obtain the third image including:
  • step 035 , step 037 and step 038 can all be implemented by the processor 20 . That is to say, the processor 20 is further configured to: obtain the characteristic direction of the third color image pixel C0 to be converted when the third color image pixel C0 to be converted is in a non-flat area; The pixel values of the two first color image pixels A adjacent to the third color image pixel C0 to be converted are obtained to obtain the pixel value of A0 after the third color image pixel C0 to be converted is converted into the first color image pixel.
  • the specific method for obtaining the pixel value after the conversion of the third color image pixel C0 to be converted into the first color image pixel A0 is the same as the above-mentioned method for obtaining the pixel value after the converted second color image pixel B0 is converted into the first color image pixel A0.
  • the specific method is the same and will not be repeated here.
  • the processor 20 after acquiring the second image, arbitrarily captures an image pixel in the second image, and determines whether the captured image pixel is the second color image pixel B or the third color image pixel C , if the captured image pixel is the second color image pixel B or the third color image pixel C, the processor 20 performs the steps described in FIGS. 30 to 34 to obtain the second color image pixel B0 or the third color image pixel to be converted.
  • the pixel value after the three-color image pixel C0 is converted into the first-color image pixel A0; if the captured image pixel is neither the second-color image pixel B nor the third-color image pixel C, then directly grab another image pixel , and repeat the above steps until all the image pixels in the first image are captured, so that the second color image pixel B or the third color image pixel C in the second image can be processed to be converted into the first color image pixel.
  • the processor 20 captures images in a certain order, for example, first captures the first image pixel in the upper left corner of the first image for judgment and processing, and then captures the right pixel of the image pixel. After the image pixels in the first row are all captured, the image pixels in the next row are captured, and the above steps are repeated until all the image pixels in the first image are captured.
  • the processor 20 processes the third image according to the first image to obtain the second color intermediate image and the third image Color intermediate image.
  • the second color intermediate image only includes the second color image pixel B
  • the third color intermediate image only includes the third color image pixel C.
  • the step 04 of processing the third image according to the first image to obtain the second color intermediate image and the third color intermediate image includes:
  • 041 Perform bilateral filtering processing according to the first image and the third image to obtain a second color intermediate image and a third color intermediate image.
  • step 041 can be implemented by the processor 20 . That is to say, the processor 20 is further configured to perform bilateral filtering processing according to the first image and the third image to obtain the second color intermediate image and the third color intermediate image.
  • the first image includes multiple second-color image pixels B and multiple third-color image pixels C, multiple second-color image pixels B are arranged to form the second-color original image, multiple third-color image pixels B are arranged The color image pixels C are arranged to form a third color original image.
  • the second color original image and the third image are subjected to bilateral filtering processing to obtain a second color intermediate image; the third color original image and the third image are subjected to bilateral filtering processing to obtain a third color intermediate image.
  • the joint bilateral filtering algorithm determines the first distance weight (f(
  • the coordinate difference between the point and point q may be 2, and the second distance weight ( g (
  • the position of the pixel of the image of the second color is not set, and its pixel value is 0.
  • the position corresponding to the pixel point p to be filtered in the second color intermediate image set by the output pixel value J p and after completing one output, the filter window moves to the next image pixel position until all image pixels in the second color original image are The filtering is completed, so that a second-color intermediate image containing only the second-color image pixels is obtained.
  • the specific method for performing bilateral filtering processing on the third color original image and the third image to obtain the third color intermediate image is the same as the specific method for performing bilateral filtering processing on the second color original image and the third image to obtain the second color intermediate image, I won't go into details here.
  • the processor 20 fuses the third image, the second color intermediate image and the third color intermediate image to obtain the target image .
  • the position information of the image pixels that are not set on the second color intermediate image and the third color intermediate image is obtained, the first color image pixel A at the corresponding position of the third image is extracted according to the position information, and then a plurality of extracted first color image pixels A are extracted.
  • the color image pixel A, a plurality of second color image pixels B in the second color intermediate image, and a plurality of third color image pixels C in the third color intermediate image are arranged to obtain a target image arranged in a Bayer array.
  • the image processing method of the embodiment of the present application adds a panchromatic photosensitive pixel W in the pixel array 11, interpolates and converts the panchromatic image pixel W into a color image pixel with a wider spectral response to obtain a second image, and then processes the second image. Subsequent processing is performed to obtain target images arranged in a Bayer array.
  • the problem that the image processor cannot directly process the image in which the image pixels are arranged in a non-Bayer array is solved, and at the same time, since the full-color photosensitive pixels W are added to the pixel array 11, the analytical ability and reliability of the finally obtained image can be improved. Noise ratio, which can improve the photo effect at night.
  • the present application further provides an electronic device 1000 .
  • the electronic device 1000 according to the embodiment of the present application includes a lens 300 , a housing 200 , and the image processing system 100 according to any one of the foregoing embodiments.
  • the lens 300 and the image processing system 100 are combined with the casing 200 .
  • the lens 300 cooperates with the image sensor 10 of the image processing system 100 for imaging.
  • the electronic device 1000 may be a mobile phone, a tablet computer, a notebook computer, a smart wearable device (eg, a smart watch, a smart bracelet, a smart glasses, a smart helmet), a drone, a head-mounted display device, etc., which are not limited herein.
  • a smart wearable device eg, a smart watch, a smart bracelet, a smart glasses, a smart helmet
  • a drone e.g., a head-mounted display device, etc., which are not limited herein.
  • the electronic device 1000 of the embodiment of the present application adds panchromatic photosensitive pixels W to the pixel array 11 in the image processing system 100, and interpolates the panchromatic image pixels W into color image pixels with wider spectral responses to obtain a second image, The second image is then subjected to subsequent processing to obtain a target image arranged in a Bayer array.
  • the problem that the image processor 10 cannot directly process the image in which the image pixels are arranged in a non-Bayer array is solved, and at the same time, since the full-color photosensitive pixels W are added to the pixel array 11, the resolution capability of the finally obtained image can be improved. Signal-to-noise ratio, which can improve the effect of taking pictures at night.
  • the present application also provides a non-volatile computer-readable storage medium 400 containing a computer program.
  • the processor 60 executes the image processing method of any one of the above embodiments.
  • the processor 60 when the computer program is executed by the processor 60, the processor 60 is caused to perform the following steps:
  • a first image is obtained by exposing the pixel array, and the first image includes full-color image pixels generated by full-color photosensitive pixels, first-color image pixels generated by first-color photosensitive pixels, and second color image pixels generated by second-color photosensitive pixels. Color image pixels and third color image pixels generated by third color photosensitive pixels;
  • processor 60 may be the same processor as the processor 20 disposed in the image processor 100, and the processor 60 may also be disposed in the device 1000, that is, the processor 60 may also be disposed in the image processor
  • the processors 20 in 100 are not the same processor, which is not limited here.
  • any description of a process or method in the flowcharts or otherwise described herein may be understood to represent a module, segment or portion of code comprising one or more executable instructions for implementing a specified logical function or step of the process , and the scope of the preferred embodiments of the present application includes alternative implementations in which the functions may be performed out of the order shown or discussed, including performing the functions substantially concurrently or in the reverse order depending upon the functions involved, which should It is understood by those skilled in the art to which the embodiments of the present application belong.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)
  • Color Television Image Signal Generators (AREA)

Abstract

一种图像处理方法、图像处理系统(100)、电子设备(1000)及可读存储介质(400)。图像处理方法包括:(01)像素阵列曝光得第一图像;(02)对第一图像中的全色图像像素转换成第一颜色图像像素,以获得第二图像;(03)对第二图像中的第二颜色图像像素及第三颜色图像像素转换成第一颜色图像像素,以获得第三图像;(04)根据第一图像对第三图像进行处理,以获得第二颜色中间图像及第三颜色中间图像;及(05)融合第三图像、第二颜色中间图像及第三颜色中间图像以获得目标图像。

Description

图像处理方法、图像处理系统、电子设备及可读存储介质
优先权信息
本申请请求2020年8月18日向中国国家知识产权局提交的、专利申请号为202010833968.8的专利申请的优先权和权益,并且通过参照将其全文并入此处。
技术领域
本申请涉及图像处理技术领域,特别涉及一种图像处理方法、图像处理系统、电子设备及可读存储介质。
背景技术
手机等电子设备中可以设置有摄像头以实现拍照功能。摄像头内可以设置用于接收光线的图像传感器。图像传感器中可以设置有滤光片阵列。
发明内容
本申请实施方式提供了一种图像处理方法、图像处理系统、电子设备及计算机可读存储介质。
本申请实施方式提高一种用于图像传感器的图像处理方法。所述图像传感器包括像素阵列,所述像素阵列包括多个全色感光像素及多个彩色感光像素。所述彩色感光像素包括具有不同光谱响应的第一颜色感光像素、第二颜色感光像素及第三颜色感光像素,所述彩色感光像素具有比所述全色感光像素更窄的光谱响应,且所述第二颜色感光像素及第三颜色感光像素均具有比第一颜色感光像素更窄的光谱响应。所述图像处理方法包括:所述像素阵列曝光得到第一图像,所述第一图像包括由所述全色感光像素生成的全色图像像素、由第一颜色感光像素生成的第一颜色图像像素、由第二颜色感光像素生成的第二颜色图像像素及由第三颜色感光像素生成的第三颜色图像像素;对所述第一图像中的所述全色图像像素进行处理以转换成第一颜色图像像素,以获得第二图像;对所述第二图像中的所述第二颜色图像像素及所述第三颜色图像像素成进行处理以转换成第一颜色图像像素,以获得第三图像;根据所述第一图像对所述第三图像进行处理,以获得第二颜色中间图像及第三颜色中间图像,所述第二颜色中间图像包含所述第二颜色图像像素,第三颜色中间图像包含所述第三颜色图像像素;及融合所述第三图像、所述第二颜色中间图像及所述第三颜色中间图像以获得目标图像,所述目标图像包含多个彩色图像像素,多个所述彩色图像像素呈拜耳阵列排布。
本申请实施方式提高一种图像处理系统。所述图像处理系统包括图像传感器及处理器。所述图像传感器包括像素阵列,所述像素阵列包括多个全色感光像素及多个彩色感光像素。所述彩色感光像素包括具有不同光谱响应的第一颜色感光像素、第二颜色感光像素及第三颜色感光像素,所述彩色感光像素具有比所述全色感光像素更窄的光谱响应,且所述第二颜色感光像素及第三颜色感光像素均具有比第一颜色感光像素更窄的光谱响应。所述像素阵列曝光得到第一图像,所述第一图像包括由所述全色感光像素生成的全色图像像素、由第一颜色感光像素生成的第一颜色图像像素、由第二颜色感光像素生成的第二颜色图像像素及由第三颜色感光像素生成的第三颜色图像像素。所述处理器用于对所述第一图像中的所述全色图像像素进行处理以转换成第一颜色图像像素,以获得第二图像;对所述第二图像中的所述第二颜色图像像素及所述第三颜色图像像素成进行处理以转换成第一颜色图像像素,以获得第三图像;根据所述第一图像对所述第三图像进行处理,以获得第二颜色中间图像及第三颜色中间图像,所述第二颜色中间图像包含所述第二颜色图像像素,第三颜色中间图像包含所述第三颜色图像像素;及融合所述第三图像、所述第二颜色中间图像及所述第三颜色中间图像以获得目标图像,所述目标图像包含多个彩色图像像素,多个所述彩色图像像素呈拜耳阵列排布。
本申请实施方式提供一种电子设备。所述电子设备包括镜头、壳体及上述的图像处理系统。所述镜头、所述图像处理系统与所述壳体结合,所述镜头与所述图像处理系统的图像传感器配合成像。
本申请实施方式提供一种包含计算机程序的非易失性计算机可读存储介质。所述计算机程序被处理器执行时,使得所述处理器执行上述的图像处理方法。
本申请实施方式的附加方面和优点将在下面的描述中部分给出,部分将从下面的描述中变得明显,或通过本申请的实践了解到。
附图说明
本申请的上述和/或附加的方面和优点可以从结合下面附图对实施方式的描述中将变得明显和容易理解,其中:
图1是本申请实施方式的一种图像处理方法的流程示意图;
图2是本申请实施方式的一种图像处理系统的结构示意图;
图3是本申请实施方式的一种像素阵列的示意图;
图4是本申请实施方式的一种感光像素的截面示意图;
图5是本申请实施方式的一种感光像素的像素电路图;
图6是本申请实施方式的一种像素阵列中最小重复单元的排布示意图;
图7是本申请实施方式的又像素阵列中最小重复单元的排布示意图;
图8是本申请实施方式的又一种像素阵列中最小重复单元的排布示意图;
图9是本申请实施方式的又一种像素阵列中最小重复单元的排布示意图;
图10是本申请实施方式的又一种像素阵列中最小重复单元的排布示意图;
图11是本申请实施方式的又一种像素阵列中最小重复单元的排布示意图;
图12是本申请实施方式的又一种图像处理方法的流程示意图;
图13是本申请实施方式的一种将全色图像像素转换为第一颜色图像像素示意图;
图14是本申请实施方式的又一种将全色图像像素转换为第一颜色图像像素示意图;
图15是本申请实施方式的又一种图像处理方法的流程示意图;
图16是本申请实施方式的又一种图像处理方法的流程示意图;
图17是本申请实施方式的一种获取特征方向的示意图;
图18是本申请实施方式的又一种将全色图像像素转换为第一颜色图像像素示意图;
图19是本申请实施方式的又一种将全色图像像素转换为第一颜色图像像素示意图;
图20是本申请实施方式的又一种图像处理方法的流程示意图;
图21是本申请实施方式的又一种将全色图像像素转换为第一颜色图像像素示意图;
图22是本申请实施方式的又一种将全色图像像素转换为第一颜色图像像素示意图;
图23是本申请实施方式的又一种图像处理方法的流程示意图;
图24是本申请实施方式的又一种将全色图像像素转换为第一颜色图像像素示意图;
图25是本申请实施方式的又一种将全色图像像素转换为第一颜色图像像素示意图;
图26是本申请实施方式的又一种图像处理方法的流程示意图;
图27是本申请实施方式的又一种将全色图像像素转换为第一颜色图像像素示意图;
图28是本申请实施方式的一种第五权值矩阵示意图;
图29是本申请实施方式的一种第一图像转换为第三图像的示意图;
图30是本申请实施方式的又一种图像处理方法的流程示意图;
图31是本申请实施方式的又一种将第二颜色图像像素转换为第一颜色图像像素示意图;
图32是本申请实施方式的又一种图像处理方法的流程示意图;
图33是本申请实施方式的又一种图像处理方法的流程示意图;
图34是本申请实施方式的又一种图像处理方法的流程示意图;
图35是本申请实施方式的一种根据第一图像及第三图像获取第二颜色中间图像及第三颜色中间图像示意图;
图36是本申请实施方式的又一种图像处理方法的流程示意图;
图37至图38是本申请实施方式的又一种根据第一图像及第三图像获取第二颜色中间图像及第三颜色中间图像示意图;
图39是本申请实施方式的一种第三图像、第二颜色中间图像及第三颜色中间图像融合示意图;
图40本申请实施方式的一种电子设备的结构示意图;
图41是本申请实施方式的一种非易失性计算机可读存储介质与处理器的交互示意图。
具体实施方式
下面详细描述本申请的实施方式,所述实施方式的示例在附图中示出,其中,相同或类似的标号自始至终表示相同或类似的元件或具有相同或类似功能的元件。下面通过参考附图描述的实施方式是示例性的,仅用于解释本申请的实施方式,而不能理解为对本申请的实施方式的限制。
请参阅图1及图3,本申请提供一种用于图像传感器10的图像处理方法。图像传感器10包括像素阵列11,像素阵列11包括多个全色感光像素W和多个彩色感光像素。彩色感光像素包括具有不同光谱响应的第一颜色感光像素A、第二颜色感光像素B及第三颜色感光像素C,彩色感光像素具有比全色感光像素W更窄的光谱响应,且第二颜色感光像素B及第三颜色感光像素C均具有比第一颜色感光像素A更窄的光谱响应。图像处理方法包括:
01:像素阵列11曝光得到第一图像,第一图像包括由全色感光像素生成的全色图像像素、由第一颜色感光像素生成的第一颜色图像像素、由第二颜色感光像素生成的第二颜色图像像素及由第三颜色感光像素生成的第三颜色图像像素;
02:对第一图像中的全色图像像素进行处理以转换成第一颜色图像像素,以获得第二图像;
03:对第二图像中的第二颜色图像像素及第三颜色图像像素成进行处理以转换成第一颜色图像像素,以获得第三图像;
04:根据第一图像对第三图像进行处理,以获得第二颜色中间图像及第三颜色中间图像,第二颜色中间图像包含第二颜色图像像素,第三颜色中间图像包含第三颜色图像像素:及
05:融合第三图像、第二颜色中间图像及第三颜色中间图像以获得目标图像,目标图像包含多个彩色图像像素,多个彩色图像像素呈拜耳阵列排布。
请参阅图1及图12,在某些实施方式中,步骤02:对第一图像中的全色图像像素进行处理以转换成第一颜色图像像素,以获得第二图像,包括:
0202:在全色图像像素在平坦区时,预设以待转换的全色图像像素为中心的第一计算窗口;
0203:获取第一计算窗口内所有像素的像素值;及
0204:根据第一计算窗口内所有像素的像素值、待转换的全色图像像素的像素值、预设的第一权值矩阵、预设的第二权值矩阵以获取待转换的全色图像像素转换成第一颜色图像像素后的像素值。
请参阅图1及图15,在某些实施方式中,步骤02:第一图像中的全色图像像素进行处理以转换成第一颜色图像像素,以获得第二图像,包括:
0205:在全色图像像素在非平坦区时,获取待转换的全色图像像素的特征方向;
0206:在特征方向为第一方向,且在特征方向上最邻近的第一颜色图像像素在待转换的全色图像像素第一侧时,根据待转换的全色图像像素的像素值、在第一侧与待转换的全色图像像素相邻的一个全色图像像素的像素值获取第一偏差,并根据待转换的全色图像像素的像素值、在第二侧与待转换的全色图像像素相邻的两个全色图像像素的像素值获取第二偏差,第一侧与第二侧相背;
0207:根据第一偏差及预设的权重函数获取第一权重,并根据第二偏差及权重函数获取第二权重;
0208:根据第一权重、第二权重、在第一侧与待转换的全色图像像素最相邻的一个第一颜色图像像素的像素值、在第二侧与待转换的全色图像像素相邻的一个第一颜色图像像素的像素值,获取待转换的全色图像像素转换成第一颜色图像像素后的像素值。
请参阅图1及图20,在某些实施方式中,步骤02:对第一图像中的全色图像像素进行处理以转换成第一颜色图像像素,以获得第二图像,包括:
0205:在全色图像像素在非平坦区时,获取待转换的全色图像像素的特征方向;
0209:在特征方向为第一方向,且在特征方向上最邻近的第一颜色图像像素在待转换的全色图像像素第二侧时,根据待转换的全色图像像素的像素值、在第二侧与待转换的全色图像像素相邻的一个全色图像像素的像素值第三偏差,并根据待转换的全色图像像素的像素值、在第一侧与待转换的全色图像像素相邻的两个全色图像像素的像素值获取第四偏差,第一侧与第二侧相背;
0210:根据第三偏差及预设的权重函数获取第三权重,并根据第四偏差及权重函数获取第四权重;
0211:根据第三权重、第四权重、在第一侧与待转换的全色图像像素相邻的一个第一颜色图像像素的像素值、在第二侧与待转换的全色图像像素最相邻的一个第一颜色图像像素的像素值获取待转换的全色图像像素转换成第一颜色图像像素后的像素值。
请参阅图1及图23,在某些实施方式中,步骤02:对第一图像中的全色图像像素进行处理以转换成第一颜色图像像素,包括:
0205:在全色图像像素在非平坦区时,获取待转换的全色图像像素的特征方向;
0212:在特征方向为第二方向,预设以待转换的全色图像像素为中心的第二计算窗口;
0213:获取第二计算窗口内所有像素的像素值;及
0214:根据第二计算窗口内所有像素的像素值、待转化的全色图像像素的像素值、预设的第三权值矩阵、预设的第四权值矩阵以获取待转换的全色图像像素转换成第一颜色图像像素后的像素值。
请参阅图1及图26,在某些实施方式中,步骤02:对第一图像中的全色图像像素进行处理以转换成第一颜色图像像素,包括:
0205:在全色图像像素在非平坦区时,获取待转换的全色图像像素的特征方向;
0215:在特征方向为第三方向,预设以待转换的全色图像像素为中心的第三计算窗口;
0216:获取第三计算窗口内所有像素的像素值,并根据第一颜色图像像素周围的多个全色图像像素的像素值,以获取在第三计算窗口内所有第一颜色图像像素的转换像素值;
0217:根据第一颜色图像像素的转换像素值、待转换的全色图像像素的像素值及预设的权重函数获 取第五权值矩阵;及
0218:根据第一颜色图像像素的转换像素值、第五权值矩阵及距离权重获取待转换的全色图像像素转换成第一颜色图像像素后的像素值。
请参阅图16,在某些实施方式中,步骤在全色图像像素在非平坦区时,获取待转换的全色图像像素的特征方向包括:
02051:获取待转换的全色图像像素的多个方向上的梯度值,选择最小的梯度值对应的方向为全色图像像素的特征方向。
在某些实施方式中,步骤对第二图像中的第二颜色图像像素及第三颜色图像像素成进行处理以转换成第一颜色图像像素,以获得第三图像包括:在第二颜色图像像素在平坦区时,根据待转换的第二颜色图像像素多个方向上相邻的第一颜色图像像素的像素值,以获取待转换的第二颜色图像像素转换成第一颜色图像像素后的像素值;和/或在第三颜色图像像素在平坦区时,根据待转换的第三颜色图像像素多个方向上相邻的第一颜色图像像素的像素值,以获取待转换的第三颜色图像像素转换成第一颜色图像像素后的像素值。
请参阅图1及图32,在某些实施方式中,步骤03:对第二图像中的第二颜色图像像素及第三颜色图像像素成进行处理以转换成第一颜色图像像素,以获得第三图像包括:
033:在第二颜色图像像素在非平坦区时,获取待转换的第二颜色图像素的特征方向;及
034:根据在特征方向上与待转换的第二颜色图像素邻近的两个第一颜色图像像素的像素值,以获取待转换的第二颜色图像像素转换成第一颜色图像像素后的像素值。
请参阅图1及图33,在某些实施方式中,步骤03:对第二图像中的第二颜色图像像素及第三颜色图像像素成进行处理以转换成第一颜色图像像素,以获得第三图像包括:
037:在第三颜色图像像素在非平坦区时,获取待转换的第三颜色图像素的特征方向;及
038:根据在特征方向上与待转换的第三颜色图像素邻近的两个第一颜色图像像素的像素值,以获取待转换的第三颜色图像像素转换成第一颜色图像像素后的像素值。
在某些实施方式中,步骤在第二颜色图像像素在非平坦区时,获取待转换的第二颜色图像素的特征方向包括获取待转换的第二颜色图像像素的多个方向上的梯度值,选择最小的梯度值对应的方向为第二颜色图像像素的特征方向。步骤在在第三颜色图像像素在非平坦区时,获取待转换的第三颜色图像素的特征方向包括获取待转换的第三颜色图像像素的多个方向上的梯度值,选择最小的梯度值对应的方向为第三颜色图像像素的特征方向。
请参阅图36,在某些实施方式中,步骤04:对根据第一图像对第三图像进行处理,以获得第二颜色中间图像及第三颜色中间图像包括:
041:根据第一图像对第三图像进行双边滤波处理,以获得第二颜色中间图像及第三颜色中间图像。
请结合图1及图2,本申请还提供一种图像处理系统100。图像处理系统100包括图像传感器10及处理器20。图像传感器10包括像素阵列11(图3所示),像素阵列11包括多个全色感光像素W和多个彩色感光像素。彩色感光像素包括具有不同光谱响应的第一颜色感光像素A、第二颜色感光像素B及第三颜色感光像素C,彩色感光像素具有比全色感光像素W更窄的光谱响应,且第二颜色感光像素B及第三颜色感光像素C均具有比第一颜色感光像素A更窄的光谱响应。像素阵列11曝光得到第一图像,第一图像包括由全色感光像素生成的全色图像像素、由第一颜色感光像素生成的第一颜色图像像素、由第二颜色感光像素生成的第二颜色图像像素及由第三颜色感光像素生成的第三颜色图像像素。处理器20用于:对第一图像中的全色图像像素进行处理以转换成第一颜色图像像素,以获得第二图像;对第二图像中的第二颜色图像像素及第三颜色图像像素成进行处理以转换成第一颜色图像像素,以获得第三图像;根据第一图像对第三图像进行处理,以获得第二颜色中间图像及第三颜色中间图像,第二颜色中间图像包含第二颜色图像像素,第三颜色中间图像包含第三颜色图像像素;及融合第三图像、第二颜色中间图像及第三颜色中间图像以获得目标图像,目标图像包含多个彩色图像像素,多个彩色图像像素呈拜耳阵列排布。
请参阅图2及图12,在某些实施方式中,处理器20还用于在全色图像像素在平坦区时,预设以待转换的全色图像像素为中心的第一计算窗口;获取第一计算窗口内所有像素的像素值;及根据第一计算窗口内所有像素的像素值、待转换的全色图像像素的像素值、预设的第一权值矩阵、预设的第二权值矩阵以获取待转换的全色图像像素转换成第一颜色图像像素后的像素值。
请参阅图2及图15,在某些实施方式中,处理器20还用于在全色图像像素在非平坦区时,获取待转换的全色图像像素的特征方向;在特征方向为第一方向,且在特征方向上最邻近的第一颜色图像像素在待转换的全色图像像素第一侧时,根据待转换的全色图像像素的像素值、在第一侧与待转换的全色图像像素相邻的一个全色图像像素的像素值获取第一偏差,并根据待转换的全色图像像素的像素值、在第 二侧与待转换的全色图像像素相邻的两个全色图像像素的像素值获取第二偏差,第一侧与第二侧相背;根据第一偏差及预设的权重函数获取第一权重,并根据第二偏差及权重函数获取第二权重;及根据第一权重、第二权重、在第一侧与待转换的全色图像像素最相邻的一个第一颜色图像像素的像素值、在第二侧与待转换的全色图像像素相邻的一个第一颜色图像像素的像素值,获取待转换的全色图像像素转换成第一颜色图像像素后的像素值。
请参阅图2及图20,在某些实施方式中,处理器20还用于在全色图像像素在非平坦区时,获取待转换的全色图像像素的特征方向;在特征方向为第一方向,且在特征方向上最邻近的第一颜色图像像素在待转换的全色图像像素第二侧时,根据待转换的全色图像像素的像素值、在第二侧与待转换的全色图像像素相邻的一个全色图像像素的像素值第三偏差,并根据待转换的全色图像像素的像素值、在第一侧与待转换的全色图像像素相邻的两个全色图像像素的像素值获取第四偏差,第一侧与第二侧相背;根据第三偏差及预设的权重函数获取第三权重,并根据第四偏差及权重函数获取第四权重;及根据第三权重、第四权重、在第一侧与待转换的全色图像像素相邻的一个第一颜色图像像素的像素值、在第二侧与待转换的全色图像像素最相邻的一个第一颜色图像像素的像素值获取待转换的全色图像像素转换成第一颜色图像像素后的像素值。
请参阅图2及图23,在某些实施方式中,处理器20还用于在全色图像像素在非平坦区时,获取待转换的全色图像像素的特征方向;在特征方向为第二方向,预设以待转换的全色图像像素为中心的第二计算窗口;获取第二计算窗口内所有像素的像素值;及根据第二计算窗口内所有像素的像素值、待转化的全色图像像素的像素值、预设的第三权值矩阵、预设的第四权值矩阵以获取待转换的全色图像像素转换成第一颜色图像像素后的像素值。
请参阅图2及图26,在某些实施方式中,处理器20还用于在全色图像像素在非平坦区时,获取待转换的全色图像像素的特征方向;在特征方向为第三方向,预设以待转换的全色图像像素为中心的第三计算窗口;获取第三计算窗口内所有像素的像素值,并根据第一颜色图像像素周围的多个全色图像像素的像素值,以获取在第三计算窗口内所有第一颜色图像像素的转换像素值;根据第一颜色图像像素的转换像素值、待转换的全色图像像素的像素值及预设的权重函数获取第五权值矩阵;及根据第一颜色图像像素的转换像素值、第五权值矩阵及距离权重获取待转换的全色图像像素转换成第一颜色图像像素后的像素值。
请参阅图2及图16,在某些实施方式中,处理器20还用于获取待转换的全色图像像素的多个方向上的梯度值,选择最小的梯度值对应的方向为全色图像像素的特征方向。
在某些实施方式中,处理器20还用于在第二颜色图像像素在平坦区时,根据待转换的第二颜色图像像素多个方向上相邻的第一颜色图像像素的像素值,以获取待转换的第二颜色图像像素转换成第一颜色图像像素后的像素值;和/或在第三颜色图像像素在平坦区时,根据待转换的第三颜色图像像素多个方向上相邻的第一颜色图像像素的像素值,以获取待转换的第三颜色图像像素转换成第一颜色图像像素后的像素值。
请参阅图2及图32,在某些实施方式中,处理器20还用于在第二颜色图像像素在非平坦区时,获取待转换的第二颜色图像素的特征方向;及根据在特征方向上与待转换的第二颜色图像素邻近的两个第一颜色图像像素的像素值,以获取待转换的第二颜色图像像素转换成第一颜色图像像素后的像素值。
请参阅图2及图33,在某些实施方式中,处理器20还用于在第三颜色图像像素在非平坦区时,获取待转换的第三颜色图像素的特征方向;及根据在特征方向上与待转换的第三颜色图像素邻近的两个第一颜色图像像素的像素值,以获取待转换的第三颜色图像像素转换成第一颜色图像像素后的像素值。
在某些实施方式中,处理器20还用于获取待转换的第二颜色图像像素的多个方向上的梯度值,选择最小的梯度值对应的方向为第二颜色图像像素的特征方向;获取待转换的第三颜色图像像素的多个方向上的梯度值,选择最小的梯度值对应的方向为第三颜色图像像素的特征方向。
请参阅图2图36,在某些实施方式中,处理器20还用于根据第一图像对第三图像进行双边滤波处理,以获得第二颜色中间图像及第三颜色中间图像。
请参阅图40,本申请还提供一种电子设备1000。本申请实施方式的电子设备1000包括镜头300、壳体200及上述任意一项实施方式的图像处理系统100。镜头300、图像处理系统100与壳体200结合。镜头300与图像处理系统100的图像传感器10配合成像。
请参阅41,本申请还提供一种包含计算机程序的非易失性计算机可读存储介质400。该计算机程序被处理器60执行时,使得处理器60执行上述任意一个实施方式的图像处理方法。
请参阅图1及图3,本申请提供一种用于图像传感器10的图像处理方法。图像传感器10包括像素阵列11,像素阵列11包括多个全色感光像素W和多个彩色感光像素。彩色感光像素包括具有不同光谱响应的第一颜色感光像素A、第二颜色感光像素B及第三颜色感光像素C,彩色感光像素具有比全色感 光像素W更窄的光谱响应,且第二颜色感光像素B及第三颜色感光像素C均具有比第一颜色感光像素A更窄的光谱响应。图像处理方法包括:
01:像素阵列11曝光得到第一图像,第一图像包括由全色感光像素生成的全色图像像素、由第一颜色感光像素生成的第一颜色图像像素、由第二颜色感光像素生成的第二颜色图像像素及由第三颜色感光像素生成的第三颜色图像像素;
02:对第一图像中的全色图像像素进行处理以转换成第一颜色图像像素,以获得第二图像;
03:对第二图像中的第二颜色图像像素及第三颜色图像像素成进行处理以转换成第一颜色图像像素,以获得第三图像;
04:根据第一图像对第三图像进行处理,以获得第二颜色中间图像及第三颜色中间图像,第二颜色中间图像包含第二颜色图像像素,第三颜色中间图像包含第三颜色图像像素:及
05:融合第三图像、第二颜色中间图像及第三颜色中间图像以获得目标图像,目标图像包含多个彩色图像像素,多个彩色图像像素呈拜耳阵列排布。
请结合图1及图2,本申请还提供一种图像处理系统100。图像处理系统100包括图像传感器10及处理器20。图像传感器10包括像素阵列11(图3所示),像素阵列11包括多个全色感光像素W和多个彩色感光像素。彩色感光像素包括具有不同光谱响应的第一颜色感光像素A、第二颜色感光像素B及第三颜色感光像素C,彩色感光像素具有比全色感光像素W更窄的光谱响应,且第二颜色感光像素B及第三颜色感光像素C均具有比第一颜色感光像素A更窄的光谱响应。步骤01由图像传感器10实现,步骤02、步骤03、步骤04及步骤05均由处理器20实现。也即是说,图像处理器10用于控制像素阵列11曝光得到第一图像,第一图像包括由全色感光像素W生成的全色图像像素、由第一颜色感光像素A生成的第一颜色图像像素A、由第二颜色感光像素B生成的第二颜色图像像素B及由第三颜色感光像素C生成的第三颜色图像像素C,处理器20用于对第一图像中的全色图像像素W进行处理以转换成第一颜色图像像素A,以获得第二图像;对第二图像中的第二颜色图像像素B及第三颜色图像像素C成进行处理以转换成第一颜色图像像素A,以获得第三图像;根据第一图像对第三图像进行处理,以获得第二颜色中间图像及第三颜色中间图像,第二颜色中间图像包含第二颜色图像像素B,第三颜色中间图像包含第三颜色图像像素C:及融合第三图像、第二颜色中间图像及第三颜色中间图像以获得目标图像,目标图像包含多个彩色图像像素,多个彩色图像像素呈拜耳阵列排布。
本申请实施方式的图像处理方法、图像处理系统100、电子设备1000及计算机可读存储介质400通过在像素阵列11中增加全色感光像素W,将全色图像像素W插值转换为光谱响应较宽的彩色图像像素,以获得第二图像,再对第二图像进行后续处理以获得呈拜耳阵列排布的目标图像。如此,解决了图像处理器不能直接对图像像素呈非拜耳阵列排布的图像进行处理的问题,同时由于像素阵列11中增加了全色感光像素W,能够提高最终获得的图像的解析能力及信噪比,从而可以提升夜晚下的拍照效果。
图3是本申请实施方式中的图像传感器10的示意图。图像传感器10包括像素阵列11、垂直驱动单元12、控制单元13、列处理单元14和水平驱动单元15。
例如,图像传感器10可以采用互补金属氧化物半导体(CMOS,Complementary Metal Oxide Semiconductor)感光元件或者电荷耦合元件(CCD,Charge-coupled Device)感光元件。
例如,像素阵列11包括以阵列形式二维排列(即二维矩阵形式排布)的多个感光像素110(图4所示),每个感光像素110包括光电转换元件1111(图5所示)。每个感光像素110根据入射在其上的光的强度将光转换为电荷。
例如,垂直驱动单元12包括移位寄存器和地址译码器。垂直驱动单元12包括读出扫描和复位扫描功能。读出扫描是指顺序地逐行扫描单位感光像素110,从这些单位感光像素110逐行地读取信号。例如,被选择并被扫描的感光像素行中的每一感光像素110输出的信号被传输到列处理单元14。复位扫描用于复位电荷,光电转换元件的光电荷被丢弃,从而可以开始新的光电荷的积累。
例如,由列处理单元14执行的信号处理是相关双采样(CDS)处理。在CDS处理中,取出从所选感光像素行中的每一感光像素110输出的复位电平和信号电平,并且计算电平差。因而,获得了一行中的感光像素110的信号。列处理单元14可以具有用于将模拟像素信号转换为数字格式的模数(A/D)转换功能。
例如,水平驱动单元15包括移位寄存器和地址译码器。水平驱动单元15顺序逐列扫描像素阵列11。通过水平驱动单元15执行的选择扫描操作,每一感光像素列被列处理单元14顺序地处理,并且被顺序输出。
例如,控制单元13根据操作模式配置时序信号,利用多种时序信号来控制垂直驱动单元12、列处理单元14和水平驱动单元15协同工作。
图4是本申请实施方式中一种感光像素110的示意图。感光像素110包括像素电路111、滤光片112、 及微透镜113。沿感光像素110的收光方向,微透镜113、滤光片112、及像素电路111依次设置。微透镜113用于汇聚光线,滤光片112用于供某一波段的光线通过并过滤掉其余波段的光线。像素电路111用于将接收到的光线转换为电信号,并将生成的电信号提供给图3所示的列处理单元14。
图5是本申请实施方式中一种感光像素110的像素电路111的示意图。图5中像素电路111可应用在图3所示的像素阵列11内的每个感光像素110(图4所示)中。下面结合图3至图5对像素电路111的工作原理进行说明。
如图5所示,像素电路111包括光电转换元件1111(例如,光电二极管)、曝光控制电路(例如,转移晶体管1112)、复位电路(例如,复位晶体管1113)、放大电路(例如,放大晶体管1114)和选择电路(例如,选择晶体管1115)。在本申请的实施例中,转移晶体管1112、复位晶体管1113、放大晶体管1114和选择晶体管1115例如是MOS管,但不限于此。
例如,光电转换元件1111包括光电二极管,光电二极管的阳极例如连接到地。光电二极管将所接收的光转换为电荷。光电二极管的阴极经由曝光控制电路(例如,转移晶体管1112)连接到浮动扩散单元FD。浮动扩散单元FD与放大晶体管1114的栅极、复位晶体管1113的源极连接。
例如,曝光控制电路为转移晶体管1112,曝光控制电路的控制端TG为转移晶体管1112的栅极。当有效电平(例如,VPIX电平)的脉冲通过曝光控制线传输到转移晶体管1112的栅极时,转移晶体管1112导通。转移晶体管1112将光电二极管光电转换的电荷传输到浮动扩散单元FD。
例如,复位晶体管1113的漏极连接到像素电源VPIX。复位晶体管1113的源极连接到浮动扩散单元FD。在电荷被从光电二极管转移到浮动扩散单元FD之前,有效复位电平的脉冲经由复位线传输到复位晶体管1113的栅极,复位晶体管1113导通。复位晶体管1113将浮动扩散单元FD复位到像素电源VPIX。
例如,放大晶体管1114的栅极连接到浮动扩散单元FD。放大晶体管1114的漏极连接到像素电源VPIX。在浮动扩散单元FD被复位晶体管1113复位之后,放大晶体管1114经由选择晶体管1115通过输出端OUT输出复位电平。在光电二极管的电荷被转移晶体管1112转移之后,放大晶体管1114经由选择晶体管1115通过输出端OUT输出信号电平。
例如,选择晶体管1115的漏极连接到放大晶体管1114的源极。选择晶体管1115的源极通过输出端OUT连接到图3中的列处理单元14。当有效电平的脉冲通过选择线被传输到选择晶体管1115的栅极时,选择晶体管1115导通。放大晶体管1114输出的信号通过选择晶体管1115传输到列处理单元14。
需要说明的是,本申请实施例中像素电路111的像素结构并不限于图5所示的结构。例如,像素电路111也可以具有三晶体管像素结构,其中放大晶体管1114和选择晶体管1115的功能由一个晶体管完成。例如,曝光控制电路也不局限于单个转移晶体管1112的方式,其它具有控制端控制导通功能的电子器件或结构均可以作为本申请实施例中的曝光控制电路,本申请实施方式中的单个转移晶体管1112的实施方式简单、成本低、易于控制。
具体地,例如,图6为本申请一个实施例的最小重复单元中感光像素110(图4所示)的排布示意图。其中,最小重复单元为4行4列16个感光像素110,子单元为2行2列4个感光像素110。排布方式为:
(最小重复单元的像素排布矩阵见原文图像PCTCN2020120025-appb-000001)
其中,W表示全色感光像素;A表示多个彩色感光像素中的第一颜色感光像素;B表示多个彩色感光像素中的第二颜色感光像素;C表示多个彩色感光像素中的第三颜色感光像素。
例如,如图6所示,全色像素W设置在第一对角线方向D1(即图6中左上角和右下角连接的方向),彩色像素设置在第二对角线方向D2(例如图6中左下角和右上角连接的方向),第一对角线方向D1与第二对角线方向D2不同。
需要说明的是,第一对角线方向D1和第二对角线方向D2并不局限于对角线,还包括平行于对角线的方向,下文图7至图11中对第一对角线方向D1及第二对角线方向D2的解释与此处相同。这里的“方向”并非单一指向,可以理解为指示排布的“直线”的概念,可以有直线两端的双向指向。
需要理解的是,此处以及下文中的术语“上”、“下”、“左”、“右”等指示的方位或位置关系为基于附图所示的方位或位置关系,仅是为了便于描述本申请和简化描述,而不是指示或暗示所指的装置或元件必须具有特定的方位、以特定的方位构造和操作,因此不能理解为对本申请的限制。
例如,图7为本申请一个实施例的最小重复单元中感光像素110(图4所示)的排布示意图。其中,最小重复单元为4行4列16个感光像素110,子单元为2行2列4个感光像素110。排布方式为:
(最小重复单元的像素排布矩阵见原文图像PCTCN2020120025-appb-000002及PCTCN2020120025-appb-000003)
其中,W表示全色感光像素;A表示多个彩色感光像素中的第一颜色感光像素;B表示多个彩色感光像素中的第二颜色感光像素;C表示多个彩色感光像素中的第三颜色感光像素。
例如,如图7所示,全色像素W设置在第二对角线方向D2(即图7中右上角和左下角连接的方向),彩色像素设置在第一对角线方向D1(例如图7中左上角和右下角连接的方向)。例如,第一对角线和第二对角线垂直。
例如,图8为本申请一个实施例的最小重复单元中感光像素110(图4所示)的排布示意图。其中,最小重复单元为6行6列36个感光像素110,子单元为2行2列4个感光像素110。排布方式为:
(最小重复单元的像素排布矩阵见原文图像PCTCN2020120025-appb-000004)
其中,W表示全色感光像素;A表示多个彩色感光像素中的第一颜色感光像素;B表示多个彩色感光像素中的第二颜色感光像素;C表示多个彩色感光像素中的第三颜色感光像素。
例如,如图8所示,全色像素W设置在第一对角线方向D1(即图8中左上角和右下角连接的方向),彩色像素设置在第二对角线方向D2(例如图8中左下角和右上角连接的方向),第一对角线方向D1与第二对角线方向D2不同。
例如,图9为本申请一个实施例的最小重复单元中感光像素110(图4所示)的排布示意图。其中,最小重复单元为6行6列36个感光像素110,子单元为2行2列4个感光像素110。排布方式为:
(最小重复单元的像素排布矩阵见原文图像PCTCN2020120025-appb-000005)
其中,W表示全色感光像素;A表示多个彩色感光像素中的第一颜色感光像素;B表示多个彩色感光像素中的第二颜色感光像素;C表示多个彩色感光像素中的第三颜色感光像素。
例如,如图9所示,全色像素W设置在第二对角线方向D2(即图9中右上角和左下角连接的方向),彩色像素设置在第一对角线方向D1(例如图9中左上角和右下角连接的方向)。例如,第一对角线和第二对角线垂直。
例如,图10为本申请一个实施例的最小重复单元中感光像素110(图4所示)的排布示意图。其中,最小重复单元为8行8列64个感光像素110,子单元为2行2列4个感光像素110。排布方式为:
(最小重复单元的像素排布矩阵见原文图像PCTCN2020120025-appb-000006)
其中,W表示全色感光像素;A表示多个彩色感光像素中的第一颜色感光像素;B表示多个彩色感光像素中的第二颜色感光像素;C表示多个彩色感光像素中的第三颜色感光像素。
例如,如图10所示,全色像素W设置在第一对角线方向D1(即图10中左上角和右下角连接的方向),彩色像素设置在第二对角线方向D2(例如图10中左下角和右上角连接的方向),第一对角线方向D1与第二对角线方向D2不同。
例如,图11为本申请一个实施例的最小重复单元中感光像素110(图4所示)的排布示意图。其中,最小重复单元为8行8列64个感光像素110,子单元为2行2列4个感光像素110。排布方式为:
(最小重复单元的像素排布矩阵见原文图像PCTCN2020120025-appb-000007及PCTCN2020120025-appb-000008)
其中,W表示全色感光像素;A表示多个彩色感光像素中的第一颜色感光像素;B表示多个彩色感光像素中的第二颜色感光像素;C表示多个彩色感光像素中的第三颜色感光像素。
例如,如图11所示,全色像素W设置在第二对角线方向D2(即图11中右上角和左下角连接的方向),彩色像素设置在第一对角线方向D1(例如图11中左上角和右下角连接的方向)。例如,第一对角线和第二对角线垂直。
例如,如图6至图11所示的最小重复单元中,第一颜色感光像素A可以为绿色感光像素G;第二颜色感光像素B可以为红色感光像素R;第三颜色感光像素C可以为蓝色感光像素Bu。
例如,如图6至图11所示的最小重复单元中,第一颜色感光像素A可以为黄色感光像素Y;第二颜色感光像素B可以为红色感光像素R;第三颜色感光像素C可以为蓝色感光像素Bu。
例如,如图6至图11所示的最小重复单元中,第一颜色感光像素A可以为青色感光像素Cy,第二颜色感光像素B可以为品红色感光像素M;第三颜色感光像素C可以为黄色感光像素Y。
需要说明的是,在一些实施例中,全色感光像素W的响应波段可为可见光波段(例如,400nm-760nm)。例如,全色感光像素W上设置有红外滤光片,以实现红外光的滤除。在另一些实施例中,全色感光像素W的响应波段为可见光波段和近红外波段(例如,400nm-1000nm),与图像传感器10(图2所示)中的光电转换元件1111(图5所示)的响应波段相匹配。例如,全色感光像素W可以不设置滤光片或者设置可供所有波段的光线通过的滤光片,全色感光像素W的响应波段由光电转换元件1111的响应波段确定,即两者相匹配。本申请的实施例包括但不局限于上述波段范围。
为了方便说明,以下实施例均以第一单颜色感光像素A为绿色感光像素G,第二单颜色感光像素B为红色感光像素R,第三单颜色感光像素C为蓝色感光像素Bu进行说明。
请参阅图17,在某些实施方式中,控制单元13(图3所示)控制像素阵列11(图3所示)曝光,以获得第一图像。其中第一图像包括由全色感光像素W生成的全色图像像素W、由第一颜色感光像素A生成的第一颜色图像像素A、由第二颜色感光像素B生成的第二颜色图像像素B、及由第三颜色感光像素C生成的第三颜色图像像素C。在像素阵列11曝光后,处理器20获取其曝光后获得的第一图像,并对第一图像中的全色图像像素W、第一颜色图像像素A、第二颜色图像像素B及第三颜色图像像素C进行后续处理,以获得目标图像。
具体地,请参阅图12,步骤02:对第一图像中的全色图像像素进行处理以转换成第一颜色图像像素,以获得第二图像包括:
0201:判断待转换的全色图像像素是否在平坦区;
0202:在待转换的全色图像像素在平坦区时,预设以待转换的全色图像像素为中心的第一计算窗口;
0203:获取第一计算窗口内所有像素的像素值;及
0204:根据第一计算窗口内所有像素的像素值、待转换的全色图像像素的像素值、预设的第一权值矩阵、预设的第二权值矩阵以获取待转换的全色图像像素转换成第一颜色图像像素后的像素值。
请结合图2及图12,步骤0201、步骤0202、步骤0203及步骤0204均可以由处理器20实现。也即是说,处理器20还用于:判断待转换的全色图像像素W0是否在平坦区;在待转换的全色图像像素W0在平坦区时,预设以待转换的全色图像像素W0为中心的第一计算窗口C1;获取第一计算窗口C1内所有像素的像素值;及根据第一计算窗口C1内所有像素的像素值、待转换的全色图像像素W0的像素值、预设的第一权值矩阵N1、预设的第二权值矩阵N2以获取待转换的全色图像像素W0转换成第一颜色图像像素A0后的像素值。
在一些实施例中,可以通过预设以待转换的全色图像像素W0为中心的检测窗口,计算检测窗口内的多个图像像素的像素值的标准差,若其标准差大于预设值则认定该待转换全色图像像素W0不在平坦区,即认定该待转换全色图像像素W0在非平坦区;若其标准差小于预设值则认定该待转换全色图像像素W0在平坦区。在一些实施例中,也可以通过计算检测窗口内的多个图像像素的像素值的方差来判断待转换的全色图像像素W0是否在平坦区,若其方差大于预设值则认定该待转换全色图像像素W0不在平坦区,即认定该待转换全色图像像素W0在非平坦区;若其方差小于预设值则认定该待转换全色图像像素W0在平坦区。当然,也可以通过其他方法来判断待转换的全色图像像素W0是否在平坦区,在此不再一一例举。
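作为补充说明,下面给出一段示意性的Python代码,用于说明"以标准差判断待转换像素是否处于平坦区"的一种可能实现;其中检测窗口大小win与阈值threshold均为示意性假设参数,并非本申请限定的取值。

import numpy as np

def is_flat(image, row, col, win=5, threshold=10.0):
    # 以待判断像素为中心截取检测窗口(图像边界处直接截断)
    half = win // 2
    patch = image[max(0, row - half):row + half + 1, max(0, col - half):col + half + 1]
    # 标准差小于预设值则认定处于平坦区,否则认定处于非平坦区
    return float(np.std(patch)) < threshold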
请参阅图2、图13及图14,当待转换的全色图像像素W0在平坦区时,预设以待转换的全色图像像素W0为中心的第一计算窗口C1,并获取第一计算窗口C1内所有像素的像素值。例如,假设第一计算窗口C1为7×7大小的窗口,此时,将待转换的全色图像像素W0置于第一计算窗口C1的中心位置后,即待转换的全色图像像素W0置于第一计算窗口C1的第3行第3列,获取第一计算窗口C1内所有图像像素的像素值。需要说明的是,第一计算窗口C1是虚拟的计算窗口,并不是实际存在的结构;并且第一计算窗口C1的大小可以根据实际需要任意更改,下文中提及的所有计算窗口均是如此,在此不再赘述。
当处理器20预设第一计算窗口C1并获取到第一计算窗口C1内所有像素值后,处理器20根据第一计算窗口C1内所有像素值、预设第一权值矩阵N1及预设第二权值矩阵N2可获得第一转换值M1及第二转换值M2。具体地,第一转换值M1可通过计算公式M1=sum(sum(I×N1)×sum(N2))获得,其中I表示第一计算窗口C1内各个图像像素的像素值。也即是说,第一计算窗口C1内每一个图像像素的像素值乘以预设的第一权值矩阵N1内对应位置的数值后获得多个新像素值,将多个新像素值相加后乘预设的第二权值矩阵N2内所有数值的总和,以获得第一转换值M1。第二转换值M2可通过计算公式M2=sum(sum(I×N2)×sum(N1))获得,其中I表示第一计算窗口C1内各个图像像素的像素值。也即是说,第一计算窗口C1内每一个图像像素的像素值乘以预设的第二权值矩阵N2内对应位置的数值后获得多个新像素值,将多个新像素值相加后乘以预设的第一权值矩阵N1内所有数值的总和,以获得第二转换值M2。
处理器20根据待转换的全色图像像素W0、及第一转换值M1及第二转换值M2,以获取待转换的全色图像像素W0转换成第一颜色图像像素A0后的像素值。具体的,转换后第一颜色图像像素A0的像素值可以通过计算公式A0’=W0’×(M2/M1)获得,其中A0’表示转换后第一颜色图像像素A0的像素值、W0’表示待转换的全色图像像素W0的像素值。也即是说,通过将第二转换值M2除以第一转换值M1获得第一转换系数后,将待转换的全色图像像素W0的像素值乘第一转换系数,以获得转换后第一颜色图像像素A0的像素值。
需要说明的是,在一些实施例中,处理器20是根据与待转换的全色图像像素W0最接近的第一颜色图像像素A1的位置信息,获取预设的第一权值矩阵N1及预设的第二权值矩阵N2,第一权值矩阵N1及第二权值矩阵N2均是与第一计算窗口C1对应的矩阵。并且当与待转换的全色图像像素W0最接近的第一颜色图像像素A1在不同的位置时,预设的第一权值矩阵N1及预设的第二权值矩阵N2也不同。
在一些实施例中,处理器20根据与待转换的全色图像像素W0在同一行,且最接近待转换的全色图像像素W0的第一颜色图像像素A1的列坐标,以获取第一权值矩阵N1及第二权值矩阵N2。例如,若第一颜色图像像素A1的列坐标小于待转换的全色图像像素W0的列坐标,如图13所示,待转换的全色图像像素W0置于第一计算窗口的第3行第3列,最接近待转换的全色图像像素W0的第一颜色图像像素A1在第一计算窗口C1的第3行第2列,即最接近第一颜色图像像素A1在待转换的全色图像像素W0的左侧。此时,预设的第一权值矩阵
(第一权值矩阵N1的数值见原文图像PCTCN2020120025-appb-000009)
预设的第二权值矩阵
(第二权值矩阵N2的数值见原文图像PCTCN2020120025-appb-000010)
若第一颜色图像像素A1的列坐标大于待转换的全色图像像素W0的列坐标,如图14所示,待转换的全色图像像素W0置于第一计算窗口的第3行第3列,最接近待转换的全色图像像素W0的第一颜色图像像素A1在第一计算窗口C1的第3行第4列,即最接近第一颜色图像像素A1在待转换的全色图像像素W0的右侧。此时,预设的第一权值矩阵
(第一权值矩阵N1的数值见原文图像PCTCN2020120025-appb-000011)
预设的第二权值矩阵
(第二权值矩阵N2的数值见原文图像PCTCN2020120025-appb-000012)
当然,在一些实施例中,处理器20也可以根据与待转换的全色图像像素W0在同一列,且最接近待转换的全色图像像素W0的第一颜色图像像素A1的行坐标,以获取第一权值矩阵N1及第二权值矩阵N2,在此不作限制。
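按照上文平坦区的转换方式(步骤0202至步骤0204),可以写出如下示意性Python代码;其中第一权值矩阵N1、第二权值矩阵N2的具体数值见原文附图,此处仅以与第一计算窗口同尺寸的矩阵作为输入参数示意,并非本申请限定的实现。

import numpy as np

def convert_w_to_a_flat(window, w0, n1, n2):
    # window:第一计算窗口C1内所有像素值(例如7×7);w0:待转换全色图像像素W0的像素值
    m1 = np.sum(window * n1) * np.sum(n2)   # 第一转换值M1=sum(sum(I×N1))×sum(N2)
    m2 = np.sum(window * n2) * np.sum(n1)   # 第二转换值M2=sum(sum(I×N2))×sum(N1)
    return w0 * (m2 / m1)                   # A0'=W0'×(M2/M1)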
请参阅图15,步骤02对第一图像中的全色图像像素进行处理以转换成第一颜色图像像素,以获得第二图像还包括:
0201:判断待转换的全色图像像素是否在平坦区;
0205:在全色图像像素在非平坦区时,获取待转换的全色图像像素的特征方向;
0206:在特征方向为第一方向,且在特征方向上最邻近的第一颜色图像像素在待转换的全色图像像 素第一侧时,根据待转换的全色图像像素的像素值、在第一侧与待转换的全色图像像素相邻的一个全色图像像素的像素值获取第一偏差,并根据待转换的全色图像像素的像素值、在第二侧与待转换的全色图像像素相邻的两个全色图像像素的像素值获取第二偏差,第一侧与第二侧相背;
0207:根据第一偏差及预设的权重函数获取第一权重,并根据第二偏差及权重函数获取第二权重;及
0208:根据第一权重、第二权重、在第一侧与待转换的全色图像像素最相邻的一个第一颜色图像像素的像素值、在第二侧与待转换的全色图像像素相邻的一个第一颜色图像像素的像素值获取待转换的全色图像像素转换成第一颜色图像像素后的像素值。
请结合图2及图15,步骤0205、步骤0206、步骤0207及步骤0208均可以由处理器20实现。也即是说,处理器20还用于:在全色图像像素W0在非平坦区时,获取待转换的全色图像像素W0的特征方向;在特征方向为第一方向H,且在特征方向上最邻近的第一颜色图像像素A1在待转换的全色图像像素W0第一侧时,根据待转换的全色图像像素W0的像素值、在第一侧与待转换的全色图像像素W0相邻的一个全色图像像素W的像素值获取第一偏差L1,并根据待转换的全色图像像素W0的像素值、在第二侧与待转换的全色图像像素W0相邻的两个全色图像像素W的像素值获取第二偏差L2,第一侧与第二侧相背;根据第一偏差L1及预设的权重函数F(x)获取第一权重F(L1),并根据第二偏差L2及权重函数F(x)获取第二权重F(L2);及根据第一权重F(L1)、第二权重F(L2)、在第一侧与待转换的全色图像像素W0最相邻的一个第一颜色图像像素A的像素值、在第二侧与待转换的全色图像像素W0相邻的一个第一颜色图像像素A的像素值获取待转换的全色图像像素W0转换成第一颜色图像像素A0后的像素值。
其中,判断待转换的全色图像像素W0是否在平坦区的具体实施方法与图12所示的判断待转换的全色图像像素W0是否在平坦区的具体实施方法相同,在此不再赘述。需要注意的是,若待转换全色图像像素W0不在平坦区,即表示待转换全色图像像素W0在非平坦区。
当待转换全色图像像素W0在非平坦区时,获取待转换的全色图像像素W0的特征方向。具体地,请参阅图16,步骤0205还包括:
02051:获取待转换的全色图像像素的多个方向上的梯度值,选择最小的梯度值对应的方向为全色图像像素的特征方向。
请结合图2及图16,步骤02051可以由处理器20实现。也即是说,处理器20还用于获取待转换的全色图像像素W0的多个方向上的梯度值,选择最小的梯度值对应的方向为全色图像像素W0的特征方向。
具体地,请参阅图17,处理器20分别获取待转换的全色图像像素沿第一方向H、第二方向V及第三方向E的梯度值,选择最小的梯度值对应的方向为全色图像像素的特征方向。其中,第一方向H包括行方向H1及列方向H2;第二方向V与第一方向H之间存在夹角,且第二方向V是沿第一图像的左上角至第一图像的右下角;第三方向E与第二方向V垂直,且第三方向E是沿第一图像的右上角至第一图像的左下角。例如,假设处理器20通过计算获得沿行方向H1对应的第一梯度值g1、沿列方向H2对应的第二梯度值g2、沿第二方向V对应的第三梯度值g3及沿第三方向E对应的第四梯度值g4,并且g1>g2>g3>g4,即沿第三方向E对应的第四梯度值g4最小,则选择第三方向E为全色图像像素的特征方向。
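下面给出一段示意性Python代码,说明"计算多个方向的梯度、取最小梯度对应方向作为特征方向"的思路;梯度算子此处简化为相邻像素差的绝对值,仅为假设,并非本申请限定的梯度计算方式。

def feature_direction(image, r, c):
    # 分别估计行方向H1、列方向H2、第二方向V(左上-右下)、第三方向E(右上-左下)的梯度
    g = {
        'H1': abs(float(image[r][c - 1]) - float(image[r][c + 1])),
        'H2': abs(float(image[r - 1][c]) - float(image[r + 1][c])),
        'V':  abs(float(image[r - 1][c - 1]) - float(image[r + 1][c + 1])),
        'E':  abs(float(image[r - 1][c + 1]) - float(image[r + 1][c - 1])),
    }
    # 选择最小梯度值对应的方向作为特征方向
    return min(g, key=g.get)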
请参阅图2、图17、图18及图19,若特征方向为第一方向H,且在第一方向H上最邻近的第一颜色图像像素A2在待转换的全色图像像素W0第一侧时,获取在第一侧与待转换的全色图像像素W0相邻的第一全色图像像素W1的像素值、及在第二侧与待转换的全色图像像素W0相邻的第二全色图像像素W2及第三全色图像像素W3的像素值。根据待转换的全色图像像素W0的像素值、第一全色图像像素W1的像素值、第二全色图像像素W2的像素值及第三全色图像像素W3的像素值,以获取第一偏差L1及第二偏差L2。具体地,第一偏差L1可通过计算公式L1=abs(W0’-(W0’+W1’)/2)获得,其中W0’表示待转换全色图像像素W0的像素值、W1’表示第一全色图像像素W1的像素值。也即是说,首先计算待转换全色图像像素W0的像素值与第一全色图像像素W1的像素值的均值,再将待转换全色图像像素W0的像素值减去该均值后做绝对值运算,以获得第一偏差L1。第二偏差L2可通过计算公式L2=abs(W0’-(W2’+W3’)/2)获得,其中W0’表示待转换全色图像像素W0的像素值、W2’表示第二全色图像像素W2的像素值、W3’表示第三全色图像像素W3的像素值。也即是说,首先计算第二全色图像像素W2的像素值与第三全色图像像素W3的像素值的均值,再将待转换全色图像像素W0的像素值减去该均值后做绝对值运算,以获得第二偏差L2。
处理器20获得第一偏差L1及第二偏差L2后,根据第一偏差L1及预设的权重函数F(x)获取第一权重F(L1),并根据第二偏差L2及预设的权重函数F(x)获取第二权重F(L2)。需要说明的是,预设的 权重函数F(x)可以是指数函数、对数函数或者幂函数,只需要满足输入值越小,输出的权重越大即可,下文中提及的权重函数F(x)也是如此,在此不再赘述。例如,假设第一偏差L1大于第二偏差L2,则第一权重F(L1)小于第二权重F(L2)。
处理器20在获得第一权重F(L1)及第二权重F(L2)后,获取在第一侧与待转换的全色图像像素W0最相邻的一个第一颜色图像像素A2的像素值、及在第二侧与待转换的全色图像像素W0相邻的一个第一颜色图像像素A3的像素值。根据第一权重F(L1)、第二权重F(L2)、在第一侧与待转换的全色图像像素W0最相邻的一个第一颜色图像像素A2的像素值、及在第二侧与待转换的全色图像像素W0相邻的一个第一颜色图像像素A3的像素值,以获取待转换的全色图像像素W0转换成第一颜色图像像素A0后的像素值。具体地,转换后第一颜色图像像素A0后的像素值可以通过计算公式A0’=(k×A2’×F(L1)+A3’×F(L2))/(k×F(L1)+F(L2)),其中A0’表示转换后第一颜色图像像素A0的像素值、k为预设系数。预设系数k可以根据实际情况去调整,在本申请实施例中预设系数k为4。
需要说明的是,由于第一方向H包括行方向H1及列方向H2,当特征方向为第一方向H中的行方向H1时,请参阅图18,待转换全色图像像素W0的第一侧表示待转换全色图像像素W0的左侧、待转换全色图像像素W0的第二侧表示待转换全色图像像素W0的右侧。当特征方向为第一方向H中的列方向H2时,请参阅图19,待转换全色图像像素W0的第一侧表示待转换全色图像像素W0的下侧、待转换全色图像像素W0的第二侧表示待转换全色图像像素W0的上侧。
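对应上述特征方向为第一方向、且最邻近第一颜色图像像素位于第一侧的情形,可用如下示意性Python代码表示;其中权重函数F(x)此处以1/(1+x)代替,仅需满足输入越小权重越大即可,k取文中示例值4,均为示意性假设。

def convert_w_to_a_first_side(w0, w1, w2, w3, a2, a3, k=4.0):
    # w1:第一侧相邻的全色像素;w2、w3:第二侧相邻的两个全色像素
    # a2:第一侧最相邻的第一颜色像素;a3:第二侧相邻的第一颜色像素
    F = lambda x: 1.0 / (1.0 + x)           # 示意性权重函数,输入越小权重越大
    l1 = abs(w0 - (w0 + w1) / 2.0)          # 第一偏差L1
    l2 = abs(w0 - (w2 + w3) / 2.0)          # 第二偏差L2
    return (k * a2 * F(l1) + a3 * F(l2)) / (k * F(l1) + F(l2))   # 转换后的像素值A0'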
请参阅图20,步骤02对第一图像中的全色图像像素进行处理以转换成第一颜色图像像素,以获得第二图像还包括:
0201:判断待转换的全色图像像素是否在平坦区;
0205:在全色图像像素在非平坦区时,获取待转换的全色图像像素的特征方向;
0209:在特征方向为第一方向,且在特征方向上最邻近的第一颜色图像像素在待转换的全色图像像素第二侧时,根据待转换的全色图像像素的像素值、在第二侧与待转换的全色图像像素相邻的一个全色图像像素的像素值获取第三偏差,并根据待转换的全色图像像素的像素值、在第一侧与待转换的全色图像像素相邻的两个全色图像像素的像素值获取第四偏差,第一侧与第二侧相背;
0210:根据第三偏差及预设的权重函数获取第三权重,并根据第四偏差及权重函数获取第四权重;及
0211:根据第三权重、第四权重、在第一侧与待转换的全色图像像素相邻的一个第一颜色图像像素的像素值、在第二侧与待转换的全色图像像素最相邻的一个第一颜色图像像素的像素值获取待转换的全色图像像素转换成第一颜色图像像素后的像素值。
请结合图2及图20,步骤0209、步骤0210及步骤0211均可以由处理器20实现。也即是说,处理器20还用于:在特征方向为第一方向H,且在特征方向上最邻近的第一颜色图像像素A4在待转换的全色图像像素W0第二侧时,根据待转换的全色图像像素W0的像素值、在第二侧与待转换的全色图像像素W0相邻的一个全色图像像素W的像素值获取第三偏差L3,并根据待转换的全色图像像素W0的像素值、在第一侧与待转换的全色图像像素W0相邻的两个全色图像像素W的像素值获取第四偏差L4,第一侧与第二侧相背;根据第三偏差L3及预设的权重函数F(x)获取第三权重F(L3),并根据第四偏差L4及权重函数F(x)获取第四权重F(L4);及根据第三权重F(L3)、第四权重F(L4)、在第一侧与待转换的全色图像像素W0相邻的一个第一颜色图像像素A的像素值、在第二侧与待转换的全色图像像素W0最相邻的一个第一颜色图像像素A的像素值获取待转换的全色图像像素W0转换成第一颜色图像像素A0后的像素值。
其中,判断待转换的全色图像像素W0是否在平坦区、及获取待转换的全色图像像素的特征方向的具体实施方法与上述实施例中所述的判断待转换的全色图像像素W0是否在平坦区、及获取待转换的全色图像像素的特征方向的具体实施方法相同,在此不作赘述。
请参阅图2、图21及图22,若特征方向为第一方向H,且在第一方向H上最邻近的第一颜色图像像素A4在待转换的全色图像像素W0第二侧时,获取在第二侧与待转换的全色图像像素W0相邻的第二全色图像像素W2的像素值、及在第一侧与待转换的全色图像像素W0相邻的第一全色图像像素W1及第四全色图像像素W4的像素值。根据待转换的全色图像像素W0的像素值、第一全色图像像素W1的像素值、第二全色图像像素W2的像素值及第四全色图像像素W4的像素值,以获取第三偏差L3及第四偏差L4。具体地,第三偏差L3可通过计算公式L3=abs(W0’-(W1’+W4’)/2)获得,其中W0’表示待转换全色图像像素W0的像素值、W1’表示第一全色图像像素W1的像素值、W4’表示第四全色图像像素W4的像素值。也即是说,首先计算第一全色图像像素W1的像素值与第四全色图像像素W4的像素值的均值,再将待转换全色图像像素W0的像素值减去该均值后做绝对值运算,以获得第三偏差L3。第四偏差L4可通过计算公式L4=abs(W0’-(W0’+W2’)/2)获得,其中W0’表示待转换全色图像像素W0的像素值、W2’表示第二全色图像像素W2的像素值。也即是说,首先计算待转换全色图像像素W0的像素值与第二全色图像像素W2的像素值的均值,再将待转换全色图像像素W0的像素值减去该均值后做绝对值运算,以获得第四偏差L4。
处理器20获得第三偏差L3及第四偏差L4后,根据第三偏差L3及预设的权重函数F(x)获取第三权重F(L3),并根据第四偏差L4及预设的权重函数F(x)获取第四权重F(L4)。处理器20在获得第三权重F(L3)及第四权重F(L4)后,获取在第二侧与待转换的全色图像像素W0最相邻的一个第一颜色图像像素A4的像素值、及在第一侧与待转换的全色图像像素W0相邻的一个第一颜色图像像素A5的像素值。根据第三权重F(L3)、第四权重F(L4)、在第二侧与待转换的全色图像像素W0最相邻的一个第一颜色图像像素A4的像素值、及在第一侧与待转换的全色图像像素W0相邻的一个第一颜色图像像素A5的像素值,以获取待转换的全色图像像素W0转换成第一颜色图像像素A0后的像素值。具体地,转换后第一颜色图像像素A0的像素值可以通过计算公式A0’=(A5’×F(L3)+k×A4’×F(L4))/(F(L3)+k×F(L4)),其中A0’表示转换后第一颜色图像像素A0的像素值、k为预设系数。预设系数k可以根据实际情况去调整,在本申请实施例中预设系数k为4。
需要说明的是,由于第一方向H包括行方向H1及列方向H2,当特征方向为第一方向H中的行方向H1时,请参阅图21,待转换全色图像像素W0的第一侧表示待转换全色图像像素W0的左侧、待转换全色图像像素W0的第二侧表示待转换全色图像像素W0的右侧。当特征方向为第一方向H中的列方向H2时,请参阅图22,待转换全色图像像素W0的第一侧表示待转换全色图像像素W0的下侧、待转换全色图像像素W0的第二侧表示待转换全色图像像素W0的上侧。
请参阅图23,步骤02对第一图像中的全色图像像素进行处理以转换成第一颜色图像像素还包括:
0201:判断待转换的全色图像像素是否在平坦区;
0205:在全色图像像素在非平坦区时,获取待转换的全色图像像素的特征方向;
0212:在特征方向为第二方向,预设以待转换的全色图像像素为中心的第二计算窗口;
0213:获取第二计算窗口内所有像素的像素值;及
0214:根据第二计算窗口内所有像素的像素值、待转换的全色图像像素的像素值、预设的第三权值矩阵、预设的第四权值矩阵以获取待转换的全色图像像素转换成第一颜色图像像素后的像素值。
请结合图2及图23,步骤0212、步骤0213及步骤0214均可以由处理器20实现。也即是说,处理器20还用于:在特征方向为第二方向V时,预设以待转换的全色图像像素W0为中心的第二计算窗口C2,第二方向V与第一图像的第一方向H之间存在夹角;获取第二计算窗口C2内所有像素的像素值;及根据第二计算窗口C2内所有像素的像素值、待转换的全色图像像素W0的像素值、预设的第三权值矩阵N3、预设的第四权值矩阵N4以获取待转换的全色图像像素W0转换成第一颜色图像像素A0后的像素值。
其中,判断待转换的全色图像像素W0是否在平坦区、及获取待转换的全色图像像素的特征方向的具体实施方法与上述实施例中所述的判断待转换的全色图像像素W0是否在平坦区、及获取待转换的全色图像像素的特征方向的具体实施方法相同,在此不作赘述。
请参阅图2、图24及图25,若特征方向为第二方向V时,预设以待转换的全色图像像素W0为中心的第二计算窗口C2。预设第二计算窗口C2的具体方式与预设第一计算窗口C1的具体方法相同,在此不作赘述。
当处理器20预设第二计算窗口C2并获取到第二计算窗口C2内所有像素值后,处理器20根据第二计算窗口C2内所有像素值、预设第三权值矩阵N3及预设第四权值矩阵N4可获得第三转换值M3及第四转换值M4。具体地,第三转换值M3可通过计算公式M3=sum(sum(I×N3))/sum(sum(N3))获得,其中I表示第二计算窗口C2内各个图像像素的像素值。也即是说,第二计算窗口C2内每一个图像像素的像素值乘以预设的第三权值矩阵N3内对应位置的数值后获得多个新像素值,将多个新像素值相加后除以预设的第三权值矩阵N3内所有数值的总和,以获得第三转换值M3。第四转换值M4可通过计算公式M4=sum(sum(I×N4))/sum(sum(N4))获得,其中I表示第二计算窗口C2内各个图像像素的像素值。也即是说,第二计算窗口C2内每一个图像像素的像素值乘以预设的第四权值矩阵N4内对应位置的数值后获得多个新像素值,将多个新像素值相加后除以预设的第四权值矩阵N4内所有数值的总和,以获得第四转换值M4。
处理器20根据待转换的全色图像像素W0、及第三转换值M3及第四转换值M4,以获取待转换的全色图像像素W0转换成第一颜色图像像素A0后的像素值。具体的,转换后第一颜色图像像素A0的像素值可以通过计算公式A0’=W0’×(M4/M3)获得,其中A0’表示转换后第一颜色图像像素A0的像素值、W0’表示待转换的全色图像像素W0的像素值。
需要说明的是,在一些实施例中,处理器20是根据与待转换的全色图像像素W0最接近的第一颜 色图像像素A1的位置信息,获取预设的第三权值矩阵N3及预设的第四权值矩阵N4,第三权值矩阵N3及第四权值矩阵N4均是与第二计算窗口C2对应的矩阵。并且当与待转换的全色图像像素W0最接近的第一颜色图像像素A1在不同的位置时,预设的第三权值矩阵N3及预设的第四权值矩阵N4也不同。
在一些实施例中,处理器20根据与待转换的全色图像像素W0在同一行,且最接近待转换的全色图像像素W0的第一颜色图像像素A1的列坐标,以获取第三权值矩阵N3及第四权值矩阵N4。例如,若第一颜色图像像素A1的列坐标小于待转换的全色图像像素W0的列坐标,如图24所示,待转换的全色图像像素W0置于第二计算窗口的第3行第3列,最接近待转换的全色图像像素W0的第一颜色图像像素A1在第二计算窗口C2的第3行第2列,即最接近第一颜色图像像素A1在待转换的全色图像像素W0的左侧。此时,预设的第三权值矩阵
(第三权值矩阵N3的数值见原文图像PCTCN2020120025-appb-000013)
预设的第四权值矩阵
(第四权值矩阵N4的数值见原文图像PCTCN2020120025-appb-000014)
若第一颜色图像像素A1的列坐标大于待转换的全色图像像素W0的列坐标,如图25所示,待转换的全色图像像素W0置于第二计算窗口的第3行第3列,最接近待转换的全色图像像素W0的第一颜色图像像素A1在第二计算窗口C2的第3行第4列,即最接近第一颜色图像像素A1在待转换的全色图像像素W0的右侧。此时,预设的第三权值矩阵
(第三权值矩阵N3的数值见原文图像PCTCN2020120025-appb-000015)
预设的第四权值矩阵
(第四权值矩阵N4的数值见原文图像PCTCN2020120025-appb-000016)
当然,在一些实施例中,处理器20也可以根据与待转换的全色图像像素W0在同一列,且最接近待转换的全色图像像素W0的第一颜色图像像素A1的行坐标,以获取第三权值矩阵N3及第四权值矩阵N4,在此不作限制。
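特征方向为第二方向时的转换同样可以用一段示意性Python代码概括;第三权值矩阵N3、第四权值矩阵N4的数值见原文附图,此处仅以与第二计算窗口同尺寸的矩阵作为参数示意,并非本申请限定的实现。

import numpy as np

def convert_w_to_a_second_direction(window, w0, n3, n4):
    # window:第二计算窗口C2内所有像素值;w0:待转换全色图像像素W0的像素值
    m3 = np.sum(window * n3) / np.sum(n3)   # 第三转换值M3=sum(sum(I×N3))/sum(sum(N3))
    m4 = np.sum(window * n4) / np.sum(n4)   # 第四转换值M4=sum(sum(I×N4))/sum(sum(N4))
    return w0 * (m4 / m3)                   # A0'=W0'×(M4/M3)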
请参阅图26,步骤02对第一图像中的全色图像像素进行处理以转换成第一颜色图像像素还包括:
0201:判断待转换的全色图像像素是否在平坦区;
0205:在全色图像像素在非平坦区时,获取待转换的全色图像像素的特征方向;
0215:在特征方向为第三方向,预设以待转换的全色图像像素为中心的第三计算窗口;
0216:获取第三计算窗口内所有像素的像素值,并根据第一颜色图像像素周围的多个全色图像像素的像素值,以获取在第三计算窗口内所有第一颜色图像像素的转换像素值;
0217:根据第一颜色图像像素的转换像素值、待转换的全色图像像素的像素值及预设的权重函数获取第五权值矩阵;及
0218:根据第一颜色图像像素的转换像素值、第五权值矩阵及距离权重获取待转换的全色图像像素转换成第一颜色图像像素后的像素值。
请结合图2及图26,步骤0215、步骤0216、步骤0217及步骤0218均可以由处理器20实现。也即是说,处理器20还用于:在特征方向为第三方向E,预设以待转换的全色图像像素W0为中心的第三计算窗口C3,第三方向E与所述第一图像的第二方向V垂直;获取第三计算窗口C3内所有像素的像素值,并根据第一颜色图像像素A周围的多个全色图像像素W的像素值,以获取在第三计算窗口C3内所有第一颜色图像像素A的转换像素值;根据第一颜色图像像素A的转换像素值、待转换的全色图像像素W0的像素值及预设的权重函数F(x)获取第五权值矩阵N5;及根据第一颜色图像像素A的转换像素值、第五权值矩阵N5及距离权重获取待转换的全色图像像素W0转换成第一颜色图像像素A0后的像素值。
其中,判断待转换的全色图像像素W0是否在平坦区、及获取待转换的全色图像像素的特征方向的具体实施方法与上述实施例中所述的判断待转换的全色图像像素W0是否在平坦区、及获取待转换的全色图像像素的特征方向的具体实施方法相同,在此不作赘述。
请参阅图2及图27,若特征方向为第三方向E时,预设以待转换的全色图像像素W为中心的第三计算窗口C3。预设第三计算窗口C3的具体方式与预设第一计算窗口C1的具体方法相同,在此不作赘述。
处理器20在获取到第三计算窗口C3内所有像素的像素值后,根据第一颜色图像像素A周围的多个全色图像像素W的像素值,以获取在第三计算窗口C3内所有第一颜色图像像素A的转换像素值。在一些实施例中,处理器20通过计算第一颜色图像像素A周围多个全色图像像素W的均值来作为第一颜色图像像素A的转换值。下面以计算位于第三窗口C3的第2行第1列的第一颜色图像像素A的转换像素值为例进行说明,其他第一颜色图像像素A的计算方式相同。位于第三窗口C3的第2行第1列的第一颜色图像像素A的转换像素值等于与其相邻的四个全色图像像素W的像素值的均值,即等于位于第三窗口C3第2行第0列的全色图像像素W、位于第三窗口C3第2行第2列的全色图像像素W、位于第三窗口C3第1行第1列的全色图像像素W、及位于第三窗口C3第3行第1列的全色图像像素W的像素值的均值。
处理器20获取到第三窗口C3内多个第一颜色图像像素A的转换像素值后,根据多个第一颜色图像像素A的转换像素值、待转换全色图像像素W0的像素值及预设的权重函数F(x),以获取第五权值矩阵N5。具体的,请参阅图28,假设第三窗口C3为7×7的窗口,则第五权值矩阵N5也是7行7列的矩阵。处理器20抽取第三窗口C3内的任意一个图像像素,若抽取的图像像素为第一颜色图像像素A,则将该第一颜色图像像素A的转换值减去待转换全色图像像素W0的像素值,以获取第五偏差L5。再根据第五偏差L5及预设的权重函数F(x)以获取第五权重F(L5),并将第五权重F(L5)填入第五权值矩阵N5与抽取的第一颜色图像像素A3的对应位置。例如,假设处理器20抓取的是位于第三计算窗口C3第2行第1列的第一颜色图像像素A3,则该第一颜色图像像素A3的转换值减去待转换全色图像像素W0的像素值,以获取对应的第五偏差L5(2,1)。再根据第五偏差L5(2,1)及预设的权重函数F(x)以获取对应的第五权重F(L5(2,1)),并将第五权重F(L5(2,1))填入第五权值矩阵N5的第2行第1列位置,即X21=F(L5(2,1))。若抽取的图像像素不是第一颜色图像像素A,则在第五权值矩阵N5与抽取的图像像素对应位置填入0。例如,假设处理器20抓取的是位于第三计算窗口C3第0行第1列的第二颜色图像像素B1,则在第五权值矩阵N5的第0行第1列填入0,即X01=0。处理器20完成一个数据的填入后,再抓取下一个图像像素重复上述步骤,直至第三计算窗口C3内所有图像像素均被抓取,如此便获得了第五权值矩阵N5。
处理器20在获得第五权值矩阵N5后,根据第一颜色图像像素A的转换值、第五权值矩阵N5及预设的距离权重R,以获取第五转换值M5及第六转换值M6。具体地,第五转换值M5可以通过计算公式M5=sum(sum(J×N5)×R)/sum(sum(N5×R))获得,其中J表示第三计算窗口C3内多个第一颜色图像像素A的转换值,距离权重R满足图像像素距离第三窗口C3的中心图像像素越近,其权重值越大。也即是说,第三计算窗口C3内多个第一颜色图像像素A的转换值乘第五权值矩阵N5内对应位置的数值后获得多个新像素值,将多个新像素值相加后乘距离权重R获得第一计算值,对第五权值矩阵N5内数值的求和后乘距离权重R获得第二计算值,再将第一计算值除以第二计算值,以获得第五转换值M5。第六转换值M6可以通过计算公式M6=sum(sum(I×N5)×R)/sum(sum(N5×R))获得,其中I表示第三计算窗口C3内各个图像像素的像素值。也即是说,第三计算窗口C3内多个像素的像素值乘第五权值矩阵N5内对应位置的数值后获得多个新像素值,将多个新像素值相加后乘距离权重R获得第三计算值,对第五权值矩阵N5内数值的求和后乘距离权重R获得第四计算值,再将第三计算值除以第四计算值,以获得第六转换值M6。
处理器20根据待转换的全色图像像素W0的像素值、第五转换值M5及第六转换值M6,以获取待转换的全色图像像素W0转换成第一颜色图像像素A0后的像素值。具体的,转换后第一颜色图像像素A0的像素值可以通过计算公式A0’=W0’×(M5/M6)获得,其中A0’表示转换后第一颜色图像像素A0的像素值、W0’表示待转换的全色图像像素W0的像素值。
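特征方向为第三方向时的处理流程可用如下示意性Python代码概括;其中权重函数F(x)与距离权重R的具体形式均为假设(此处分别取1/(1+|x|)与以窗口中心为峰值的高斯权重),仅用于说明计算顺序,并非本申请限定的实现。

import numpy as np

def convert_w_to_a_third_direction(window, a_mask, a_conv, w0, sigma=2.0):
    # window:第三计算窗口C3内的像素值;a_mask:第一颜色像素位置为1、其余位置为0
    # a_conv:窗口内各第一颜色像素的转换像素值(非第一颜色像素位置的取值不参与计算)
    n = window.shape[0]
    yy, xx = np.mgrid[0:n, 0:n]
    center = n // 2
    r = np.exp(-((yy - center) ** 2 + (xx - center) ** 2) / (2 * sigma ** 2))  # 距离权重R
    F = lambda x: 1.0 / (1.0 + np.abs(x))                                      # 示意性权重函数
    n5 = np.where(a_mask == 1, F(a_conv - w0), 0.0)                            # 第五权值矩阵N5
    m5 = np.sum(a_conv * n5 * r) / np.sum(n5 * r)                              # 第五转换值M5
    m6 = np.sum(window * n5 * r) / np.sum(n5 * r)                              # 第六转换值M6
    return w0 * (m5 / m6)                                                      # A0'=W0'×(M5/M6)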
在一些实施例中,处理器20在获取第一图像后,任意抓取第一图像内的一个图像像素,并判断抓取的图像像素是否为全色图像像素W,若抓取的图像像素是全色图像像素W则处理器20进行图12至图28所述的步骤,以获取待转换的全色图像像素W0转换成第一颜色图像像素A0后的像素值;若抓取的图像像素不是全色图像像素W则直接再抓取另一个图像像素,重复上述步骤直至第一图像中的所有图像像素均被抓取,如此能够对第一图像中的全色图像像素进行处理以转换成第一颜色图像像素。当然,在一些实施例中,处理器20抓取图像是按照一定顺序的,例如先抓取第一图像左上角的第一个图像像素进行判断及处理后,再抓取该图像像素右侧的图像直至第一行图像像素均被抓取后,再抓取下一行的图像像素,重复上述步骤直至第一图像中的所有图像像素均被抓取。
请参阅图2及图29,处理器20将第一图像内所有全色图像像素W进行处理以转换成第一颜色图像像素A,以获得第二图像。第二图像内仅包括第一颜色图像像素A、第二颜色图像像素B及第三颜色图像像素C。处理器20对第二图像内的第二颜色图像像素B及第三颜色图像像素C进行处理以转换成第一颜色图像像素A,以获得第三图像。在第三图像内仅包括多个第一颜色图像像素A。
具体地,请参阅图30,步骤03对第二颜色图像像素及第三颜色图像像素进行处理以转换成第一颜色图像像素,以获得第三图像包括:
031:判断待转换的第二颜色图像像素是否在平坦区;
032:在第二颜色图像像素在平坦区时,根据待转换的第二颜色图像像素多个方向上相邻的第一颜色图像像素的像素值,以获取待转换的第二颜色图像像素转换成第一颜色图像像素后的像素值。
请结合图2及图30,步骤031及步骤032均可以由处理器20实现。也即是说处理器20还用于:判断待转换的第二颜色图像像素B0是否在平坦区;在第二颜色图像像素B0在平坦区时,根据待转换的第二颜色图像像素B0多个方向上相邻的第一颜色图像像素A的像素值,以获取待转换的第二颜色图像像素B0转换成第一颜色图像像素A0后的像素值。
其中,判断待转换的第二颜色图像像素B0是否在平坦区的具体方法,与上述实施例中判断待转换的全色图像像素W0是否在平坦区具体方法相同,在此不作赘述。
请参阅图31,当待转换的第二颜色图像像素B0在平坦区时,处理器20获取待转换的第二颜色图像像素B0相邻四周的第一颜色图像像素A的像素值,并对获得的多个第一颜色图像像素A的像素值求均值,以此作为待转换的第二颜色图像像素B0转换成第一颜色图像像素A0后的像素值。例如,假设第二图像由5行5列的图像像素排列而成,待转换的第二颜色图像像素B0在第二图像的第3行第1列,则处理器20获取位于第二图像的第2行第1列的第一颜色图像像素A的像素值、位于第二图像的第4行第1列的第一颜色图像像素A的像素值、位于第二图像的第3行第0列的第一颜色图像像素A的像素值、及位于第二图像的第3行第2列的第一颜色图像像素A的像素值的均值,并将其均值作为转换后的第一颜色图像像素A0的像素值。
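平坦区内第二颜色(或第三颜色)图像像素转换为第一颜色图像像素的方式,可用如下示意性Python代码表示,即取上、下、左、右四个相邻第一颜色图像像素的均值;此处的索引方式仅为示意。

def convert_color_to_a_flat(image, r, c):
    # image为第二图像,(r, c)处为待转换的第二颜色或第三颜色图像像素
    neighbors = [image[r - 1][c], image[r + 1][c], image[r][c - 1], image[r][c + 1]]
    return sum(neighbors) / 4.0   # 四个相邻第一颜色图像像素的均值作为转换后的像素值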
请参阅图32,步骤03对第二颜色图像像素及第三颜色图像像素进行处理以转换成第一颜色图像像素,以获得第三图像包括:
031:判断待转换的第二颜色图像像素是否在平坦区;
033:在第二颜色图像像素在非平坦区时,获取待转换的第二颜色图像素的特征方向;及
034:根据在特征方向上与待转换的第二颜色图像素邻近的两个第一颜色图像像素的像素值,以获取待转换的第二颜色图像像素转换成第一颜色图像像素后的像素值。
请结合图2及图32,步骤031、步骤033及步骤034均可以由处理器20实现。也即是说处理器20还用于:判断待转换的第二颜色图像像素B0是否在平坦区;在第二颜色图像像素B0在非平坦区时,获取待转换的所述第二颜色图像像素B0的特征方向;及根据在特征方向上与待转换的第二颜色图像像素B0邻近的两个所述第一颜色图像像素A的像素值,以获取待转换的第二颜色图像像素B0转换成第一颜色图像像素A0后的像素值。
其中,判断待转换的第二颜色图像像素B0是否在平坦区的具体方法,与上述实施例中判断待转换的全色图像像素W0是否在平坦区具体方法相同;获取待转换的第二颜色图像素B0的特征方向的具体方法,也与述实施例中获取待转换的全色图像像素W0的特征方向的具体方法相同,在此不作赘述。
请参阅图2及图31,当待转换的第二颜色图像像素B0在非平坦区时,处理器20获取待转换的第二颜色图像像素B0的特征方向后,在其特征方向上获取与待转换的第二颜色图像像素B0相邻的两个第一颜色图像像素A的像素值,并对获得的两个第一颜色图像像素A的像素值求均值,以此作为待转换的第二颜色图像像素B0转换成第一颜色图像像素A0后的像素值。例如,假设第二图像由5行5列的图像像素排列而成,待转换的第二颜色图像像素B0在第二图像的第3行第1列,若特征方向为行方向H1(图17所示),则处理器20获取位于第二图像的第3行第0列的第一颜色图像像素A的像素值、及位于第二图像的第3行第2列的第一颜色图像像素A的像素值的均值,并将其均值作为转换后的第一颜色图像像素A0的像素值。若特征方向为列方向H2(图17所示),则处理器20获取位于第二图像的第2行第1列的第一颜色图像像素A的像素值、及位于第二图像的第4行第1列的第一颜色图像像素A的像素值的均值,并将其均值作为转换后的第一颜色图像像素A0的像素值。
请参阅图33,步骤03对第二颜色图像像素及第三颜色图像像素进行处理以转换成第一颜色图像像素,以获得第三图像包括:
035:判断待转换的第三颜色图像像素是否在平坦区;
036:在第三颜色图像像素在平坦区时,根据待转换的第三颜色图像像素多个方向上相邻的第一颜色图像像素的像素值,以获取待转换的第三颜色图像像素转换成第一颜色图像像素后的像素值。
请结合图2及图33,步骤035及步骤036均可以由处理器20实现。也即是说处理器20还用于:判断待转换的第三颜色图像像素C0是否在平坦区;在第三颜色图像像素C0在平坦区时,根据待转换的第三颜色图像像素C0多个方向上相邻的第一颜色图像像素A的像素值,以获取待转换的第三颜色图像像素C0转换成第一颜色图像像素A后的像素值。
请参阅图34,步骤03对第二颜色图像像素及第三颜色图像像素进行处理以转换成第一颜色图像像素,以获得第三图像包括:
035:判断待转换的第三颜色图像像素是否在平坦区;
037:在第三颜色图像像素在非平坦区时,获取待转换的第三颜色图像素的特征方向;及
038:根据在特征方向上与待转换的第三颜色图像素邻近的两个第一颜色图像像素的像素值,以获取待转换的第三颜色图像像素转换成第一颜色图像像素后的像素值。
请结合图2及图34,步骤035、步骤037及步骤038均可以由处理器20实现。也即是说处理器20还用于:在待转换的第三颜色图像像素C0在非平坦区时,获取待转换的所述第三颜色图像像素C0的特征方向;及根据在特征方向上与待转换的第三颜色图像像素C0邻近的两个所述第一颜色图像像素A的像素值,以获取待转换的第三颜色图像像素C0转换成第一颜色图像像素A0后的像素值。
获取待转换的第三颜色图像像素C0转换成第一颜色图像像素A0后的像素值的具体方法,与上述获取转换的第二颜色图像像素B0转换成第一颜色图像像素A0后的像素值的具体方法相同,在此不作赘述。
在一些实施例中,处理器20在获取第二图像后,任意抓取第二图像内的一个图像像素,并判断抓取的图像像素是否为第二颜色图像像素B或第三颜色图像像素C,若抓取的图像像素是第二颜色图像像素B或第三颜色图像像素C,则处理器20进行图30至图34所述的步骤,以获取待转换的第二颜色图像像素B0或第三颜色图像像素C0转换成第一颜色图像像素A0后的像素值;若抓取的图像像素不是第二颜色图像像素B,也不是第三颜色图像像素C,则直接再抓取另一个图像像素,重复上述步骤直至第二图像中的所有图像像素均被抓取,如此能够对第二图像中的第二颜色图像像素B或第三颜色图像像素C进行处理以转换成第一颜色图像像素。当然,在一些实施例中,处理器20抓取图像是按照一定顺序的,例如先抓取第二图像左上角的第一个图像像素进行判断及处理后,再抓取该图像像素右侧的图像像素直至第一行图像像素均被抓取后,再抓取下一行的图像像素,重复上述步骤直至第二图像中的所有图像像素均被抓取。
请参阅图2及图35,处理器20在获得仅包括第一颜色图像像素A的第三图像后,根据第一图像对所述第三图像进行处理,以获得第二颜色中间图像及第三颜色中间图像。其中,第二颜色中间图像仅包括第二颜色图像像素B,第三颜色中间图像仅包括第三颜色图像像素C。
具体地,请参阅图36,在一些实施例中,步骤04根据第一图像对所述第三图像进行处理,以获得第二颜色中间图像及第三颜色中间图像包括:
041:根据第一图像及第三图像进行双边滤波处理,以获得第二颜色中间图像及第三颜色中间图像。
请结合图2及图36,步骤041可以由处理器20实现。也即是说处理器20还用于根据第一图像及第三图像进行双边滤波处理,以获得第二颜色中间图像及第三颜色中间图像。
具体地,请参阅图37,第一图像包括多个第二颜色图像像素B及多个第三颜色图像像素C,多个第二颜色图像像素B排列形成第二颜色原始图像,多个第三颜色图像像素C排列形成第三颜色原始图像。第二颜色原始图像及第三图像进行双边滤波处理,以获得第二颜色中间图像;第三颜色原始图像及第三图像进行双边滤波处理,以获得第三颜色中间图像。
下面以第二颜色原始图像及第三图像进行双边滤波处理,以获得第二颜色中间图像为例进行说明。在一些实施例中,请参阅图38,联合双边滤波算法为
J_p=(1/k_p)∑_{q∈Ω}I_q×f(||p-q||)×g(||I_p′-I_q′||)
其中k_p=∑_{q∈Ω}f(||p-q||)g(||I_p′-I_q′||),J_p为输出像素值,k_p为权重总和,Ω为滤波窗口,p为待滤波像素点在第二颜色原始图像中的坐标,q为滤波窗口内的像素点在第二颜色原始图像中的坐标,I_q为q点对应的像素值,I_p′为第三图像中与待滤波像素点对应的像素值,I_q′为第三图像中与q点对应的像素值,f、g均为权重分布函数,权重分布函数包括高斯函数。
具体地,联合双边滤波算法通过待滤波像素点p的坐标与滤波窗口内的一个像素点q的坐标的差值确定第一距离权重(f(||p-q||)),图中示例的p点和q点的坐标差值可以为2,通过第三图像中与p点对应的像素值I_p′和与q点对应的像素值I_q′的差值确定第二距离权重(g(||I_p′-I_q′||)),根据滤波窗口内的每个像素点的第一距离权重、第二距离权重、第二颜色原始图像中q点对应的像素值I_q以及权重总和k_p确定输出像素值J_p。
需要说明的是,在第二颜色原始图像中,没有设置第二颜色图像像素的位置,其像素值为0。输出的像素值J_p设置在第二颜色中间图像中与待滤波像素点p对应的位置,并且完成一次输出后,滤波窗口移动至下一个图像像素位置,直至第二颜色原始图像内所有图像像素均完成滤波,如此便获得仅包含第二颜色图像像素的第二颜色中间图像。第三颜色原始图像及第三图像进行双边滤波处理以获得第三颜色中间图像的具体方法,与第二颜色原始图像及第三图像进行双边滤波处理以获得第二颜色中间图像的具体方法相同,在此不作赘述。
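按照上述联合双边滤波公式,可以写出如下示意性Python代码,其中以第三图像作为引导图、以第二颜色原始图像(未设置第二颜色图像像素的位置取0)作为被滤波图像;f、g此处均取高斯函数,radius、sigma_s、sigma_r为示意性假设参数,并非本申请限定的取值。

import numpy as np

def joint_bilateral_filter(sparse, guide, radius=2, sigma_s=2.0, sigma_r=10.0):
    sparse = sparse.astype(np.float64)
    guide = guide.astype(np.float64)
    h, w = sparse.shape
    out = np.zeros_like(sparse)
    for py in range(h):
        for px in range(w):
            num, k_p = 0.0, 0.0
            for qy in range(max(0, py - radius), min(h, py + radius + 1)):
                for qx in range(max(0, px - radius), min(w, px + radius + 1)):
                    f = np.exp(-((py - qy) ** 2 + (px - qx) ** 2) / (2 * sigma_s ** 2))       # 空间距离权重f
                    g = np.exp(-((guide[py, px] - guide[qy, qx]) ** 2) / (2 * sigma_r ** 2))  # 引导图像素差权重g
                    num += f * g * sparse[qy, qx]
                    k_p += f * g
            out[py, px] = num / k_p if k_p > 0 else 0.0   # 输出像素值J_p
    return out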
请参阅图39,处理器20获得第三图像、第二颜色中间图像及第三颜色中间图像后,处理器20将第三图像、第二颜色中间图像及第三颜色中间图像融合以获得目标图像。具体地,获取第二颜色中间图像及第三颜色中间图像上均没有设置图像像素的位置信息,根据位置信息抽取第三图像对应位置的第一颜色图像像素A,再将多个抽取的第一颜色图像像素A、第二颜色中间图像内的多个第二颜色图像像素B、及第三颜色中间图像内的多个第三颜色图像像素C排列,以获得呈拜耳阵列排布的目标图像。
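融合步骤可用如下示意性Python代码表示;此处假设目标图像的拜耳单元为(0,0)、(1,1)位置取第一颜色A、(0,1)位置取第二颜色B、(1,0)位置取第三颜色C的2×2排布,实际排布以具体实现为准,并非本申请限定。

import numpy as np

def fuse_to_bayer(third_image, b_mid, c_mid):
    # third_image:仅含第一颜色像素的第三图像;b_mid、c_mid:第二、第三颜色中间图像
    target = third_image.copy()
    target[0::2, 1::2] = b_mid[0::2, 1::2]   # B位置取第二颜色中间图像的像素值
    target[1::2, 0::2] = c_mid[1::2, 0::2]   # C位置取第三颜色中间图像的像素值
    return target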
本申请实施方式的图像处理方法通过在像素阵列11中增加全色感光像素W,将全色图像像素W插值转换为光谱响应较宽的彩色图像像素以获得第二图像,再对第二图像进行后续处理以获得呈拜耳阵列排布的目标图像。如此,解决了图像处理器不能直接对图像像素呈非拜耳阵列排布的图像进行处理的问题,同时由于像素阵列11中增加了全色感光像素W,能够提高最终获得的图像的解析能力及信噪比,从而可以提升夜晚下的拍照效果。
请参阅图40,本申请还提供一种电子设备1000。本申请实施方式的电子设备1000包括镜头300、壳体200及上述任意一项实施方式的图像处理系统100。镜头300、图像处理系统100与壳体200结合。镜头300与图像处理系统100的图像传感器10配合成像。
电子设备1000可以是手机、平板电脑、笔记本电脑、智能穿戴设备(例如智能手表、智能手环、智能眼镜、智能头盔)、无人机、头显设备等,在此不作限制。
本申请实施方式的电子设备1000通过在图像处理系统100内的像素阵列11中增加全色感光像素W,将全色图像像素W插值转换为光谱响应较宽的彩色图像像素以获得第二图像,再对第二图像进行后续处理以获得呈拜耳阵列排布的目标图像。如此,解决了图像处理器不能直接对图像像素呈非拜耳阵列排布的图像进行处理的问题,同时由于像素阵列11中增加了全色感光像素W,能够提高最终获得的图像的解析能力及信噪比,从而可以提升夜晚下的拍照效果。
请参阅41,本申请还提供一种包含计算机程序的非易失性计算机可读存储介质400。该计算机程序被处理器60执行时,使得处理器60执行上述任意一个实施方式的图像处理方法。
例如,请参阅图1及图41,计算机程序被处理器60执行时,使得处理器60执行以下步骤:
01:像素阵列曝光得到第一图像,第一图像包括由全色感光像素生成的全色图像像素、由第一颜色感光像素生成的第一颜色图像像素、由第二颜色感光像素生成的第二颜色图像像素及由第三颜色感光像素生成的第三颜色图像像素;
02:对第一图像中的全色图像像素进行处理以转换成第一颜色图像像素,以获得第二图像;
03:对第二图像中的第二颜色图像像素及第三颜色图像像素进行处理以转换成第一颜色图像像素,以获得第三图像;
04:根据第一图像对第三图像进行处理,以获得第二颜色中间图像及第三颜色中间图像,第二颜色中间图像包含第二颜色图像像素,第三颜色中间图像包含第三颜色图像像素;及
05:融合第三图像、第二颜色中间图像及第三颜色中间图像以获得目标图像,目标图像包含多个彩色图像像素,多个彩色图像像素呈拜耳阵列排布。
需要说明的是,处理器60可以与设置在图像处理系统100内的处理器20为同一个处理器,处理器60也可以设置在电子设备1000内,即处理器60也可以与设置在图像处理系统100内的处理器20不为同一个处理器,在此不作限制。
在本说明书的描述中,参考术语“一个实施方式”、“一些实施方式”、“示意性实施方式”、“示例”、“具体示例”或“一些示例”等的描述意指结合所述实施方式或示例描述的具体特征、结构、材料或者特点包含于本申请的至少一个实施方式或示例中。在本说明书中,对上述术语的示意性表述不一定指的是相同的实施方式或示例。而且,描述的具体特征、结构、材料或者特点可以在任何的一个或多个实施方式或示例中以合适的方式结合。此外,在不相互矛盾的情况下,本领域的技术人员可以将本说明书中描述的不同实施例或示例以及不同实施例或示例的特征进行结合和组合。
流程图中或在此以其他方式描述的任何过程或方法描述可以被理解为,表示包括一个或更多个用于实现特定逻辑功能或过程的步骤的可执行指令的代码的模块、片段或部分,并且本申请的优选实施方式的范围包括另外的实现,其中可以不按所示出或讨论的顺序,包括根据所涉及的功能按基本同时的方式或按相反的顺序,来执行功能,这应被本申请的实施例所属技术领域的技术人员所理解。
尽管上面已经示出和描述了本申请的实施方式,可以理解的是,上述实施方式是示例性的,不能理解为对本申请的限制,本领域的普通技术人员在本申请的范围内可以对上述实施方式进行变化、修改、替换和变型。

Claims (24)

  1. 一种图像处理方法,用于图像传感器,其特征在于,所述图像传感器包括像素阵列,所述像素阵列包括多个全色感光像素及多个彩色感光像素,所述彩色感光像素包括具有不同光谱响应的第一颜色感光像素、第二颜色感光像素及第三颜色感光像素,所述彩色感光像素具有比所述全色感光像素更窄的光谱响应,且所述第二颜色感光像素及所述第三颜色感光像素均具有比第一颜色感光像素更窄的光谱响应;所述图像处理方法包括:
    所述像素阵列曝光得到第一图像,所述第一图像包括由所述全色感光像素生成的全色图像像素、由第一颜色感光像素生成的第一颜色图像像素、由第二颜色感光像素生成的第二颜色图像像素及由第三颜色感光像素生成的第三颜色图像像素;
    对所述第一图像中的所述全色图像像素进行处理以转换成第一颜色图像像素,以获得第二图像;
    对所述第二图像中的所述第二颜色图像像素及所述第三颜色图像像素进行处理以转换成第一颜色图像像素,以获得第三图像;
    根据所述第一图像对所述第三图像进行处理,以获得第二颜色中间图像及第三颜色中间图像,所述第二颜色中间图像包含所述第二颜色图像像素,第三颜色中间图像包含所述第三颜色图像像素;及
    融合所述第三图像、所述第二颜色中间图像及所述第三颜色中间图像以获得目标图像,所述目标图像包含多个彩色图像像素,多个所述彩色图像像素呈拜耳阵列排布。
  2. 根据权利要求1所述的图像处理方法,其特征在于,所述对所述第一图像中的所述全色图像像素进行处理以转换成第一颜色图像像素,以获得第二图像,包括:
    在所述全色图像像素在平坦区时,预设以待转换的所述全色图像像素为中心的第一计算窗口;
    获取所述第一计算窗口内所有像素的像素值;及
    根据所述第一计算窗口内所有像素的像素值、待转换的所述全色图像像素的像素值、预设的第一权值矩阵、预设的第二权值矩阵以获取待转换的所述全色图像像素转换成第一颜色图像像素后的像素值。
  3. 根据权利要求1所述的图像处理方法,其特征在于,所述对所述第一图像中的所述全色图像像素进行处理以转换成第一颜色图像像素,以获得第二图像,包括:
    在所述全色图像像素在非平坦区时,获取待转换的所述全色图像像素的特征方向;
    在所述特征方向为第一方向,且在所述特征方向上最邻近的所述第一颜色图像像素在待转换的所述全色图像像素第一侧时,根据待转换的所述全色图像像素的像素值、在所述第一侧与待转换的所述全色图像像素相邻的一个所述全色图像像素的像素值获取第一偏差,并根据待转换的所述全色图像像素的像素值、在第二侧与待转换的所述全色图像像素相邻的两个所述全色图像像素的像素值获取第二偏差,所述第一侧与所述第二侧相背;
    根据所述第一偏差及预设的权重函数获取第一权重,并根据所述第二偏差及所述权重函数获取第二权重;及
    根据所述第一权重、所述第二权重、在所述第一侧与待转换的所述全色图像像素最相邻的一个所述第一颜色图像像素的像素值、在所述第二侧与待转换的所述全色图像像素相邻的一个所述第一颜色图像像素的像素值,获取待转换的所述全色图像像素转换成第一颜色图像像素后的像素值。
  4. 根据权利要求1所述的图像处理方法,其特征在于,所述对所述第一图像中的所述全色图像像素进行处理以转换成第一颜色图像像素,以获得第二图像,包括:
    在所述全色图像像素在非平坦区时,获取待转换的所述全色图像像素的特征方向;
    在所述特征方向为第一方向,且在所述特征方向上最邻近的所述第一颜色图像像素在待转换的所述全色图像像素第二侧时,根据待转换的所述全色图像像素的像素值、在所述第二侧与待转换的所述全色图像像素相邻的一个所述全色图像像素的像素值获取第三偏差,并根据待转换的所述全色图像像素的像素值、在第一侧与待转换的所述全色图像像素相邻的两个所述全色图像像素的像素值获取第四偏差,所述第一侧与所述第二侧相背;
    根据所述第三偏差及预设的权重函数获取第三权重,并根据所述第四偏差及所述权重函数获取第四权重;及
    根据所述第三权重、所述第四权重、在所述第一侧与待转换的所述全色图像像素相邻的一个所述第一颜色图像像素的像素值、在所述第二侧与待转换的所述全色图像像素最相邻的一个所述第一颜色图像像素的像素值获取待转换的所述全色图像像素转换成第一颜色图像像素后的像素值。
  5. 根据权利要求1所述的图像处理方法,其特征在于,所述对所述第一图像中的所述全色图像像素进行处理以转换成第一颜色图像像素,包括:
    在所述全色图像像素在非平坦区时,获取待转换的所述全色图像像素的特征方向;
    在所述特征方向为第二方向,预设以待转换的所述全色图像像素为中心的第二计算窗口;
    获取所述第二计算窗口内所有像素的像素值;及
    根据所述第二计算窗口内所有像素的像素值、待转换的所述全色图像像素的像素值、预设的第三权值矩阵、预设的第四权值矩阵以获取待转换的所述全色图像像素转换成第一颜色图像像素后的像素值。
  6. 根据权利要求1所述的图像处理方法,其特征在于,所述对所述第一图像中的所述全色图像像素进行处理以转换成第一颜色图像像素,包括:
    在所述全色图像像素在非平坦区时,获取待转换的所述全色图像像素的特征方向;
    在所述特征方向为第三方向,预设以待转换的所述全色图像像素为中心的第三计算窗口;
    获取所述第三计算窗口内所有像素的像素值,并根据所述第一颜色图像像素周围的多个全色图像像素的像素值,以获取在所述第三计算窗口内所有第一颜色图像像素的转换像素值;
    根据所述第一颜色图像像素的所述转换像素值、所述待转换的所述全色图像像素的像素值及预设的权重函数获取第五权值矩阵;及
    根据所述第一颜色图像像素的所述转换像素值、所述第五权值矩阵及距离权重获取待转换的所述全色图像像素转换成第一颜色图像像素后的像素值。
  7. 根据权利要求3-6任意一项所述的图像处理方法,其特征在于,在所述全色图像像素在非平坦区时,获取待转换的所述全色图像像素的特征方向包括:
    获取待转换的所述全色图像像素的多个方向上的梯度值,选择最小的所述梯度值对应的方向为所述全色图像像素的特征方向。
  8. 根据权利要求1所述的图像处理方法,其特征在于,所述对所述第二图像中的所述第二颜色图像像素及所述第三颜色图像像素进行处理以转换成第一颜色图像像素,以获得第三图像包括:
    在所述第二颜色图像像素在平坦区时,根据待转换的所述第二颜色图像像素多个方向上相邻的所述第一颜色图像像素的像素值,以获取待转换的所述第二颜色图像像素转换成第一颜色图像像素后的像素值;和/或
    在所述第三颜色图像像素在平坦区时,根据待转换的所述第三颜色图像像素多个方向上相邻的所述第一颜色图像像素的像素值,以获取待转换的所述第三颜色图像像素转换成第一颜色图像像素后的像素值。
  9. 根据权利要求1所述的图像处理方法,其特征在于,所述对所述第二图像中的所述第二颜色图像像素及所述第三颜色图像像素进行处理以转换成第一颜色图像像素,以获得第三图像包括:
    在所述第二颜色图像像素在非平坦区时,获取待转换的所述第二颜色图像像素的特征方向;及
    根据在所述特征方向上与待转换的所述第二颜色图像像素邻近的两个所述第一颜色图像像素的像素值,以获取待转换的所述第二颜色图像像素转换成第一颜色图像像素后的像素值;和/或
    在所述第三颜色图像像素在非平坦区时,获取待转换的所述第三颜色图像像素的特征方向;及
    根据在所述特征方向上与待转换的所述第三颜色图像像素邻近的两个所述第一颜色图像像素的像素值,以获取待转换的所述第三颜色图像像素转换成第一颜色图像像素后的像素值。
  10. 根据权利要求9所述的图像处理方法,其特征在于,
    所述在所述第二颜色图像像素在非平坦区时,获取待转换的所述第二颜色图像像素的特征方向包括:
    获取待转换的所述第二颜色图像像素的多个方向上的梯度值,选择最小的所述梯度值对应的方向为所述第二颜色图像像素的特征方向;
    所述在所述第三颜色图像像素在非平坦区时,获取待转换的所述第三颜色图像像素的特征方向包括:
    获取待转换的所述第三颜色图像像素的多个方向上的梯度值,选择最小的所述梯度值对应的方向为所述第三颜色图像像素的特征方向。
  11. 根据权利要求1所述的图像处理方法,其特征在于,所述根据所述第一图像对所述第三图像进行处理,以获得第二颜色中间图像及第三颜色中间图像包括:
    根据所述第一图像对所述第三图像进行双边滤波处理,以获得第二颜色中间图像及第三颜色中间图像。
  12. 一种图像处理系统,其特征在于,包括:
    图像传感器包括像素阵列,所述像素阵列包括多个全色感光像素及多个彩色感光像素,所述彩色感光像素包括具有不同光谱响应的第一颜色感光像素、第二颜色感光像素及第三颜色感光像素,所述彩色感光像素具有比所述全色感光像素更窄的光谱响应,且所述第二颜色感光像素及所述第三颜色感光像素均具有比第一颜色感光像素更窄的光谱响应;
    所述像素阵列曝光得到第一图像,所述第一图像包括由所述全色感光像素生成的全色图像像素、由第一颜色感光像素生成的第一颜色图像像素、由第二颜色感光像素生成的第二颜色图像像素及由第三颜色感光像素生成的第三颜色图像像素;及
    处理器,所述处理器用于:
    对所述第一图像中的所述全色图像像素进行处理以转换成第一颜色图像像素,以获得第二图像;
    对所述第二图像中的所述第二颜色图像像素及所述第三颜色图像像素进行处理以转换成第一颜色图像像素,以获得第三图像;
    根据所述第一图像对所述第三图像进行处理,以获得第二颜色中间图像及第三颜色中间图像,所述第二颜色中间图像包含所述第二颜色图像像素,第三颜色中间图像包含所述第三颜色图像像素;及
    融合所述第三图像、所述第二颜色中间图像及所述第三颜色中间图像以获得目标图像,所述目标图像包含多个彩色图像像素,多个所述彩色图像像素呈拜耳阵列排布。
  13. 根据权利要求12所述的图像处理系统,其特征在于,所述处理器还用于:
    在所述全色图像像素在平坦区时,预设以待转换的所述全色图像像素为中心的第一计算窗口;
    获取所述第一计算窗口内所有像素的像素值;及
    根据所述第一计算窗口内所有像素的像素值、待转换的所述全色图像像素的像素值、预设的第一权值矩阵、预设的第二权值矩阵以获取待转换的所述全色图像像素转换成第一颜色图像像素后的像素值。
  14. 根据权利要求12所述的图像处理系统,其特征在于,所述处理器还用于:
    在所述全色图像像素在非平坦区时,获取待转换的所述全色图像像素的特征方向;
    在所述特征方向为第一方向,且在所述特征方向上最邻近的所述第一颜色图像像素在待转换的所述全色图像像素第一侧时,根据待转换的所述全色图像像素的像素值、在所述第一侧与待转换的所述全色图像像素相邻的一个所述全色图像像素的像素值获取第一偏差,并根据待转换的所述全色图像像素的像素值、在第二侧与待转换的所述全色图像像素相邻的两个所述全色图像像素的像素值获取第二偏差,所述第一侧与所述第二侧相背;
    根据所述第一偏差及预设的权重函数获取第一权重,并根据所述第二偏差及所述权重函数获取第二权重;及
    根据所述第一权重、所述第二权重、在所述第一侧与待转换的所述全色图像像素最相邻的一个所述第一颜色图像像素的像素值、在所述第二侧与待转换的所述全色图像像素相邻的一个所述第一颜色图像像素的像素值,获取待转换的所述全色图像像素转换成第一颜色图像像素后的像素值。
  15. 根据权利要求12所述的图像处理系统,其特征在于,所述处理器还用于:
    在所述全色图像像素在非平坦区时,获取待转换的所述全色图像像素的特征方向;
    在所述特征方向为第一方向,且在所述特征方向上最邻近的所述第一颜色图像像素在待转换的所述全色图像像素第二侧时,根据待转换的所述全色图像像素的像素值、在所述第二侧与待转换的所述全色图像像素相邻的一个所述全色图像像素的像素值获取第三偏差,并根据待转换的所述全色图像像素的像素值、在第一侧与待转换的所述全色图像像素相邻的两个所述全色图像像素的像素值获取第四偏差,所述第一侧与所述第二侧相背;
    根据所述第三偏差及预设的权重函数获取第三权重,并根据所述第四偏差及所述权重函数获取第四权重;及
    根据所述第三权重、所述第四权重、在所述第一侧与待转换的所述全色图像像素相邻的一个所述第一颜色图像像素的像素值、在所述第二侧与待转换的所述全色图像像素最相邻的一个所述第一颜色图像像素的像素值获取待转换的所述全色图像像素转换成第一颜色图像像素后的像素值。
  16. 根据权利要求12所述的图像处理系统,其特征在于,所述处理器还用于:
    在所述全色图像像素在非平坦区时,获取待转换的所述全色图像像素的特征方向;
    在所述特征方向为第二方向,预设以待转换的所述全色图像像素为中心的第二计算窗口;
    获取所述第二计算窗口内所有像素的像素值;及
    根据所述第二计算窗口内所有像素的像素值、待转换的所述全色图像像素的像素值、预设的第三权值矩阵、预设的第四权值矩阵以获取待转换的所述全色图像像素转换成第一颜色图像像素后的像素值。
  17. 根据权利要求12所述的图像处理系统,其特征在于,所述处理器还用于:
    在所述全色图像像素在非平坦区时,获取待转换的所述全色图像像素的特征方向;
    在所述特征方向为第三方向,预设以待转换的所述全色图像像素为中心的第三计算窗口;
    获取所述第三计算窗口内所有像素的像素值,并根据所述第一颜色图像像素周围的多个全色图像像素的像素值,以获取在所述第三计算窗口内所有第一颜色图像像素的转换像素值;
    根据所述第一颜色图像像素的所述转换像素值、所述待转换的所述全色图像像素的像素值及预设的权重函数获取第五权值矩阵;
    根据所述第一颜色图像像素的所述转换像素值、所述第五权值矩阵及距离权重获取待转换的所述全色图像像素转换成第一颜色图像像素后的像素值。
  18. 根据权利要求14-17任意一项所述的图像处理系统,其特征在于,所述处理器还用于:
    获取待转换的所述全色图像像素的多个方向上的梯度值,选择最小的所述梯度值对应的方向为所述全色图像像素的特征方向。
  19. 根据权利要求12所述的图像处理系统,其特征在于,所述处理器还用于:
    在所述第二颜色图像像素在平坦区时,根据待转换的所述第二颜色图像像素多个方向上相邻的所述第一颜色图像像素的像素值,以获取待转换的所述第二颜色图像像素转换成第一颜色图像像素后的像素值;和/或
    在所述第三颜色图像像素在平坦区时,根据待转换的所述第三颜色图像像素多个方向上相邻的所述第一颜色图像像素的像素值,以获取待转换的所述第三颜色图像像素转换成第一颜色图像像素后的像素值。
  20. 根据权利要求12所述的图像处理系统,其特征在于,所述处理器还用于:
    在所述第二颜色图像像素在非平坦区时,获取待转换的所述第二颜色图像像素的特征方向;
    根据在所述特征方向上与待转换的所述第二颜色图像像素邻近的两个所述第一颜色图像像素的像素值,以获取待转换的所述第二颜色图像像素转换成第一颜色图像像素后的像素值;和/或
    在所述第三颜色图像像素在非平坦区时,获取待转换的所述第三颜色图像像素的特征方向;
    根据在所述特征方向上与待转换的所述第三颜色图像像素邻近的两个所述第一颜色图像像素的像素值,以获取待转换的所述第三颜色图像像素转换成第一颜色图像像素后的像素值。
  21. 根据权利要求20所述的图像处理系统,其特征在于,所述处理器还用于:
    获取待转换的所述第二颜色图像像素的多个方向上的梯度值,选择最小的所述梯度值对应的方向为所述第二颜色图像像素的特征方向;及
    获取待转换的所述第三颜色图像像素的多个方向上的梯度值,选择最小的所述梯度值对应的方向为所述第三颜色图像像素的特征方向。
  22. 根据权利要求12所述的图像处理系统,其特征在于,所述处理器还用于:
    根据所述第一图像对所述第三图像进行双边滤波处理,以获得第二颜色中间图像及第三颜色中间图像。
  23. 一种电子设备,其特征在于,包括:
    镜头;
    壳体;及
    权利要求12至22任意一项所述的图像处理系统,所述镜头、所述图像处理系统与所述壳体结合,所述镜头与所述图像处理系统的图像传感器配合成像。
  24. 一种包含计算机程序的非易失性计算机可读存储介质,其特征在于,所述计算机程序被处理器执行时,使得所述处理器执行权利要求1至11任意一项所述的图像处理方法。
PCT/CN2020/120025 2020-08-18 2020-10-09 图像处理方法、图像处理系统、电子设备及可读存储介质 WO2022036817A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP20950021.4A EP4202822A4 (en) 2020-08-18 2020-10-09 IMAGE PROCESSING METHOD, IMAGE PROCESSING SYSTEM, ELECTRONIC DEVICE AND READABLE STORAGE MEDIUM
US18/146,949 US11758289B2 (en) 2020-08-18 2022-12-27 Image processing method, image processing system, electronic device, and readable storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010833968.8A CN111899178B (zh) 2020-08-18 2020-08-18 图像处理方法、图像处理系统、电子设备及可读存储介质
CN202010833968.8 2020-08-18

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/146,949 Continuation US11758289B2 (en) 2020-08-18 2022-12-27 Image processing method, image processing system, electronic device, and readable storage medium

Publications (1)

Publication Number Publication Date
WO2022036817A1 true WO2022036817A1 (zh) 2022-02-24

Family

ID=73229742

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/120025 WO2022036817A1 (zh) 2020-08-18 2020-10-09 图像处理方法、图像处理系统、电子设备及可读存储介质

Country Status (4)

Country Link
US (1) US11758289B2 (zh)
EP (1) EP4202822A4 (zh)
CN (1) CN111899178B (zh)
WO (1) WO2022036817A1 (zh)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112738493B (zh) * 2020-12-28 2023-03-14 Oppo广东移动通信有限公司 图像处理方法、图像处理装置、电子设备及可读存储介质
CN112822475B (zh) * 2020-12-28 2023-03-14 Oppo广东移动通信有限公司 图像处理方法、图像处理装置、终端及可读存储介质
CN112702543B (zh) * 2020-12-28 2021-09-17 Oppo广东移动通信有限公司 图像处理方法、图像处理系统、电子设备及可读存储介质
CN115767290B (zh) * 2022-09-28 2023-09-29 荣耀终端有限公司 图像处理方法和电子设备

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080130991A1 (en) * 2006-11-30 2008-06-05 O'brien Michele Processing images having color and panchromatic pixels
CN110784634A (zh) * 2019-11-15 2020-02-11 Oppo广东移动通信有限公司 图像传感器、控制方法、摄像头组件及移动终端
CN111405204A (zh) * 2020-03-11 2020-07-10 Oppo广东移动通信有限公司 图像获取方法、成像装置、电子设备及可读存储介质
CN111491111A (zh) * 2020-04-20 2020-08-04 Oppo广东移动通信有限公司 高动态范围图像处理系统及方法、电子设备和可读存储介质

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7095420B2 (en) * 2002-10-17 2006-08-22 Lockheed Martin Corporation System and related methods for synthesizing color imagery
US8253832B2 (en) * 2009-06-09 2012-08-28 Omnivision Technologies, Inc. Interpolation for four-channel color filter array
JP5724185B2 (ja) * 2010-03-04 2015-05-27 ソニー株式会社 画像処理装置、および画像処理方法、並びにプログラム
JP2013066146A (ja) 2011-08-31 2013-04-11 Sony Corp 画像処理装置、および画像処理方法、並びにプログラム
CN104170376B (zh) * 2012-03-27 2016-10-19 索尼公司 图像处理设备、成像装置及图像处理方法
US9697434B2 (en) * 2014-12-10 2017-07-04 Omnivision Technologies, Inc. Edge detection system and methods
CN107045715B (zh) * 2017-02-22 2019-06-07 西南科技大学 一种单幅低动态范围图像生成高动态范围图像的方法
EP3610453B1 (en) * 2017-05-16 2023-05-10 Apple Inc. Synthetic long exposure image with optional enhancement using a guide image

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080130991A1 (en) * 2006-11-30 2008-06-05 O'brien Michele Processing images having color and panchromatic pixels
CN110784634A (zh) * 2019-11-15 2020-02-11 Oppo广东移动通信有限公司 图像传感器、控制方法、摄像头组件及移动终端
CN111405204A (zh) * 2020-03-11 2020-07-10 Oppo广东移动通信有限公司 图像获取方法、成像装置、电子设备及可读存储介质
CN111491111A (zh) * 2020-04-20 2020-08-04 Oppo广东移动通信有限公司 高动态范围图像处理系统及方法、电子设备和可读存储介质

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP4202822A4 *

Also Published As

Publication number Publication date
EP4202822A1 (en) 2023-06-28
CN111899178A (zh) 2020-11-06
EP4202822A4 (en) 2024-03-27
US20230164450A1 (en) 2023-05-25
CN111899178B (zh) 2021-04-16
US11758289B2 (en) 2023-09-12

Similar Documents

Publication Publication Date Title
WO2022007469A1 (zh) 图像获取方法、摄像头组件及移动终端
WO2021179806A1 (zh) 图像获取方法、成像装置、电子设备及可读存储介质
WO2022036817A1 (zh) 图像处理方法、图像处理系统、电子设备及可读存储介质
WO2021196553A1 (zh) 高动态范围图像处理系统及方法、电子设备和可读存储介质
CN111314592B (zh) 图像处理方法、摄像头组件及移动终端
WO2021212763A1 (zh) 高动态范围图像处理系统及方法、电子设备和可读存储介质
WO2021063162A1 (zh) 图像传感器、摄像头组件及移动终端
WO2021179805A1 (zh) 图像传感器、摄像头组件、移动终端及图像获取方法
CN110784634B (zh) 图像传感器、控制方法、摄像头组件及移动终端
WO2022007215A1 (zh) 图像获取方法、摄像头组件及移动终端
WO2021159944A1 (zh) 图像传感器、摄像头组件及移动终端
US20220336508A1 (en) Image sensor, camera assembly and mobile terminal
CN112738493B (zh) 图像处理方法、图像处理装置、电子设备及可读存储介质
WO2021062661A1 (zh) 图像传感器、摄像头组件及移动终端
CN112822475B (zh) 图像处理方法、图像处理装置、终端及可读存储介质
US20220139974A1 (en) Image sensor, camera assembly, and mobile terminal
CN111835971B (zh) 图像处理方法、图像处理系统、电子设备及可读存储介质
WO2021046691A1 (zh) 图像采集方法、摄像头组件及移动终端
CN111031297A (zh) 图像传感器、控制方法、摄像头组件和移动终端
WO2022141743A1 (zh) 图像处理方法、图像处理系统、电子设备及可读存储介质
CN112738494B (zh) 图像处理方法、图像处理系统、终端设备及可读存储介质
US20220279108A1 (en) Image sensor and mobile terminal
CN112235485A (zh) 图像传感器、图像处理方法、成像装置、终端及可读存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20950021

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2020950021

Country of ref document: EP

Effective date: 20230320