WO2022160995A1 - Image sensor, camera, electronic device and control method - Google Patents

Image sensor, camera, electronic device and control method

Info

Publication number: WO2022160995A1
Authority: WO (WIPO PCT)
Prior art keywords: image, filter, images, frames, camera
Application number: PCT/CN2021/138512
Other languages: English (en), French (fr)
Inventors: 孙岳川, 胡佳, 郭勇, 王妙锋
Original Assignee: Huawei Technologies Co., Ltd. (华为技术有限公司)
Application filed by Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority to EP21922585.1A (published as EP4262184A4)
Publication of WO2022160995A1

Classifications

    • H04N23/55: Optical parts specially adapted for electronic image sensors; mounting thereof
    • G02B5/201: Filters in the form of arrays
    • H04N23/10: Cameras or camera modules comprising electronic image sensors; control thereof for generating image signals from different wavelengths
    • H04N23/13: Generating image signals from different wavelengths with multiple sensors
    • H04N23/15: Image signal generation with circuitry for avoiding or correcting image misregistration
    • H04N23/45: Generating image signals from two or more image sensors being of different type or operating in different modes, e.g. with a CMOS sensor for moving images in combination with a charge-coupled device [CCD] for still images
    • H04N23/6845: Vibration or motion blur correction performed by controlling the image sensor readout, e.g. by controlling the integration time, by combination of a plurality of images sequentially taken
    • H04N23/843: Demosaicing, e.g. interpolating colour pixel values
    • H04N23/88: Colour balance in camera processing pipelines, e.g. white-balance circuits or colour temperature control
    • H04N25/11: Arrangement of colour filter arrays [CFA]; filter mosaics
    • H04N25/133: Colour filter arrays including elements passing panchromatic light, e.g. filters passing white light
    • H04N25/134: Colour filter arrays based on three different wavelength filter elements
    • H04N9/64: Circuits for processing colour signals
    • H01L27/14621: Colour filter arrangements (imager structures)
    • H04N9/73: Colour balance circuits, e.g. white balance circuits or colour temperature control

Definitions

  • the present application relates to the technical field of image processing, and in particular, to an image sensor, a camera, an electronic device and a control method.
  • the present application provides an image sensor, a camera, an electronic device and a control method, which are used to improve the problems of low signal-to-noise ratio and poor detail resolution of an image obtained by a color camera in a dark light scene.
  • an image sensor includes a first filter and a plurality of photoelectric conversion elements.
  • the first filter includes a plurality of first filter units arranged in an array.
  • the first filter unit includes a plurality of filter blocks arranged in an array.
  • the plurality of filter blocks include a first transparent filter block, a second transparent filter block and at least two color filter components.
  • Each color filter assembly includes at least three color filter blocks for respectively transmitting three primary colors of light.
  • the first transparent filter blocks and the color filter blocks are alternately arranged.
  • the second transparent filter block is disposed around the area where the first transparent filter block and the color filter block are located.
  • each filter block covers a photoelectric conversion element, and the photoelectric conversion element is used to convert the light passing through the filter block into an electrical signal.
  • the above-mentioned image sensor provided by the embodiments of the present application has a first filter, and the first filter unit, which is the smallest repeating unit of the first filter, includes a plurality of color filter blocks for acquiring color information.
  • the above-mentioned plurality of color filter blocks can constitute at least two groups of color filter components, for example, a first filter component and a second filter component. Any one of the first filter component and the second filter component may include a blue filter block, a red filter block, and two green filter blocks, so that color information of the three primary colors can be obtained.
  • the first filter unit also has a plurality of first transparent filter blocks and a plurality of second transparent filter blocks for increasing the light input of the image sensor.
  • the first transparent filter blocks and the color filter blocks are alternately arranged, so as to increase the amount of light entering the area where the color filter blocks are located.
  • the second transparent filter block is arranged at the periphery of the area where the first transparent filter block and the color filter block are located, so that by setting the number of second transparent filter blocks, the number of transparent filter blocks in the entire first filter can be increased, thereby increasing the amount of light entering the image sensor.
  • compared with a black-and-white camera, the first camera with the above-mentioned image sensor can obtain more color information by means of the above-mentioned color filter blocks, so that the photos taken by the first camera can have a higher degree of color reproduction.
  • in addition, the first camera can obtain a higher amount of light entering the image sensor by arranging the first transparent filter block and the second transparent filter block, so as to improve the detail resolution of pictures taken by the first camera in a dark environment and improve the signal-to-noise ratio of the image.
  • the at least two color filter components include a first filter component and a second filter component.
  • a plurality of first transparent filter blocks and a plurality of color filter blocks are arranged in the form of a 4×4 matrix.
  • Any one of the first filter assembly and the second filter assembly includes a red filter block, a blue filter block and two green filter blocks.
  • the number of green filter blocks in each filter assembly is more than the number of red filter blocks and blue filter blocks, so as to meet the characteristics that human eyes are more sensitive to green.
  • the red filter block and the blue filter block in the first filter assembly and the red filter block and the blue filter block in the second filter assembly are arranged in sequence.
  • the two green filter blocks in the first filter assembly and the two green filter blocks in the second filter assembly are respectively distributed on both sides of the diagonal line of the 4×4 matrix.
  • the number of green filter blocks in each filter assembly is more than the number of red filter blocks and blue filter blocks, so as to meet the characteristics that human eyes are more sensitive to green.
  • the blue filter block in the first filter assembly, the blue filter block in the second filter assembly, and the red filter block in the first filter assembly are arranged in sequence.
  • the two green filter blocks in the first filter assembly and the two green filter blocks in the second filter assembly are respectively distributed on both sides of the diagonal line of the 4×4 matrix.
  • the number of green filter blocks in each filter assembly is more than the number of red filter blocks and blue filter blocks, so as to meet the characteristics that human eyes are more sensitive to green.
  • the two green filter blocks in the first filter assembly and the two green filter blocks in the second filter assembly are arranged in sequence.
  • the blue filter block in the first filter assembly, the blue filter block in the second filter assembly, the red filter block in the first filter assembly, and the red filter block in the second filter assembly are respectively distributed on both sides of the diagonal of the 4×4 matrix.
  • the number of green filter blocks in each filter assembly is more than the number of red filter blocks and blue filter blocks, so as to meet the characteristics that human eyes are more sensitive to green.
  • the plurality of filter blocks in the first filter unit are arranged in the form of an S×J matrix, where S ≥ 4, J ≥ 4, and S and J are even numbers.
  • if S or J were less than 4, the number of transparent filter blocks in the first filter unit would be small, which would affect the amount of light entering the image sensor and would not be conducive to improving the quality of images captured in a dark environment.
  • This application does not limit the upper limit of S and J.
  • the camera includes a lens assembly and any one of the above-mentioned image sensors, and the lens assembly is disposed on the light incident side of the image sensor.
  • the camera has the same technical effect as the image sensor provided in the foregoing embodiment, and details are not described herein again.
  • the electronic device includes a processor and a first camera electrically connected to the processor.
  • the first camera is the aforementioned camera.
  • the electronic device has the same technical effect as the camera provided in the foregoing embodiment, and details are not described herein again.
  • the electronic device further includes a second camera electrically connected to the processor.
  • the second camera includes a second filter.
  • the second filter includes a plurality of second filter units arranged in an array, and the second filter unit includes a red filter block, a blue filter block and two green filter blocks arranged in a 2×2 matrix.
  • the two green filter blocks are arranged along one diagonal of the 2×2 matrix, and the red filter block and the blue filter block are arranged along the other diagonal of the 2×2 matrix.
  • the method is applied to a processor in any of the above electronic devices.
  • the method includes: first, receiving a user operation, where the user operation is used to trigger the first camera to collect N frames of original Raw domain images, where N ≥ 2 and N is an integer.
  • one frame of images is selected from the N frames of Raw domain images as the first reference image, and the remaining images are the first images to be registered.
  • a first optical flow value between each frame of the first image to be registered and the first reference image is calculated. According to the first optical flow value, using the first reference image as a reference, each first image to be registered is deformed and registered to the first reference image.
  • the N frames of Raw domain images are demosaiced to obtain three primary color RGB domain images.
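  • As an illustration of this registration-then-demosaic flow, the following minimal Python sketch warps each to-be-registered frame onto the reference frame using dense optical flow. It assumes single-channel 8-bit frames and uses OpenCV's Farneback method for concreteness (the text cites Lucas-Kanade only as one example of an optical flow algorithm); all function and variable names here are illustrative, not from the application.

```python
import cv2
import numpy as np

def register_to_reference(frames):
    """Warp every frame onto frames[0] (the first reference image)."""
    reference = frames[0]
    registered = [reference]
    h, w = reference.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    for frame in frames[1:]:
        # First optical flow value between the to-be-registered frame
        # and the reference frame (dense flow, one vector per pixel).
        flow = cv2.calcOpticalFlowFarneback(
            reference, frame, None, pyr_scale=0.5, levels=3, winsize=15,
            iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
        # Deform (warp) the frame so its displacement relative to the
        # reference becomes approximately zero.
        map_x = (grid_x + flow[..., 0]).astype(np.float32)
        map_y = (grid_y + flow[..., 1]).astype(np.float32)
        registered.append(cv2.remap(frame, map_x, map_y, cv2.INTER_LINEAR))
    return np.stack(registered, axis=-1)  # H x W x N stack for demosaicing
```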
  • the image processing method has the same technical effect as the electronic device provided by the foregoing embodiments, and details are not described herein again.
  • selecting one frame of image from the N frames of Raw domain images as the first reference image, with the remaining images being the first images to be registered, includes: selecting the first frame of Raw domain image from the N frames of Raw domain images as the first reference image, with the remaining N-1 frames being the first images to be registered. The image the user sees on the screen of the mobile phone when triggering the camera button, that is, the first frame captured by the first camera, is the image the user expects to be presented. Taking the first frame of the N frames of Raw domain images as the first reference image therefore allows the remaining N-1 frames of Raw domain images to be aligned, in the subsequent registration process, with the first frame image that the user expects.
  • in some embodiments, 2 ≤ N ≤ 8.
  • the larger N is, the greater the proportion of color filter blocks in the region where the first filter unit is located after the N frames of Raw domain images are superimposed, and the more color information is obtained by the image sensor.
  • therefore, the value of N can be set within the range of 2 ≤ N ≤ 8.
  • the method further includes: performing automatic white balance processing, color correction processing, and distortion correction processing on the RGB domain image, so as to improve the quality of the image.
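  • The AWB step can be pictured with a toy gray-world sketch; the application does not specify a particular white balance algorithm, so the rule below is an assumption for illustration only.

```python
import numpy as np

def gray_world_awb(rgb):
    """Scale each channel so its mean matches the green mean (gray-world)."""
    rgb = rgb.astype(np.float32)
    means = rgb.reshape(-1, 3).mean(axis=0)        # per-channel means
    gains = means[1] / np.maximum(means, 1e-6)     # normalize to green
    return np.clip(rgb * gains, 0.0, 255.0).astype(np.uint8)
```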
  • the method is applied to a processor in an electronic device as described above.
  • the method includes: first, receiving a user operation, where the user operation is used to trigger a first camera to collect N frames of first Raw domain images, and trigger a second camera to collect M frames of second Raw domain images; wherein N ≥ 1, M ≥ 1, and N and M are integers.
  • a second optical flow value between the first RGB domain image and the second RGB domain image is calculated, and a first optical flow confidence map is obtained.
  • the first filter unit includes color filter blocks and transparent filter blocks. Therefore, the first RGB domain image obtained by the first camera has better detail resolution and night scene performance.
  • the number of color filter blocks in the first filter unit is limited, so color loss may occur in small objects and areas with rich colors.
  • the second filter unit only has color filter blocks. Therefore, the second RGB domain image obtained by the second camera has more realistic and richer colors, but because no transparent filter block is provided, the detail expression and signal-to-noise ratio of the second RGB domain image are lower. In this way, when the first RGB domain image and the second RGB domain image are registered and fused to form the third RGB domain image, the result can have both the detail resolution and night scene performance of the first RGB domain image and the rich, realistic colors of the second RGB domain image.
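  • One plausible way to realize such a fusion, offered only as a hedged sketch: keep the luminance (detail) of the first RGB domain image and blend in the chrominance of the registered second RGB domain image, falling back to the first image where the optical flow confidence is low (for example, in occluded regions). The blending rule and all names are illustrative assumptions, not the disclosed implementation.

```python
import cv2
import numpy as np

def fuse_rgb1_rgb2(rgb1, rgb2_registered, confidence):
    """rgb1, rgb2_registered: H x W x 3 uint8; confidence: H x W in [0, 1]."""
    ycc1 = cv2.cvtColor(rgb1, cv2.COLOR_RGB2YCrCb).astype(np.float32)
    ycc2 = cv2.cvtColor(rgb2_registered, cv2.COLOR_RGB2YCrCb).astype(np.float32)
    c = confidence[..., None].astype(np.float32)
    fused = ycc1.copy()
    # Keep RGB1's luminance (detail, night-scene performance); take RGB2's
    # chrominance where the flow is trustworthy, RGB1's where it is not.
    fused[..., 1:] = c * ycc2[..., 1:] + (1.0 - c) * ycc1[..., 1:]
    return cv2.cvtColor(fused.astype(np.uint8), cv2.COLOR_YCrCb2RGB)
```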
  • the method further includes: selecting one frame of image from the N frames of first Raw domain images as the third reference image, with the remaining images being the third images to be registered.
  • a third optical flow value between each frame of the third image to be registered and the third reference image is calculated. According to the third optical flow value, using the third reference image as a reference, each third image to be registered is deformed and registered to the third reference image.
  • before performing demosaic processing on the M frames of second Raw domain images to obtain the second RGB domain image, the method further includes: selecting one frame of image from the M frames of second Raw domain images as the fourth reference image, with the remaining images being the fourth images to be registered. A fourth optical flow value between each frame of the fourth image to be registered and the fourth reference image is calculated. According to the fourth optical flow value, using the fourth reference image as a reference, each fourth image to be registered is deformed and registered to the fourth reference image. The technical effect of the registration is the same as described above and will not be repeated here.
  • selecting one image as the second reference image and the other image as the second image to be registered includes: using the first RGB domain image as the second reference image and the second RGB domain image as the second image to be registered.
  • the technical effect of the registration is the same as described above, and will not be repeated here.
  • the above method further includes: performing automatic white balance processing, color correction processing, and distortion correction processing on the fourth RGB domain image, so as to improve the quality of the image captured by the electronic device.
  • the method is applied to a processor in an electronic device as described above.
  • the method includes: first, receiving a user operation, where the user operation is used to trigger a first camera to collect N frames of first Raw domain images, and trigger a second camera to collect M frames of second Raw domain images.
  • N ≥ 1, M ≥ 1, and N and M are integers.
  • the N frames of the first Raw domain image, the M frames of the second Raw domain image, and the second optical flow confidence map are fused, and demosaicing is performed to obtain an RGB domain image.
  • the steps of demosaicing and image fusion are integrated: the N frames of registered first Raw domain images, the M frames of registered and aligned second Raw domain images, and the second optical flow confidence map together serve as the input to a module that has both demosaicing and image fusion functions. Through this module, the detail expressiveness of the registered N frames of first Raw domain images can be combined with the color accuracy of the registered and aligned M frames of second Raw domain images, and, according to the second optical flow confidence map, the occlusion areas caused by the dual cameras can be repaired in a targeted manner.
  • since the registered N frames of first Raw domain images and the registered and aligned M frames of second Raw domain images are demosaiced at the same time, compared with demosaicing them separately, the errors introduced by the demosaicing algorithm can be reduced.
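  • A skeleton of such a combined module is sketched below: the registered Raw stacks and the confidence map are concatenated channel-wise and mapped to a three-channel RGB output. The small convolutional network is a placeholder assumption; the application does not disclose the internal structure of this module.

```python
import torch
import torch.nn as nn

class JointDemosaicFusion(nn.Module):
    """Takes both registered Raw stacks plus the confidence map at once."""

    def __init__(self, n_frames: int, m_frames: int):
        super().__init__()
        in_ch = n_frames + m_frames + 1   # Raw stacks + confidence map
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1),   # three-channel RGB output
        )

    def forward(self, raw1, raw2, conf):
        # raw1: B x N x H x W, raw2: B x M x H x W, conf: B x 1 x H x W
        return self.net(torch.cat([raw1, raw2, conf], dim=1))
```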
  • when N ≥ 2 and M ≥ 2, a fifth optical flow value between the fifth reference image in the N frames of first Raw domain images and the sixth reference image in the M frames of second Raw domain images is calculated.
  • the method further includes: selecting one frame of images from the N frames of the first Raw domain images as the fifth reference image, and the remaining images as the fifth images to be registered.
  • a sixth optical flow value between the fifth image to be registered and the fifth reference image in each frame is calculated.
  • according to the sixth optical flow value, using the fifth reference image as a reference, the fifth image to be registered is deformed and registered to the fifth reference image.
  • one frame of images is selected from the N frames of second Raw domain images as the sixth reference image, and the remaining images are the sixth images to be registered.
  • a seventh optical flow value between the sixth to-be-registered image and the sixth reference image in each frame is calculated.
  • according to the seventh optical flow value, using the sixth reference image as a reference, the sixth image to be registered is deformed and registered to the sixth reference image.
  • the technical effect of the registration is the same as described above, and will not be repeated here.
  • the above method further includes: performing automatic white balance processing, color correction processing, and distortion correction processing on the RGB domain image, so as to improve the quality of the image captured by the electronic device.
  • FIG. 1A is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • FIG. 1B is a schematic structural diagram of another electronic device provided by an embodiment of the present application.
  • FIG. 2A is a schematic structural diagram of the first camera in FIG. 1A or FIG. 1B;
  • FIG. 2B is a schematic structural diagram of a first filter of the image sensor in FIG. 2A;
  • FIG. 2C is a schematic structural diagram of the first filter unit in FIG. 2B;
  • FIG. 2D is another schematic structural diagram of the first filter of the image sensor in FIG. 2A;
  • FIG. 3A is a schematic diagram of a partial structure of the first filter unit in FIG. 2C;
  • FIG. 3B is another schematic diagram of the partial structure of the first filter unit in FIG. 2C;
  • FIG. 3C is another schematic diagram of the partial structure of the first filter unit in FIG. 2C;
  • FIG. 3D is another schematic diagram of the partial structure of the first filter unit in FIG. 2C;
  • FIG. 3E is another schematic diagram of the partial structure of the first filter unit in FIG. 2C;
  • FIG. 4 is a schematic structural diagram of the image sensor in FIG. 2A;
  • FIG. 5A is a schematic diagram of a captured image provided by an embodiment of the present application.
  • FIG. 5B is a schematic diagram of another captured image provided by an embodiment of the present application.
  • FIG. 6 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • FIG. 7 is a flowchart of an image processing method provided by an embodiment of the present application.
  • FIG. 8 is a schematic diagram of an interface display of an electronic device provided by an embodiment of the present application.
  • FIG. 9A is a schematic diagram of a hand shaking trajectory during a user shooting process provided by an embodiment of the present application.
  • FIG. 9B is a schematic diagram of a target image to be captured by the first camera according to an embodiment of the present application.
  • FIGS. 10A, 10B, 10C and 10D are schematic diagrams of the first, second, third and fourth frames of images captured by a first camera provided in an embodiment of the present application;
  • FIG. 11A is a schematic diagram of a superposition effect of multiple frames of images captured by a first camera according to an embodiment of the present application;
  • FIG. 11B is a schematic diagram of another superposition effect of multiple frames of images captured by a first camera according to an embodiment of the present application.
  • FIG. 12A is a schematic structural diagram of another electronic device provided by an embodiment of the present application.
  • FIG. 12B is a schematic structural diagram of the image sensor in the second camera in FIG. 12A;
  • FIG. 13 is a schematic diagram of another interface display of the electronic device provided by the embodiment of the present application.
  • FIG. 14 is a flowchart of another image processing method provided by an embodiment of the present application.
  • FIG. 15A is a schematic diagram of an image captured by the first camera in FIG. 12A;
  • FIG. 15B is a schematic diagram of an image captured by the second camera in FIG. 12A;
  • FIG. 15C is a schematic diagram of optical flow values between the image shown in FIG. 15A and the image shown in FIG. 15B;
  • FIG. 16 is an optical flow confidence map between the image shown in FIG. 15A and the image shown in FIG. 15B;
  • FIG. 17 is a flowchart of another image processing method provided by an embodiment of the present application.
  • the terms "first", "second", etc. are only used for descriptive purposes, and should not be understood as indicating or implying relative importance or implying the number of indicated technical features.
  • a feature defined as "first", "second", etc. may expressly or implicitly include one or more of that feature.
  • orientation terms such as "upper", "lower", "left" and "right" may include, but are not limited to, orientations relative to the schematic placement of components in the drawings. It should be understood that these orientation terms are relative concepts, used for relative description and clarification, and they can change correspondingly as the orientation in which the components are placed in the drawings changes.
  • the term "connection" should be understood in a broad sense. For example, a "connection" may be a fixed connection, a detachable connection, or an integral body; it may be a direct connection, or an indirect connection through an intermediary. An "electrical connection" may be a direct electrical connection or an indirect electrical connection through an intermediate medium.
  • an embodiment of the present application provides an electronic device. The electronic device includes electronic products with camera functions such as a mobile phone, a tablet computer (pad), a TV, a smart wearable product (for example, a smart watch or a smart bracelet), a virtual reality (VR) device, and an augmented reality (AR) device.
  • the embodiments of the present application do not specifically limit the specific form of the above electronic device.
  • the following description is made by taking the electronic device 01 as a mobile phone as shown in FIG. 1A as an example.
  • the electronic device 01 may include a first camera 11 .
  • the first camera 11 can be used as a rear camera, and is disposed on the side where the housing 10 of the electronic device 01 is located.
  • the first camera 11 may be disposed on the side of the electronic device 01 where the image is displayed as a front camera.
  • the first camera 11 may include an image sensor 20a as shown in FIG. 2A and a lens assembly 30 disposed on the light incident side of the image sensor 20a.
  • the lens assembly 30 may include at least two lenses 301 arranged in layers.
  • the lens assembly 30 is capable of converging the external light (indicated by arrows in the figure) and then incident into the image sensor 20a.
  • the image sensor 20a of the first camera 11 may include a first color filter array (CFA) 200 as shown in FIG. 2B .
  • the above-mentioned first filter 200 may include a plurality of first filter units 210 arranged in an array.
  • the first filter unit 210 may include a plurality of filter blocks 211 arranged in an array as shown in FIG. 2C .
  • the above-mentioned plurality of filter blocks 211 may be arranged in the form of an S×J matrix.
  • the plurality of filter blocks 211 in the first filter unit 210 may include a first transparent filter block W1 and a second transparent filter block W2 as shown in FIG. 2C, and a plurality of color filter blocks 212.
  • the color filter blocks 212 and the first transparent filter blocks W1 are alternately arranged.
  • the second transparent filter blocks W2 are disposed around the areas where the plurality of first transparent filter blocks W1 and the color filter blocks 212 are located.
  • the first transparent filter block W1 and the second transparent filter block W2 are used to directly transmit the light passing through the lens 301 in FIG. 2A .
  • the first filter unit 210 can be used as the minimum repeating unit of the first filter 200 .
  • the staggered arrangement of the color filter blocks 212 and the first transparent filter blocks W1 means that, along the horizontal direction X or the vertical direction Y of the matrix formed by the plurality of filter blocks 211 in the first filter unit 210, there is a first transparent filter block W1 between two adjacent color filter blocks 212.
  • the materials of the first transparent filter block W1 and the second transparent filter block W2 may be the same, in which case the transmittances of the first transparent filter block W1 and the second transparent filter block W2 may be the same.
  • alternatively, the materials of the first transparent filter block W1 and the second transparent filter block W2 may be different, which is not limited in this application.
  • the first filter unit 210 may include two color filter components, which are a first filter component 40a and a second filter component 40b, respectively.
  • any one of the first filter component 40a and the second filter component 40b may include at least three color filter blocks for respectively transmitting the three primary colors, for example a red (R) filter block, a blue (B) filter block, and a green (G) filter block. Since human eyes are more sensitive to green, in the same first filter unit 210, the number of green filter blocks G may exceed the number of other color filter blocks.
  • for example, any one of the above-mentioned color filter components may include one blue filter block B, one red filter block R, and two green filter blocks G.
  • the blue filter block B is used to transmit the blue light in the light from the lens 301 in FIG. 2A, and filter out the remaining light.
  • the red filter block R is used to transmit the red light in the light from the lens 301, and filter out the rest of the light.
  • the green filter block G is used to transmit the green light in the light from the lens 301, and filter out the rest of the light.
  • the light from the lens 301 can be divided into three primary colors (R, G, B) after passing through the above-mentioned color filter components, so that the electronic device 01 can obtain an RGB domain image.
  • the color filter blocks 212 in the first filter assembly 40a and the second filter assembly 40b may be arranged in sequence.
  • the two green filter blocks G in the first filter assembly 40a and the two green filter blocks G in the second filter assembly 40b can be respectively distributed on both sides of the diagonal (O1-O1) of the 4×4 matrix.
  • along the direction of the diagonal (O1-O1) of the 4×4 matrix, the blue filter block B in the first filter assembly 40a, the blue filter block B in the second filter assembly 40b, the red filter block R in the first filter assembly 40a, and the red filter block R in the second filter assembly 40b are arranged in sequence.
  • alternatively, along the direction of the diagonal (O1-O1) of the 4×4 matrix, the red filter block R in the first filter assembly 40a, the red filter block R in the second filter assembly 40b, the blue filter block B in the first filter assembly 40a, and the blue filter block B in the second filter assembly 40b are arranged in sequence.
  • in other arrangements, the two green filter blocks G in the first filter assembly 40a and the two green filter blocks G in the second filter assembly 40b can be respectively distributed on both sides of the diagonal (O1-O1) of the 4×4 matrix.
  • alternatively, the two green filter blocks G in the first filter assembly 40a and the two green filter blocks G in the second filter assembly 40b are arranged in sequence, and the blue filter blocks B and the red filter blocks R in the first filter assembly 40a and the second filter assembly 40b may be distributed on both sides of the diagonal (O1-O1) of the 4×4 matrix, respectively.
  • in the figures, the red filter block R in the first filter assembly 40a and the red filter block R in the second filter assembly 40b are located above the diagonal (O1-O1) of the 4×4 matrix, and the blue filter block B in the first filter assembly 40a and the blue filter block B in the second filter assembly 40b are located below the diagonal (O1-O1); this is taken as an example for description.
  • in other embodiments, the red filter block R in the first filter assembly 40a and the red filter block R in the second filter assembly 40b may be located below the diagonal (O1-O1) of the 4×4 matrix, and the blue filter block B in the first filter assembly 40a and the blue filter block B in the second filter assembly 40b may be located above the diagonal (O1-O1).
  • the above arrangements of the color filter blocks 212 in the first filter assembly 40a and the second filter assembly 40b are given by way of example, and are not intended to limit the arrangement of the color filter blocks 212 in the first filter assembly 40a and the second filter assembly 40b.
  • the remaining arrangements of the color filter blocks 212 in the first filter assembly 40a and the second filter assembly 40b will not be repeated here.
  • the first filter 200 includes two color filter components, for example, the first filter component 40a and the second filter component 40b.
  • in other embodiments, the first filter 200 may also include three or more color filter components 40.
  • for ease of description, the following takes the case where the first filter 200 includes a first filter assembly 40a and a second filter assembly 40b, with the color filter blocks 212 arranged in the manner shown in FIG. 3A, as an example.
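  • To make the layout concrete, the sketch below builds one possible 8×8 first filter unit: an inner 4×4 checkerboard of color filter blocks and first transparent filter blocks W1, surrounded by second transparent filter blocks W2. The exact color ordering of FIG. 3A is not reproduced here; the ordering in the `colors` list is an assumption.

```python
import numpy as np

W2, W1, R, G, B = "W2", "W1", "R", "G", "B"

def first_filter_unit(s=8, j=8):
    """One S x J first filter unit (S, J even and >= 4, here 8 x 8)."""
    unit = np.full((s, j), W2, dtype=object)      # ring of W2 blocks
    colors = [B, G, B, G, G, R, G, R]             # assumed ordering
    it = iter(colors)                             # 2 B, 2 R, 4 G in total
    r0, c0 = (s - 4) // 2, (j - 4) // 2           # center the 4 x 4 core
    for r in range(4):
        for c in range(4):
            # checkerboard: color blocks alternate with W1 blocks
            unit[r0 + r, c0 + c] = next(it) if (r + c) % 2 == 0 else W1
    return unit

print(first_filter_unit())
```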
  • the above-mentioned image sensor 20 a further includes a plurality of photoelectric conversion elements 41 .
  • the position of each filter block 211 may correspond to the position of one photoelectric conversion element 41 , so that each filter block 211 can cover one photoelectric conversion element 41 .
  • the photoelectric conversion element 41 can be a photodiode, which is used to convert the light filtered by the light filter block 211 into an electrical signal.
  • the above-mentioned photodiode can be fabricated by a charge-coupled device (CCD) process; it converts the collected optical signal (represented by an arrow in FIG. 4) into an electrical signal, which is then converted into a digital image signal by an amplification and analog-to-digital conversion circuit.
  • the photodiode can be fabricated using a complementary metal oxide semiconductor (CMOS) process.
  • a photodiode fabricated by a CMOS process has both N-type and P-type semiconductors. The current generated by these two semiconductors through complementary effects can be recorded and interpreted by the processing chip, and converted into a digital image signal through an analog-to-digital conversion circuit.
  • the image sensor 20a may include the first optical filter 200 and a plurality of photoelectric conversion elements 41 , and the position of each photoelectric conversion element 41 corresponds to a filter block 211 in the first optical filter 200 .
  • when the filter block 211 is the first transparent filter block W1 or the second transparent filter block W2, the light incident on the filter block 211, such as white (W) light, may pass through entirely or nearly entirely, enter the photoelectric conversion element 41 corresponding to the position of the filter block 211, and be converted into an electrical signal by the photoelectric conversion element 41.
  • when the filter block 211 is one of the above-mentioned color filter blocks, for example the blue filter block B, the red filter block R or the green filter block G, only the portion of the light incident on the filter block 211 (for example, W light) that has the same color as the color filter block 212 can pass through, enter the photoelectric conversion element 41 corresponding to the position of the filter block 211, and be converted into an electrical signal by the photoelectric conversion element 41.
  • one filter block 211 and the photoelectric conversion element 41 corresponding to the position of that filter block 211 may constitute a pixel of the image sensor 20a.
  • the digital signal obtained after analog-to-digital conversion of the electrical signal obtained by each pixel is the raw (Raw) domain image output by the image sensor 20a.
  • since a color filter block 212, for example the blue filter block B, only allows light of one color, such as B light, to pass through, in the Raw domain image the data obtained by the pixel formed by that color filter block 212 and its photoelectric conversion element 41 contains only the corresponding single color information, for example blue. The Raw domain image is therefore a single-channel image.
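  • The single-channel nature of the Raw domain image can be illustrated with a toy capture model in which each pixel records only what its filter block passes; modeling the transparent blocks' response as the mean of R, G and B is a simplification assumed here, and the array names are illustrative.

```python
import numpy as np

def simulate_raw(scene_rgb, cfa):
    """scene_rgb: H x W x 3; cfa: H x W array of 'R'/'G'/'B'/'W1'/'W2'."""
    h, w, _ = scene_rgb.shape
    raw = np.zeros((h, w), dtype=np.float32)
    for name, idx in {"R": 0, "G": 1, "B": 2}.items():
        mask = cfa == name
        raw[mask] = scene_rgb[mask, idx]          # one color per pixel
    transparent = (cfa == "W1") | (cfa == "W2")
    raw[transparent] = scene_rgb[transparent].mean(axis=-1)  # "W" response
    return raw  # single-channel Raw domain image
```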
  • since any one of the first transparent filter block W1 and the second transparent filter block W2 transmits all or almost all of the incident light, for example the W light, in the Raw domain image the data obtained by the pixels formed by the first transparent filter blocks W1 or the second transparent filter blocks W2 of the image sensor 20a and the photoelectric conversion elements 41 corresponding to their positions have no color information.
  • because the first transparent filter block W1 or the second transparent filter block W2 can transmit all or nearly all of the incident light, the greater the number of first transparent filter blocks W1 and second transparent filter blocks W2 in the image sensor 20a, the greater the amount of incoming light the image sensor 20a can obtain through these transparent filter blocks, thereby effectively improving the photoelectric conversion efficiency of the image sensor 20a.
  • in this way, the quality of the captured image, such as the signal-to-noise ratio and image detail resolution, can be improved when the image is captured in a dimly lit environment. For example, when the amount of incoming light of the image sensor 20a is small, the image captured by the image sensor 20a is as shown in FIG. 5A, and the line strips in area B may appear blurred.
  • when the amount of incoming light is large, the image captured by the image sensor 20a is as shown in FIG. 5B: the line strips in area B are clearer, the signal-to-noise ratio is higher, and the image detail resolution is stronger.
  • since a color filter block 212 transmits light of the same color as itself, the more color filter blocks 212 there are, the more color information the image sensor 20a can obtain through them, and the more color information there is in the Raw domain image captured by the first camera 11 having the above-mentioned image sensor 20a. In the process of converting the Raw domain image into an RGB domain image, interpolation estimation is performed on the Raw domain image to calculate, from the known color information in the pixels of the Raw domain image (for example blue information), the other two colors' information (for example red information and green information).
  • in this way, the restoration of the colors of objects in the objective world in the image captured by the first camera 11 can be improved, reducing color casts and chromatic aberrations.
  • in summary, the above-mentioned image sensor 20a has the first filter 200, and the first filter unit 210, the smallest repeating unit of the first filter 200, has the structure shown in FIG. 2C, including a plurality of color filter blocks 212 for obtaining color information.
  • the above-mentioned plurality of color filter blocks 212 may constitute at least two groups of color filter components as shown in FIG. 3A , for example, a first filter component 40a and a second filter component 40b.
  • Any one of the first filter assembly 40a and the second filter assembly 40b may include a blue filter block B, a red filter block R, and two green filter blocks G, so that color information of the three primary colors (R, G, B) can be obtained.
  • the greater the number of the above-mentioned color filter components, the more color information can be obtained by the image sensor 20a.
  • the first filter unit 210 also has a plurality of first transparent filter blocks W1 and a plurality of second transparent filter blocks W2 for increasing the light input amount of the image sensor 20a.
  • the first transparent filter blocks W1 and the color filter blocks 212 are alternately arranged to increase the amount of light entering the area where the color filter blocks 212 are located.
  • the second transparent filter blocks W2 are disposed around the area where the first transparent filter blocks W1 and the color filter blocks 212 are located, so that by setting the number of second transparent filter blocks W2, the number of transparent filter blocks in the entire first filter 200 can be increased, achieving the purpose of increasing the amount of light entering the image sensor 20a.
  • compared with a black-and-white camera, the first camera 11 having the above-mentioned image sensor 20a can obtain more color information by means of the above-mentioned color filter blocks 212, so that the photos taken by the first camera 11 can have relatively high color reproduction.
  • in addition, by arranging the first transparent filter block W1 and the second transparent filter block W2, the first camera 11 enables the image sensor 20a to obtain a higher amount of incoming light, which improves the detail resolution of pictures taken by the first camera 11 in a dark environment and improves the signal-to-noise ratio of the image.
  • the first filter unit 210 includes S×J filter blocks 211 arranged in an array.
  • the S ⁇ J filter blocks 211 include a plurality of color filter blocks and a plurality of transparent filter blocks.
  • the values of S and J satisfy S ≥ 4 and J ≥ 4, and S and J are even numbers.
  • if S or J were less than 4, the number of transparent filter blocks in the first filter unit 210 would be small, which would affect the amount of light entering the image sensor 20a and would not be conducive to improving the quality of images captured in a dark environment. This application does not limit the upper limit of S and J.
  • the value of S or J can be selected as a value less than or equal to 16, for example, the value of S or J can be 6, 8, 10, 12, 14 or 16. Of course, in other embodiments of the present application, the value of S or J may be a value greater than 16.
  • however, to retain enough transparent filter blocks, the number of color filter blocks cannot be set too large.
  • steps S101 to S106 shown in FIG. 7 are used to improve the restoration of actual colors in images captured by the first camera 11 when the number of color filter blocks 212 in the first filter unit 210 is limited.
  • the above-mentioned user operation may be that the user triggers the camera button 03 in the electronic device 01 .
  • the processor 02 may issue a control instruction to the first camera 11 to trigger the first camera 11 to collect N frames of Raw domain images.
  • N ≥ 2, and N is an integer.
  • the first filter unit 210 in the first camera 11 includes a red filter block R, a green filter block G, a blue filter block B and transparent filter blocks (including the first transparent filter block W1 and the second transparent filter block W2), so the Raw domain image captured by the first camera 11 can be referred to as an RGBW Raw domain image.
  • when a user holds a mobile phone for shooting, the hand usually shakes when the user's finger touches the camera button 03.
  • the hand shaking trajectory can be as shown in FIG. 9A, from position A1 to position A2.
  • FIG. 9B is a target image to be captured by the first camera 11 .
  • FIG. 10A is a first frame of image captured by the first camera 11 .
  • for clarity, the transparent filter blocks in the first filter unit 210, for example the first transparent filter block W1 and the second transparent filter block W2, are not shown.
  • since the number of color filter blocks 212 in each first filter unit 210 is limited, it can be seen from FIG. 10A that the colors of some areas in the first frame image, such as the bodies of the butterflies and the ladybug and the middle and right half of the flower, are not captured, because no color filter block 212 covers those areas.
  • the second frame image can capture the colors of the middle of the flower, the left half of the ladybug's body, etc.
  • FIG. 10C for the third frame of images captured by the first camera 11 , it can be known from the motion estimation of the user's hand shaking shown in FIG. 9A that the first camera 11 will be displaced to the lower left.
  • the third frame image can capture the colors of the right half of the flower, the right half of the ladybug's body, and the body of the butterfly number 1.
  • the fourth frame image can capture the colors of the upper half of the flower, the upper half of the ladybug's body, and the wings of the butterfly number 1.
  • S102 Select one frame of images from the N frames of Raw domain images as the first reference image, and the other images are the first images to be registered.
  • the first frame of the Raw domain image may be selected from the N frames of Raw domain images as the above-mentioned first reference image, and the remaining N-1 frames are the first images to be registered.
  • N-1 frames of the first image to be registered are aligned with the above-mentioned first reference image.
  • one frame of the Raw domain image may be arbitrarily selected from the N frames of Raw domain images as the above-mentioned first reference image, and the remaining N-1 frames are the first images to be registered. This application does not limit this.
  • S103 Calculate the first optical flow value between the first image to be registered and the first reference image in each frame.
  • the optical flow value is used to represent the displacement between the reference image and the image to be registered.
  • specifically, the first frame of Raw domain image, which is the first reference image, can be used as the benchmark, and an optical flow algorithm, such as Lucas-Kanade, can be used to successively calculate the first optical flow values between the first frame of Raw domain image and the second to fourth frames of Raw domain images.
  • the second frame to the fourth frame of the Raw domain image may be deformed respectively.
  • the displacement between the deformed second to fourth frames of Raw domain images and the first frame of Raw domain images may be equal to or approximately zero.
  • in this way, the positions at which the color filter blocks 212 capture color in the first frame of Raw domain image and the positions at which the color filter blocks 212 capture color in the second frame of Raw domain image complement each other, which is equivalent to increasing the number of color filter blocks 212 in the region where the first filter unit 210 is located.
  • for example, the first filter unit 210 may include 16×16 (256 in total) filter blocks 211, each first filter unit 210 may have 8 color filter blocks 212, and the first camera 11 may continuously expose 4 frames.
  • in this case, the value of N can be set within the range of 2 ≤ N ≤ 8.
  • the value of N may be 2, 3, 4, 5, 6, 7 or 8.
  • in this way, the image sensor 20a in the first camera 11 can obtain more color information through the color filter blocks 212.
  • the second frame of Raw domain images can be used to compensate the colors of the middle of the flower and other regions in the first frame of Raw domain images.
  • the colors of the right half of the flower in the first frame of the Raw domain image can be compensated by the third frame of the Raw domain image.
  • the colors of the upper half of the flower and other regions in the first frame of the Raw domain image can be compensated by the fourth frame of the Raw domain image.
  • the compensation for some colors of the remaining areas, such as ladybugs and butterflies can be obtained in the same way, and will not be repeated here.
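  • The compensation can be pictured as superimposing the color samples of the registered frames, as in the sketch below. The per-frame CFA masks (the effective filter positions relative to the scene after warping) and all names are illustrative assumptions.

```python
import numpy as np

def superimpose_color_samples(raw_stack, cfa_stack):
    """raw_stack: H x W x N registered Raw frames; cfa_stack: H x W x N of
    'R'/'G'/'B'/'W' giving each pixel's effective filter in each frame."""
    h, w, n = raw_stack.shape
    out = {}
    for color in "RGB":
        total = np.zeros((h, w))
        count = np.zeros((h, w))
        for i in range(n):
            m = cfa_stack[..., i] == color
            total[m] += raw_stack[..., i][m]
            count[m] += 1
        # Average wherever at least one frame contributed this color;
        # superimposing frames raises the proportion of color-sampled pixels.
        values = np.divide(total, count, out=np.zeros((h, w)), where=count > 0)
        out[color] = (values, count > 0)
    return out   # per channel: (sample map, mask of known positions)
```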
  • the demosaic (Demosaic) module in the processor 02 can perform demosaic processing on the registered 4 frames of single-channel Raw domain images (size H×W×4).
  • the "4" in H×W×4 represents the 4 frames.
  • interpolation estimation can be performed on the Raw domain images based on the neural network model in the processor 02, to calculate, from the known single-channel color information in the pixels of the Raw domain images (for example blue information), the color information of the other two channels (for example red information and green information).
  • in this way, a three-channel RGB domain image (size H×W×3) can be obtained.
  • the "3" in H×W×3 represents the 3 channels.
  • H ≥ 1, W ≥ 1, and H and W are positive integers.
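  • As a stand-in for the neural-network interpolation described above, the following naive sketch fills each missing color channel by linear interpolation from the positions where that channel was sampled. It only illustrates the H×W×4 to H×W×3 reconstruction step, not the disclosed model; `superimpose_color_samples` refers to the earlier illustrative sketch.

```python
import numpy as np
from scipy.interpolate import griddata

def interpolate_channel(values, known):
    """Fill one channel everywhere from its sparsely sampled positions."""
    h, w = known.shape
    ys, xs = np.nonzero(known)
    grid_y, grid_x = np.mgrid[0:h, 0:w]
    return griddata((ys, xs), values[known], (grid_y, grid_x),
                    method="linear", fill_value=0.0)

def naive_demosaic(color_maps):
    """color_maps: {'R'/'G'/'B': (H x W samples, H x W known mask)},
    e.g. the output of the superimpose_color_samples sketch above."""
    return np.stack([interpolate_channel(*color_maps[c]) for c in "RGB"],
                    axis=-1)   # H x W x 3 RGB domain image
```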
  • compared with demosaicing only a single frame of Raw domain image or RGB domain image, the solution of the present application can obtain more colors.
  • the reconstructed RGB domain image is closer to the target image shown in FIG. 9B , so that the image captured by the first camera 11 can improve the degree of restoration of the color of objects in the objective world, and reduce color shift and color difference.
  • moreover, the first filter unit 210 in the image sensor 20a has more transparent filter blocks (including the first transparent filter block W1 and the second transparent filter block W2); therefore, more light can be obtained when shooting each frame of single-channel Raw domain image, so that the reconstructed RGB domain image can take into account the color information while having good detail.
  • the image signal processor (ISP) module in the processor 02 can execute the above S106, and transmit the processed image to the display screen of the electronic device 01 for display, so that the user can see the final captured image.
  • the electronic device 01 only has the first camera 11 .
  • the electronic device may include the above-mentioned first camera 11 and second camera 12 .
  • the first camera 11 and the second camera 12 may both be rear cameras, and both are electrically connected to the above-mentioned processor 02 .
  • the second camera 12 may also include a lens assembly and an image sensor.
  • the structure of the lens assembly is the same as that described above, and will not be repeated here.
  • the image sensor of the second camera 12 is represented by "20b"
  • the image sensor of the first camera 11 is represented by "20a".
  • the image sensor 20b of the second camera may include a second filter.
  • the second filter includes a plurality of second filter units 310 arranged in an array.
  • the second filter unit 310 includes a red filter block R, a blue filter block B, and two green filter blocks G arranged in a 2×2 matrix.
  • the two green filter blocks G are arranged along one diagonal of the 2×2 matrix, and the red filter block R and the blue filter block B are arranged along the other diagonal of the 2×2 matrix.
  • the second filter in the image sensor 20b of the second camera 12 may be a Bayer CFA arranged in RGGB.
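  • The second filter can be written down directly as a tiled 2×2 RGGB unit, for example:

```python
import numpy as np

# Greens on one diagonal, red and blue on the other (cf. FIG. 12B).
bayer_unit = np.array([["R", "G"],
                       ["G", "B"]], dtype=object)
second_filter_patch = np.tile(bayer_unit, (4, 4))   # an 8x8 patch of the CFA
```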
  • the user may set the first camera 11 or the second camera 12 as the main camera in the operation interface of the electronic device 01 .
  • the above-mentioned first camera 11 or second camera 12 can also be set to be turned on or off.
  • here, the main camera is the camera whose captured content during shooting is the content that the user sees in the viewfinder frame on the display screen of the electronic device 01.
  • the processor 02 electrically connected to the first camera 11 can execute the above steps S101 to S106, which will not be repeated here.
  • the image processing process of the processor 02 when the first camera 11 and the second camera 12 are both turned on will be described in detail below.
  • the user selects the dual-camera mode through the interface to turn on the first camera 11 and the second camera 12 at the same time, and the first camera 11 is used as the main camera.
  • the processor 02 electrically connected to the first camera 11 and the second camera 12 can receive user operations.
  • the user operation may be the user triggering the camera button 03 in the electronic device 01 as shown in FIG. 8.
  • after the processor 02 receives the user operation, it can send control instructions to the first camera 11 and the second camera 12.
  • the subsequent image processing method may include S201 to S210 as shown in FIG. 14.
  • The processor 02 may select one frame (for example, the first frame) from the N frames of first Raw-domain images captured by the first camera 11 as a third reference image, with the remaining frames as third images to be registered.
  • The optical flow algorithm above is used to compute the third optical flow value between each third image to be registered and the third reference image.
  • Based on the third optical flow value, each third image to be registered is warped, taking the third reference image as the reference, and registered to it.
  • This completes the registration of the N frames of RGBW first Raw-domain images.
  • When N=1, the processor 02 may demosaic the single RGBW Raw-domain image captured by the first camera 11 to generate a three-channel first RGB-domain image RGB1.
  • The RGB1 image may be as shown in FIG. 15A.
  • When N>1, the processor 02 may instead demosaic the N registered and aligned frames of RGBW first Raw-domain images to generate the first RGB-domain image RGB1.
  • Likewise, the processor 02 may select one frame (for example, the first frame) from the M frames of second Raw-domain images captured by the second camera 12 as a fourth reference image, with the remaining frames as fourth images to be registered.
  • The optical flow algorithm above is used to compute the fourth optical flow value between each fourth image to be registered and the fourth reference image.
  • Based on the fourth optical flow value, each fourth image to be registered is warped, taking the fourth reference image as the reference, and registered to it.
  • This completes the registration of the M frames of RGBW second Raw-domain images.
  • When M=1, the processor 02 can demosaic the single RGBW Raw-domain image captured by the second camera 12 to generate a three-channel second RGB-domain image RGB2, which may be as shown in FIG. 15B.
  • When M>1, the processor 02 may instead demosaic the M registered and aligned frames of RGBW Raw-domain images to generate the second RGB-domain image RGB2.
  • Because of the parallax between the first camera 11 and the second camera 12, the positions of the scene content in the first RGB-domain image RGB1 shown in FIG. 15A, obtained by the first camera 11, deviate from those in the second RGB-domain image RGB2 obtained by the second camera 12.
  • For example, the position of the canvas at the left end of RGB1 deviates from the position of the canvas at the left end of RGB2.
  • Similarly, the teddy bear at the right end of RGB1 deviates from the position of the same bear in RGB2, so the following steps are needed to align RGB1 and RGB2.
  • The processor 02 may use the optical flow algorithm above to compute the second optical flow value between the first RGB-domain image RGB1 and the second RGB-domain image RGB2.
  • The second optical flow value represents the displacement between RGB1 and RGB2.
  • The second optical flow value may be visualized as in FIG. 15C, where darker positions indicate larger optical flow values and larger positional offsets, and lighter positions indicate smaller ones.
  • At lighter positions such as C1, the displacement between RGB1 and RGB2 is small; at darker positions such as C2, the displacement is larger.
  • While executing S207, the processor 02 can also obtain the first optical flow confidence map shown in FIG. 16.
  • In FIG. 16, the optical flow confidence at the darker position C3 is lower than at the lighter position C4.
  • The higher the optical flow confidence, the more faithfully that part of the image was captured.
  • The positional difference between RGB1 and RGB2 arises mainly from the parallax between the first camera 11 and the second camera 12, which leaves regions at the edges of the photographed object uncaptured because of occlusion; the content of those regions is lost, which is why FIG. 16 shows the low-confidence regions concentrated along the edges of the photographed object.
  • The processor 02 may select one of the first RGB-domain image RGB1 and the second RGB-domain image RGB2 as a second reference image, with the other as the second image to be registered.
  • The second image to be registered is warped, taking the second reference image as the reference, and registered to it, so that the contents of RGB1 and RGB2 become substantially the same and the displacement between them is equal or close to zero.
  • After RGB1 and RGB2 are registered, a third RGB-domain image RGB3 can be obtained.
  • The first filter unit 210 includes color filter blocks and transparent filter blocks, so the RGB1 image obtained by the first camera 11 has better detail resolution and night-scene performance.
  • However, the number of color filter blocks in the first filter unit 210 is limited, so color may be lost on small objects and in color-rich areas.
  • The second filter unit 310 has only color filter blocks, so the RGB2 image obtained by the second camera 12 has more faithful and richer colors; but with no transparent filter blocks, its detail rendition and signal-to-noise ratio are lower.
  • The third RGB-domain image RGB3 formed by registering RGB1 and RGB2 can therefore combine the detail resolution and night-scene capability of RGB1 with the rich, faithful color of RGB2.
  • Because the first camera 11 is the main camera, the first RGB-domain image RGB1 obtained by it can serve as the second reference image, with the second RGB-domain image RGB2 obtained by the second camera 12 as the second image to be registered.
  • The finally registered image then matches the image the user expects the main camera to capture.
  • Alternatively, RGB2 may serve as the second reference image and RGB1 as the second image to be registered.
  • Based on the second optical flow value obtained in S207, the processor 02 may use a fusion algorithm, such as Poisson fusion, to fuse the first RGB-domain image RGB1 serving as the second reference image, the third RGB-domain image RGB3, and the first optical flow confidence map, obtaining a fourth RGB-domain image RGB4.
  • Fusing these three inputs not only resolves the information loss and fusion ghosting caused by parallax in dual-camera shooting; the first optical flow confidence map obtained from the optical flow computation also constrains and guides the information-loss regions, so that their color and texture are repaired with emphasis, taking RGB1 as the benchmark.
  • The resulting fourth RGB-domain image RGB4 combines the rich detail of RGB1 with the accurate color of RGB2, and effectively restores the texture and color of real objects at edge positions.
  • The ISP module in the processor 02 may execute S210 above, performing automatic white balance, color correction, and distortion correction on the fourth RGB-domain image RGB4, and then transmit the processed image to the display screen of the electronic device 01, so that the user sees the final captured image.
  • Alternatively, after the user operation described above, the processor 02 can send control instructions to the first camera 11 and the second camera 12 shown in FIG. 17.
  • The subsequent image processing method may include S301 to S307 shown in FIG. 17.
  • S301 is the same as S201 above and is not repeated here.
  • When N>1, the processor 02 may perform the following S302 to register the N frames of RGBW Raw-domain images.
  • When N=1, the processor 02 may skip S302 and proceed directly to S305 below.
  • The processor 02 may select one frame (for example, the first frame) from the N frames of first Raw-domain images as a fifth reference image, with the remaining frames as fifth images to be registered.
  • The optical flow algorithm above is used to compute the sixth optical flow value between each fifth image to be registered and the fifth reference image.
  • Based on the sixth optical flow value, each fifth image to be registered is warped, taking the fifth reference image as the reference, and registered to it.
  • S303 is the same as S204 above and is not repeated here.
  • When M>1, the processor 02 may perform the following S304 to register the M frames of RGBW Raw-domain images.
  • When M=1, the processor 02 may skip S304 and proceed directly to S305 below.
  • The processor 02 may select one frame (for example, the first frame) from the M frames of second Raw-domain images as a sixth reference image, with the remaining frames as sixth images to be registered.
  • The optical flow algorithm above is used to compute the seventh optical flow value between each sixth image to be registered and the sixth reference image.
  • Based on the seventh optical flow value, each sixth image to be registered is warped, taking the sixth reference image as the reference, and registered to it.
  • The processor 02 may use the optical flow algorithm above to compute the fifth optical flow value between the fifth reference image (for example, the first frame) of the N frames of RGBW first Raw-domain images and the sixth reference image (for example, the first frame) of the M frames of RGBW second Raw-domain images, and to obtain a second optical flow confidence map.
  • The displacement between the fifth reference image and the sixth reference image can thus be obtained from the fifth optical flow value.
  • From the second optical flow confidence map, the regions at the edges of the photographed object that go uncaptured because of the parallax between the first camera 11 and the second camera 12 are obtained.
  • Based on the fifth optical flow value, the processor 02 fuses the registered and aligned N frames of first Raw-domain images, the registered and aligned M frames of second Raw-domain images, and the second optical flow confidence map, and performs demosaicing to obtain an RGB-domain image.
  • During fusion, the fifth reference image from the first Raw-domain images captured by the main camera can serve as the benchmark, and the sixth reference image from the second Raw-domain images is warped and registered to the fifth reference image.
  • The N frames of first Raw-domain images are thereby registered with the M frames of second Raw-domain images.
  • The second optical flow confidence map obtained from the optical flow computation also constrains the information-loss regions, so that their color and texture are repaired with emphasis, taking the fifth reference image of the first Raw-domain images as the benchmark.
  • The final RGB-domain image combines the rich detail of the N frames of first Raw-domain images with the accurate color of the M frames of second Raw-domain images, and effectively restores the texture and color of real objects at edge positions.
  • The ISP module in the processor 02 may execute S307 above, performing automatic white balance, color correction, and distortion correction on the RGB-domain image, and then transmit the processed image to the display screen of the electronic device 01, so that the user sees the final captured image.
  • The solution of FIG. 17 differs from that of FIG. 14 in that FIG. 17 merges demosaicing and image fusion into one step: the registered and aligned N frames of first Raw-domain images, the registered and aligned M frames of second Raw-domain images, and the second optical flow confidence map serve jointly as the input of a module that performs both demosaicing and image fusion.
  • Through this module, the detail rendition of the registered N frames of first Raw-domain images is combined with the color accuracy of the registered and aligned M frames of second Raw-domain images, and the occlusion regions caused by the dual cameras are repaired in a targeted way according to the second optical flow confidence map.
  • Because demosaicing is performed only once, the number of demosaicing passes is reduced, which reduces the errors demosaicing introduces.

Abstract

This application provides an image sensor, a camera, an electronic device, and a control method, relating to the field of image processing technologies and intended to mitigate the low signal-to-noise ratio and poor detail resolution of images obtained by color cameras in low-light scenes. The image sensor includes a first filter and a plurality of photoelectric conversion elements. The first filter includes a plurality of first filter units arranged in an array. A first filter unit includes a plurality of filter blocks arranged in an array. The filter blocks include first transparent filter blocks, second transparent filter blocks, and at least two color filter components. Each color filter component includes at least three color filter blocks that respectively pass the three primary colors of light. The first transparent filter blocks are interleaved with the color filter blocks. The second transparent filter blocks are arranged around the region occupied by the first transparent filter blocks and the color filter blocks. In addition, each filter block covers one photoelectric conversion element, which converts the light passing through the filter block into an electrical signal.

Description

Image sensor, camera, electronic device, and control method
This application claims priority to Chinese Patent Application No. 202110106228.9, filed with the China National Intellectual Property Administration on January 26, 2021 and entitled "Image sensor, camera, electronic device, and control method", which is incorporated herein by reference in its entirety.
Technical field
This application relates to the field of image processing technologies, and in particular, to an image sensor, a camera, an electronic device, and a control method.
Background
As phone photography becomes the preferred way for users to take pictures, users are no longer satisfied with simple snapshots; they pursue fine image detail and an artistic rendering of light and shadow. Although the color cameras currently used in phones can faithfully depict the colors of real-world objects, each pixel of a color camera absorbs only light of a specific color from the incident light, so the images such cameras obtain in low-light scenes have a low signal-to-noise ratio and poor detail resolution.
Summary
This application provides an image sensor, a camera, an electronic device, and a control method, to mitigate the low signal-to-noise ratio and poor detail resolution of images obtained by a color camera in low-light scenes.
To achieve the foregoing objective, this application adopts the following technical solutions:
According to one aspect of this application, an image sensor is provided. The image sensor includes a first filter and a plurality of photoelectric conversion elements. The first filter includes a plurality of first filter units arranged in an array. The first filter unit includes a plurality of filter blocks arranged in an array. The plurality of filter blocks include first transparent filter blocks, second transparent filter blocks, and at least two color filter components. Each color filter component includes at least three color filter blocks that respectively pass the three primary colors of light. The first transparent filter blocks are interleaved with the color filter blocks. The second transparent filter blocks are arranged around the region where the first transparent filter blocks and the color filter blocks are located. In addition, each filter block covers one photoelectric conversion element, and the photoelectric conversion element converts the light passing through the filter block into an electrical signal. In summary, the image sensor provided in embodiments of this application has a first filter whose minimum repeating unit, the first filter unit, includes a plurality of color filter blocks for acquiring color information. These color filter blocks can form at least two color filter components, for example, a first filter component and a second filter component. Either of them may include one blue filter block, one red filter block, and two green filter blocks, so that color information of the three primary colors can be acquired. The more color filter components there are, the more color information the image sensor obtains. In addition, the first filter unit also has a plurality of first transparent filter blocks and a plurality of second transparent filter blocks for increasing the light intake of the image sensor. The first transparent filter blocks alternate with the color filter blocks to raise the light intake of the region where the color filter blocks are located, and the second transparent filter blocks are arranged around that region, so the number of transparent filter blocks in the whole first filter, and hence the light intake of the image sensor, can be increased by choosing the number of second transparent filter blocks. In this way, compared with a monochrome camera, a first camera with this image sensor obtains more color information through the color filter blocks, so the photos it takes have higher color fidelity; and compared with a color camera using a Bayer array, the first and second transparent filter blocks give the image sensor a higher light intake, improving the detail resolution of pictures taken in darker environments and reducing image noise.
Optionally, the at least two color filter components include a first filter component and a second filter component. In one first filter unit, the plurality of first transparent filter blocks and the plurality of color filter blocks are arranged in a 4×4 matrix. Either of the first and second filter components includes one red filter block, one blue filter block, and two green filter blocks. Each filter component contains more green filter blocks than red or blue ones, matching the human eye's higher sensitivity to green.
Optionally, along a diagonal of the 4×4 matrix, the red filter block and blue filter block of the first filter component and the red filter block and blue filter block of the second filter component are arranged in sequence. The two green filter blocks of the first filter component and the two green filter blocks of the second filter component are respectively distributed on the two sides of the diagonal of the 4×4 matrix. Each filter component contains more green filter blocks than red or blue ones, matching the human eye's higher sensitivity to green.
Optionally, along a diagonal of the 4×4 matrix, the blue filter block of the first filter component, the blue filter block of the second filter component, the red filter block of the first filter component, and the red filter block of the second filter component are arranged in sequence. The two green filter blocks of the first filter component and the two green filter blocks of the second filter component are respectively distributed on the two sides of the diagonal of the 4×4 matrix. Each filter component contains more green filter blocks than red or blue ones, matching the human eye's higher sensitivity to green.
Optionally, along a diagonal of the 4×4 matrix, the two green filter blocks of the first filter component and the two green filter blocks of the second filter component are arranged in sequence. The blue filter blocks of the first and second filter components and the red filter blocks of the first and second filter components are respectively distributed on the two sides of the diagonal of the 4×4 matrix. Each filter component contains more green filter blocks than red or blue ones, matching the human eye's higher sensitivity to green.
Optionally, the plurality of filter blocks in the first filter unit are arranged in an S×J matrix, where 4<S, 4<J, and S and J are even numbers. If S or J is less than 4, the first filter unit has too few transparent filter blocks, which reduces the light intake of the image sensor and is unfavorable to the quality of images captured in dark environments. This application does not limit the upper bounds of S and J.
According to another aspect of this application, a camera is provided. The camera includes a lens assembly and any one of the image sensors described above, the lens assembly being disposed on the light-incident side of the image sensor. The camera has the same technical effects as the image sensor provided in the foregoing embodiments, and details are not repeated here.
According to another aspect of this application, an electronic device is provided. The electronic device includes a processor and a first camera electrically connected to the processor, the first camera being the camera described above. The electronic device has the same technical effects as the camera provided in the foregoing embodiments, and details are not repeated here.
Optionally, the electronic device further includes a second camera electrically connected to the processor. The second camera includes a second filter. The second filter includes a plurality of second filter units arranged in an array; a second filter unit includes one red filter block, one blue filter block, and two green filter blocks arranged in a 2×2 matrix, the two green filter blocks lying along one diagonal of the 2×2 matrix and the red and blue filter blocks along the other diagonal. In this way, when the first and second cameras work simultaneously, the image finally displayed by the electronic device combines the rich detail captured by the first camera with the accurate color captured by the second camera.
According to another aspect of this application, an image processing method is provided, applied to the processor in any one of the electronic devices above. The method includes: first, receiving a user operation, where the user operation triggers the first camera to capture N frames of original Raw-domain images, N≥2 and N being an integer; next, selecting one of the N frames of Raw-domain images as a first reference image, with the remaining images as first images to be registered; next, computing a first optical flow value between each first image to be registered and the first reference image, and, based on the first optical flow value, warping each first image to be registered, taking the first reference image as the reference, and registering it to the first reference image; and then demosaicing the N frames of Raw-domain images to obtain a three-primary-color RGB-domain image. The image processing method has the same technical effects as the electronic device provided in the foregoing embodiments, and details are not repeated here.
Optionally, selecting one of the N frames of Raw-domain images as the first reference image, with the remaining images as the first images to be registered, includes: selecting the first frame of the N frames as the first reference image, with the remaining N-1 frames as the first images to be registered. Because the image the user sees on the phone screen when triggering the camera button, that is, the first frame captured by the first camera, is the image the user expects to be presented, taking the first frame as the first reference image allows the remaining N-1 frames to be aligned with it during subsequent registration.
Optionally, 2≤N≤8. The larger N is, the larger the proportion of color filter blocks in the region of the first filter unit after the N frames of Raw-domain images are superimposed, and the more color information the image sensor obtains. However, if N is too large, the demosaicing algorithm becomes much more complex and the computational load and error grow sharply. Therefore N may be set within 2≤N≤8.
Optionally, the method further includes performing automatic white balance, color correction, distortion correction, and similar processing on the RGB-domain image, to improve image quality.
According to another aspect of this application, an image processing method is provided, applied to the processor in the electronic device described above. The method includes: first, receiving a user operation, where the user operation triggers the first camera to capture N frames of first Raw-domain images and triggers the second camera to capture M frames of second Raw-domain images, with N≥1, M≥1, N and M being integers; demosaicing the N frames of first Raw-domain images to obtain a first RGB-domain image; demosaicing the M frames of second Raw-domain images to obtain a second RGB-domain image; then computing a second optical flow value between the first RGB-domain image and the second RGB-domain image and obtaining a first optical flow confidence map; selecting one of the first and second RGB-domain images as a second reference image, with the other as a second image to be registered; based on the second optical flow value, warping the second image to be registered, taking the second reference image as the reference, and registering it to the second reference image to obtain a third RGB-domain image; and fusing the second reference image, the third RGB-domain image, and the first optical flow confidence map to obtain a fourth RGB-domain image. As described above, in the image sensor of the first camera the first filter unit includes color filter blocks and transparent filter blocks, so the first RGB-domain image obtained by the first camera has better detail resolution and night-scene performance, but because the number of color filter blocks in the first filter unit is limited, color may be lost on small objects and in color-rich regions. In the image sensor of the second camera, the second filter unit has only color filter blocks, so the second RGB-domain image has more faithful, richer color, but with no transparent filter blocks its detail rendition and signal-to-noise ratio are lower. The third RGB-domain image formed by registering and aligning the first and second RGB-domain images can therefore combine the detail resolution and night-scene capability of the first RGB-domain image with the rich, faithful color of the second RGB-domain image.
Optionally, when N≥2 and M≥2, before the N frames of first Raw-domain images are demosaiced to obtain the first RGB-domain image, the method further includes: selecting one of the N frames of first Raw-domain images as a third reference image, with the remaining images as third images to be registered; computing a third optical flow value between each third image to be registered and the third reference image; and, based on the third optical flow value, warping each third image to be registered, taking the third reference image as the reference, and registering it to the third reference image. In addition, before the M frames of second Raw-domain images are demosaiced to obtain the second RGB-domain image, the method further includes: selecting one of the M frames of second Raw-domain images as a fourth reference image, with the remaining images as fourth images to be registered; computing a fourth optical flow value between each fourth image to be registered and the fourth reference image; and, based on the fourth optical flow value, warping each fourth image to be registered, taking the fourth reference image as the reference, and registering it to the fourth reference image. The technical effects of the registration are as described above and are not repeated here.
Optionally, selecting one of the first and second RGB-domain images as the second reference image, with the other as the second image to be registered, includes: taking the first RGB-domain image as the second reference image and the second RGB-domain image as the second image to be registered. The technical effects of the registration are as described above and are not repeated here.
Optionally, the method further includes performing automatic white balance, color correction, distortion correction, and similar processing on the fourth RGB-domain image, to improve the quality of images captured by the electronic device.
According to another aspect of this application, an image processing method is provided, applied to the processor in the electronic device described above. The method includes: first, receiving a user operation, where the user operation triggers the first camera to capture N frames of first Raw-domain images and triggers the second camera to capture M frames of second Raw-domain images, with N≥1, M≥1, N and M being integers; computing a fifth optical flow value between a fifth reference image of the N frames of first Raw-domain images and a sixth reference image of the M frames of second Raw-domain images, and obtaining a second optical flow confidence map; and, based on the fifth optical flow value, fusing the N frames of first Raw-domain images, the M frames of second Raw-domain images, and the second optical flow confidence map, and performing demosaicing to obtain an RGB-domain image. Demosaicing and image fusion are thus merged into one step: the registered and aligned N frames of first Raw-domain images, the registered and aligned M frames of second Raw-domain images, and the second optical flow confidence map serve jointly as the input of a module that performs both demosaicing and image fusion. Through this module, the detail rendition of the registered first Raw-domain frames is combined with the color accuracy of the registered second Raw-domain frames, and the occlusion regions caused by the dual cameras are repaired in a targeted way according to the second optical flow confidence map. Because the registered first and second Raw-domain frames are demosaiced together, the number of demosaicing passes is smaller than in a solution that demosaics them separately, which reduces the errors introduced by demosaicing.
Optionally, when N≥2 and M≥2, before the fifth optical flow value between the fifth reference image of the N frames of first Raw-domain images and the sixth reference image of the M frames of second Raw-domain images is computed and the second optical flow confidence map is obtained, the method further includes: selecting one of the N frames of first Raw-domain images as the fifth reference image, with the remaining images as fifth images to be registered; computing a sixth optical flow value between each fifth image to be registered and the fifth reference image; and, based on the sixth optical flow value, warping each fifth image to be registered, taking the fifth reference image as the reference, and registering it to the fifth reference image. In addition, one of the M frames of second Raw-domain images is selected as the sixth reference image, with the remaining images as sixth images to be registered; a seventh optical flow value is computed between each sixth image to be registered and the sixth reference image; and, based on the seventh optical flow value, each sixth image to be registered is warped, taking the sixth reference image as the reference, and registered to the sixth reference image. The technical effects of the registration are as described above and are not repeated here.
Optionally, the method further includes performing automatic white balance, color correction, distortion correction, and similar processing on the RGB-domain image, to improve the quality of images captured by the electronic device.
Brief description of drawings
FIG. 1A is a schematic structural diagram of an electronic device according to an embodiment of this application;
FIG. 1B is a schematic structural diagram of another electronic device according to an embodiment of this application;
FIG. 2A is a schematic structural diagram of the first camera in FIG. 1A or FIG. 1B;
FIG. 2B is a schematic structural diagram of a first filter of the image sensor in FIG. 2A;
FIG. 2C is a schematic structural diagram of a first filter unit in FIG. 2B;
FIG. 2D is another schematic structural diagram of the first filter of the image sensor in FIG. 2A;
FIG. 3A is a schematic diagram of a partial structure of the first filter unit in FIG. 2C;
FIG. 3B is another schematic diagram of a partial structure of the first filter unit in FIG. 2C;
FIG. 3C is another schematic diagram of a partial structure of the first filter unit in FIG. 2C;
FIG. 3D is another schematic diagram of a partial structure of the first filter unit in FIG. 2C;
FIG. 3E is another schematic diagram of a partial structure of the first filter unit in FIG. 2C;
FIG. 4 is a schematic structural diagram of the image sensor in FIG. 2A;
FIG. 5A is a schematic diagram of a captured image according to an embodiment of this application;
FIG. 5B is a schematic diagram of another captured image according to an embodiment of this application;
FIG. 6 is a schematic structural diagram of an electronic device according to an embodiment of this application;
FIG. 7 is a flowchart of an image processing method according to an embodiment of this application;
FIG. 8 is a schematic diagram of an interface display of an electronic device according to an embodiment of this application;
FIG. 9A is a schematic diagram of a user's hand-shake trajectory during shooting according to an embodiment of this application;
FIG. 9B is a schematic diagram of a target image to be captured by the first camera according to an embodiment of this application;
FIG. 10A, FIG. 10B, FIG. 10C, and FIG. 10D are schematic diagrams of the first, second, third, and fourth frames captured by the first camera according to an embodiment of this application;
FIG. 11A is a schematic diagram of a superposition effect of multiple frames captured by the first camera according to an embodiment of this application;
FIG. 11B is a schematic diagram of another superposition effect of multiple frames captured by the first camera according to an embodiment of this application;
FIG. 12A is a schematic structural diagram of another electronic device according to an embodiment of this application;
FIG. 12B is a schematic structural diagram of the image sensor of the second camera in FIG. 12A;
FIG. 13 is a schematic diagram of another interface display of an electronic device according to an embodiment of this application;
FIG. 14 is a flowchart of another image processing method according to an embodiment of this application;
FIG. 15A is a schematic diagram of an image captured by the first camera in FIG. 12A;
FIG. 15B is a schematic diagram of an image captured by the second camera in FIG. 12A;
FIG. 15C is a schematic diagram of the optical flow values between the image in FIG. 15A and the image in FIG. 15B;
FIG. 16 is an optical flow confidence map between the image in FIG. 15A and the image in FIG. 15B;
FIG. 17 is a flowchart of another image processing method according to an embodiment of this application.
Reference numerals:
01 - electronic device; 10 - housing; 11 - first camera; 20a - image sensor of the first camera; 30 - lens assembly; 301 - lens; 200 - first filter; 210 - first filter unit; 211 - filter block; 212 - color filter block; 40a - first filter component; 40b - second filter component; 40 - color filter component; 41 - photoelectric conversion element; 02 - processor; 03 - camera button; 12 - second camera; 20b - image sensor of the second camera; 310 - second filter unit.
Detailed description
The following describes the technical solutions in the embodiments of this application with reference to the accompanying drawings. Apparently, the described embodiments are merely some rather than all of the embodiments of this application.
In the following, the terms "first" and "second" are used merely for description and shall not be understood as indicating or implying relative importance or implicitly indicating the number of the technical features referred to. A feature limited by "first" or "second" may therefore explicitly or implicitly include one or more such features.
In addition, in this application, orientation terms such as "upper", "lower", "left", and "right" may be defined relative to the orientation in which the components are schematically placed in the drawings. It should be understood that these directional terms are relative concepts used for description and clarification, and may change accordingly with the orientation of the components in the drawings.
In this application, unless otherwise expressly specified and limited, the term "connection" should be understood broadly; for example, a "connection" may be a fixed connection, a detachable connection, or an integral body, and may be a direct connection or an indirect connection through an intermediate medium. The term "electrical connection" may be a direct electrical connection or an indirect electrical connection through an intermediate medium.
An embodiment of this application provides an electronic device, which includes electronic products with a photographing function such as a mobile phone, a tablet (pad), a television, a smart wearable product (for example, a smart watch or a smart band), a virtual reality (VR) electronic device, and an augmented reality (AR) electronic device. The specific form of the electronic device is not specially limited in the embodiments of this application. For ease of description, the following takes the electronic device 01 being the mobile phone shown in FIG. 1A as an example.
The electronic device 01 may include a first camera 11. In some embodiments of this application, as shown in FIG. 1A, the first camera 11 may serve as a rear camera disposed on the side where the housing 10 of the electronic device 01 is located. Alternatively, in other embodiments, as shown in FIG. 1B, the first camera 11 may serve as a front camera disposed on the image-display side of the electronic device 01.
The first camera 11 may include the image sensor 20a shown in FIG. 2A and a lens assembly 30 disposed on the light-incident side of the image sensor 20a. The lens assembly 30 may include at least two stacked lenses 301 and converges external light (indicated by arrows in the figure) onto the image sensor 20a.
The image sensor 20a of the first camera 11 may include the first filter (color filter array, CFA) 200 shown in FIG. 2B. The first filter 200 may include a plurality of first filter units 210 arranged in an array. A first filter unit 210 may include a plurality of filter blocks 211 arranged in an array, as shown in FIG. 2C. The filter blocks 211 may be arranged in an S×J matrix; FIG. 2C uses S=J=16 as an example.
In some embodiments of this application, the plurality of filter blocks 211 in the first filter unit 210 may include the first transparent filter blocks W1, the second transparent filter blocks W2, and a plurality of color filter blocks 212 shown in FIG. 2C. The color filter blocks 212 and the first transparent filter blocks W1 are interleaved. The second transparent filter blocks W2 are arranged around the region occupied by the first transparent filter blocks W1 and the color filter blocks 212. The first transparent filter blocks W1 and the second transparent filter blocks W2 pass the light coming through the lens 301 in FIG. 2A directly. As shown in FIG. 2D, the first filter unit 210 can serve as the minimum repeating unit of the first filter 200.
It should be noted that the interleaved arrangement of the color filter blocks 212 and the first transparent filter blocks W1 means that, along the horizontal direction X or the vertical direction Y of the matrix formed by the filter blocks 211 in the first filter unit 210, there is one first transparent filter block W1 between two adjacent color filter blocks 212.
In some embodiments of this application, the first transparent filter blocks W1 and the second transparent filter blocks W2 may be made of the same material, in which case their transmittances may be the same. Alternatively, in other embodiments, they may be made of different materials; this is not limited in this application.
In addition, in some embodiments of this application, as shown in FIG. 3A, the first filter unit 210 may include two color filter components: a first filter component 40a and a second filter component 40b. Either of them may include at least three color filter blocks that respectively pass the three primary colors of light, for example red (R), blue (B), and green (G). Because the human eye is more sensitive to green, within one first filter unit 210 the number of green filter blocks G may exceed the number of filter blocks of the other colors. For example, when the plurality of first transparent filter blocks W and the plurality of color filter blocks are arranged in a 4×4 matrix, either color filter component may include one blue filter block B, one red filter block R, and two green filter blocks G.
On this basis, the blue filter block B passes the blue component of the light from the lens 301 in FIG. 2A and filters out the rest; the red filter block R passes the red component and filters out the rest; and the green filter block G passes the green component and filters out the rest. Light from the lens 301 is thus split into the three primary colors (R, G, B) after passing through the color filter components, enabling the electronic device 01 to obtain RGB-domain images.
The following describes the arrangements of the color filter blocks 212 in the first filter component 40a and the second filter component 40b. In some embodiments of this application, as shown in FIG. 3A, along the diagonal (O1-O1) of the 4×4 matrix, the red filter block R and blue filter block B of the first filter component 40a and the red filter block R and blue filter block B of the second filter component 40b are arranged in sequence. The two green filter blocks G of the first filter component 40a and the two green filter blocks G of the second filter component 40b may be respectively distributed on the two sides of the diagonal (O1-O1).
Alternatively, in other embodiments of this application, as shown in FIG. 3B, along the diagonal (O1-O1) of the 4×4 matrix, the blue filter block B of the first filter component 40a, the blue filter block B of the second filter component 40b, the red filter block R of the first filter component 40a, and the red filter block R of the second filter component 40b are arranged in sequence. Or, as shown in FIG. 3C, along the diagonal (O1-O1), the red filter block R of the first filter component 40a, the red filter block R of the second filter component 40b, the blue filter block B of the first filter component 40a, and the blue filter block B of the second filter component 40b are arranged in sequence. In either case, as shown in FIG. 3B or FIG. 3C, the two green filter blocks G of the first filter component 40a and the two green filter blocks G of the second filter component 40b may be respectively distributed on the two sides of the diagonal (O1-O1).
Alternatively, in other embodiments of this application, as shown in FIG. 3D, along the diagonal (O1-O1) of the 4×4 matrix, the two green filter blocks G of the first filter component 40a and the two green filter blocks G of the second filter component 40b are arranged in sequence. The blue filter blocks B of the first filter component 40a and second filter component 40b and the red filter blocks R of the first filter component 40a and second filter component 40b may be respectively distributed on the two sides of the diagonal (O1-O1).
FIG. 3D is described with the red filter blocks R of the two filter components above the diagonal (O1-O1) of the 4×4 matrix and the blue filter blocks B below it. In other embodiments of this application, the red filter blocks R may instead lie below the diagonal and the blue filter blocks B above it.
It should be noted that the above describes example arrangements of the color filter blocks 212 for the case where any first filter unit 210 of the first filter 200 includes the first filter component 40a and the second filter component 40b; it does not limit those arrangements, and the remaining possible arrangements are not enumerated here.
In addition, the above takes the first filter 200 including two color filter components, for example the first filter component 40a and the second filter component 40b, as an example. In other embodiments of this application, as shown in FIG. 3E, the first filter 200 may include three or more color filter components 40. For ease of description, the following assumes that the first filter 200 includes the first filter component 40a and the second filter component 40b, with their color filter blocks 212 arranged as shown in FIG. 3A.
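For illustration only, the arrangement in FIG. 3A can be written down as a small sketch. The Python snippet below builds one 16×16 first filter unit 210 with an interleaved 4×4 color core; the exact row ordering of the core and its centered placement inside the 16×16 unit are assumptions made for this sketch, since the embodiments fix only the interleaving rule and the diagonal arrangement of R and B.

    import numpy as np

    # Assumed FIG. 3A-style 4x4 core: the R and B blocks of the two color
    # filter components lie on the main diagonal, the four G blocks lie on
    # the two sides of that diagonal, and a first transparent block W1 sits
    # between any two horizontally or vertically adjacent color blocks.
    core = np.array([
        ["R",  "W1", "G",  "W1"],
        ["W1", "B",  "W1", "G"],
        ["G",  "W1", "R",  "W1"],
        ["W1", "G",  "W1", "B"],
    ], dtype=object)

    def first_filter_unit(S=16, J=16):
        """One S x J first filter unit 210: second transparent blocks W2
        everywhere except the 4x4 interleaved color core, placed here
        (by assumption) at the center of the unit."""
        unit = np.full((S, J), "W2", dtype=object)
        r0, c0 = (S - 4) // 2, (J - 4) // 2
        unit[r0:r0 + 4, c0:c0 + 4] = core
        return unit

    unit = first_filter_unit()
    colored = sum(v in ("R", "G", "B") for v in unit.ravel())
    print(colored, unit.size, f"{100 * colored / unit.size:.3f}%")  # 8 256 3.125%

The printed ratio reproduces the 3.125% proportion of color filter blocks discussed below for the 16×16 unit.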
On this basis, as shown in FIG. 4, the image sensor 20a further includes a plurality of photoelectric conversion elements 41. In the first filter unit 210, the position of each filter block 211 may correspond to that of one photoelectric conversion element 41, so that each filter block 211 covers one photoelectric conversion element 41. The photoelectric conversion element 41 may be a photodiode that converts the light passing through the filter block 211 into an electrical signal.
For example, the photodiode may be fabricated with a charge-coupled device (CCD) process; it converts the collected optical signal (indicated by arrows in FIG. 4) into an electrical signal, which is then amplified and converted into a digital image signal by an analog-to-digital conversion circuit. Alternatively, the photodiode may be fabricated with a complementary metal-oxide-semiconductor (CMOS) process. A CMOS photodiode has both N-type and P-type semiconductors; the current produced by their complementary effect can be recorded and interpreted by a processing chip and converted into a digital image signal by an analog-to-digital conversion circuit.
As can be seen from the above, the image sensor 20a may include the first filter 200 and a plurality of photoelectric conversion elements 41, the position of each photoelectric conversion element 41 corresponding to one filter block 211 of the first filter 200. When the filter block 211 is a first transparent filter block W1 or a second transparent filter block W2, the light incident on it, for example white (W) light, passes entirely or almost entirely through and reaches the corresponding photoelectric conversion element 41, where it is converted into an electrical signal. When the filter block 211 is a color filter block, for example the blue filter block B, red filter block R, or green filter block G described above, only the component of the incident light, for example W light, whose color matches that color filter block 212 passes through and reaches the corresponding photoelectric conversion element 41, where it is converted into an electrical signal.
In this case, one filter block 211 and the photoelectric conversion element 41 corresponding to it constitute a pixel of the image sensor 20a. The digital signals obtained by analog-to-digital conversion of the electrical signals of all pixels form the raw (Raw-domain) image output by the image sensor 20a. Because a color filter block 212, for example the blue filter block B, passes light of only one color, for example blue light, the data of a pixel formed by a color filter block and its photoelectric conversion element 41 carries only one kind of color information (a single channel), for example blue information, matching the color of that color filter block 212. The Raw-domain image is therefore a single-channel image.
Moreover, because either the first transparent filter block W1 or the second transparent filter block W2 passes all or almost all of the incident light, for example W light, the data of a pixel formed by a transparent filter block and its photoelectric conversion element 41 carries no color information.
On this basis, since the first transparent filter blocks W1 and the second transparent filter blocks W2 pass all or almost all incident light, the more of them the image sensor 20a has, the more light it gathers through them, effectively raising its photoelectric conversion efficiency. When images are captured in dark environments, this improves image quality, for example the signal-to-noise ratio and the detail resolution. For example, when the light intake of the image sensor 20a is low, the captured image is as shown in FIG. 5A, where the line textures in region B appear blurred; when the light intake is raised, the captured image is as shown in FIG. 5B, where the line textures in region B are clearer than in FIG. 5A, the noise is lower, and the detail resolution is stronger.
Furthermore, since each color filter block 212 passes light matching its own color, the more color filter blocks 212 there are, the more color information the image sensor 20a obtains through them, and the more color information there is in the Raw-domain images captured by the first camera 11 having the image sensor 20a. When a Raw-domain image is converted into an RGB-domain image, interpolation must be performed on the Raw-domain image to compute, from the known color information of each pixel (for example, blue information), the other two colors (for example, red and green information). Therefore, the more color filter blocks 212 there are, the more faithfully the first camera 11 with this image sensor 20a reproduces the colors of real-world objects when photographing them, with less color cast and chromatic aberration.
In summary, the image sensor 20a provided in embodiments of this application has the first filter 200, whose minimum repeating unit, the first filter unit 210 structured as shown in FIG. 2C, includes a plurality of color filter blocks 212 for acquiring color information. These color filter blocks 212 can form at least two color filter components as shown in FIG. 3A, for example the first filter component 40a and the second filter component 40b, either of which may include one blue filter block B, one red filter block R, and two green filter blocks G, so that color information of the three primary colors (R, G, B) can be acquired. The more color filter components there are, the more color information the image sensor 20a obtains.
In addition, the first filter unit 210 also has a plurality of first transparent filter blocks W1 and a plurality of second transparent filter blocks W2 for increasing the light intake of the image sensor 20a. The first transparent filter blocks W1 alternate with the color filter blocks 212 to raise the light intake of the region where the color filter blocks 212 are located, and the second transparent filter blocks W2 are arranged around that region, so the number of transparent filter blocks in the whole first filter 200, and hence the light intake of the image sensor 20a, can be increased by choosing the number of second transparent filter blocks W2.
In this way, compared with a monochrome camera, the first camera 11 with the image sensor 20a obtains more color information through the color filter blocks 212, so the photos it takes have higher color fidelity. Compared with a color camera using a Bayer array, the first transparent filter blocks W1 and second transparent filter blocks W2 give the image sensor 20a a higher light intake, improving the detail resolution of pictures taken in darker environments and reducing image noise.
As can be seen from the above, the first filter unit 210, the minimum repeating unit of the first filter 200 in the image sensor 20a, includes S×J filter blocks 211 arranged in an array, comprising a plurality of color filter blocks and a plurality of transparent filter blocks. The description above uses S=J=16 as an example; in embodiments of this application, 4<S and 4<J, with S and J even. If S or J is less than 4, the first filter unit 210 has too few transparent filter blocks, which reduces the light intake of the image sensor 20a and is unfavorable to the quality of images captured in dark environments. This application does not limit the upper bounds of S and J. In some embodiments, if S or J exceeds 16, the first filter unit 210 contains too many transparent filter blocks, which reduces the color information in the images captured by the image sensor 20a; S and J may therefore take values less than or equal to 16, for example 6, 8, 10, 12, 14, or 16. Of course, in other embodiments S or J may take values greater than 16.
In addition, for a given size of the first filter unit 210, the number of color filter blocks cannot be set too high if the light intake of the image sensor 20a is to be ensured. For example, as shown in FIG. 2C, when the first filter unit 210 includes S×J=16×16 (256 in total) filter blocks 211, the 4×4 region of filter blocks 211 at its center contains two blue filter blocks B, two red filter blocks R, and four green filter blocks G, that is, eight color filter blocks 212 in total. In this case the proportion of color filter blocks 212 in the first filter unit 210 is 3.125% (8/256=3.125%). On this basis, when the electronic device 01 includes the processor 02 electrically connected to the first camera 11 as shown in FIG. 6, the processor 02 can execute steps S101 to S106 shown in FIG. 7 while shooting with the first camera 11, to improve the color fidelity of the captured images even though the number of color filter blocks 212 in the first filter unit 210 is limited.
S101. Receive a user operation.
In some embodiments of this application, taking the electronic device 01 being the mobile phone shown in FIG. 8 as an example, the user operation may be the user tapping the camera button 03 of the electronic device 01. After receiving the user operation, the processor 02 can send a control instruction to the first camera 11 to trigger it to capture N frames of Raw-domain images, where N≥2 and N is an integer. For ease of description, N=4 is used below. Because the first filter unit 210 of the first camera 11 includes red filter blocks R, green filter blocks G, blue filter blocks B, and transparent filter blocks (the first transparent filter blocks W1 and the second transparent filter blocks W2), the Raw-domain images captured by the first camera 11 may be called RGBW Raw-domain images.
When the user shoots holding the phone, the hand usually shakes while the finger touches the camera button 03; for example, the hand-shake trajectory may run from position A1 to position A2 as shown in FIG. 9A. In this case there are slight positional offsets between the N (for example, N=4) frames exposed consecutively by the first camera 11.
FIG. 9B shows the target image to be captured by the first camera 11, and FIG. 10A shows the first frame captured by the first camera 11. For ease of description, in the four consecutively captured frames only the color filter block 212 portion of the first filter unit 210 is drawn; the transparent filter blocks (for example, the first transparent filter blocks W1 and the second transparent filter blocks W2) are not shown. As noted above, the number of color filter blocks 212 in each first filter unit 210 is limited, so it can be seen from FIG. 10A that the colors of some regions of the first frame, for example butterfly ①, the ladybird's body, and the middle and right half of the flower, are not captured, since no color filter block 212 covers them.
FIG. 10B shows the second frame captured by the first camera 11. From the motion estimate of the user's hand shake in FIG. 9A, the first camera 11 shifts toward the lower right, so it can be seen from FIG. 10B that the second frame captures the colors of regions such as the middle of the flower and the left half of the ladybird's body. FIG. 10C shows the third frame; the camera shifts toward the lower left, so the third frame captures the colors of regions such as the right half of the flower, the right half of the ladybird's body, and the body of butterfly ①. FIG. 10D shows the fourth frame; the camera shifts toward the upper right, so the fourth frame captures the colors of regions such as the upper half of the flower, the upper half of the ladybird's body, and the wings of butterfly ①.
S102. Select one of the N frames of Raw-domain images as a first reference image, with the remaining images as first images to be registered.
As noted above, hand shake during shooting introduces positional deviations between the N captured frames, so the N frames of Raw-domain images need to be aligned. Moreover, the image the user sees on the phone screen when triggering the camera button 03, that is, the first frame captured by the first camera 11, is the image the user expects to be presented. In some embodiments of this application, the first frame of the N frames of Raw-domain images may therefore be selected as the first reference image, with the remaining N-1 frames as the first images to be registered, and the following steps align the N-1 first images to be registered with the first reference image. Of course, in other embodiments any one of the N frames may be selected as the first reference image, with the remaining N-1 frames as the first images to be registered; this is not limited in this application.
S103. Compute a first optical flow value between each first image to be registered and the first reference image.
The optical flow value represents the displacement between the reference image and an image to be registered. Taking the first Raw-domain frame, which serves as the first reference image, as the basis, an optical flow algorithm such as Lucas-Kanade may be used to compute in turn the first optical flow values between the first frame and the second to fourth Raw-domain frames.
S104. Based on the first optical flow value, warp each first image to be registered, taking the first reference image as the reference, and register it to the first reference image.
In S104, the second to fourth Raw-domain frames may each be warped according to their first optical flow values relative to the first frame, so that the contents of the second to fourth frames become substantially the same as the first frame, achieving registration and alignment. The displacement between the warped second to fourth frames and the first frame may be equal or close to zero, which reduces the texture distortion and ghosting caused by the user's hand shaking during shooting.
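As a minimal sketch of S103 and S104 (not the embodiment's actual pipeline): the snippet below assumes OpenCV is available and runs dense Farneback optical flow directly on the single-channel Raw frames, whereas the text names Lucas-Kanade as one example algorithm and a real pipeline would need CFA-aware preprocessing.

    import cv2
    import numpy as np

    def register_to_first(frames):
        """frames: list of N single-channel uint8 Raw frames (H x W).
        Returns the frames warped onto the first (reference) frame."""
        ref = frames[0]
        h, w = ref.shape
        gx, gy = np.meshgrid(np.arange(w), np.arange(h))
        out = [ref]
        for img in frames[1:]:
            # S103: dense flow from the reference frame to this frame
            flow = cv2.calcOpticalFlowFarneback(
                ref, img, None, 0.5, 3, 21, 3, 5, 1.2, 0)
            # S104: warp this frame back onto the reference by sampling
            # it at the flow-displaced coordinates
            mx = (gx + flow[..., 0]).astype(np.float32)
            my = (gy + flow[..., 1]).astype(np.float32)
            out.append(cv2.remap(img, mx, my, cv2.INTER_LINEAR))
        return out

After this call, the displacement between each warped frame and the first frame is close to zero, which is exactly the condition described in the paragraph above.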
In addition, as shown in FIG. 11A, the positions at which the color filter blocks 212 capture color in the first Raw-domain frame ①, the second frame ②, the third frame ③, and the fourth frame ④ complement one another, so the number of color samples in the region of the first filter unit 210 increases.
For example, when the first filter unit 210 includes 16×16 (256 in total) filter blocks 211 with eight color filter blocks 212 each, and the first camera 11 exposes four Raw-domain frames consecutively, the proportion of color samples in the region of the first filter unit 210 after the four frames are registered can grow from 3.125% for a single frame (8/256=3.125%) to N×3.125%=4×3.125%. The larger N is, the larger the proportion of color samples in the region of the first filter unit 210 after the N frames are superimposed, and the more color information the image sensor 20a obtains. However, if N is too large, the algorithm in S105 below becomes much more complex and the computational load and error grow sharply, so N may be set within 2≤N≤8; for example, N may be 2, 3, 4, 5, 6, 7, or 8.
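As a quick check of this arithmetic (under the idealized assumption that the shifted frames sample disjoint positions, which hand shake does not guarantee):

    # 8 color blocks out of 16 x 16 = 256 filter blocks per frame
    for N in range(1, 9):
        print(N, f"{N * 8 / 256:.3%}")  # N=1 -> 3.125%, N=4 -> 12.500%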
In this case, the image sensor 20a of the first camera 11 can obtain more color information through the color filter blocks 212. For example, as shown in FIG. 11B, once the four Raw-domain frames are registered, the second frame can compensate the colors of regions of the first frame such as the middle of the flower; the third frame can compensate regions such as the right half of the flower; and the fourth frame can compensate regions such as the upper half of the flower. The compensation of the remaining regions of the first frame, such as the ladybird and the butterflies, follows in the same way and is not repeated here.
S105. Demosaic the N frames of Raw-domain images to obtain a three-primary-color RGB-domain image.
Once the four Raw-domain frames are registered, the demosaic module in the processor 02 can demosaic the four registered single-channel Raw-domain frames (size H×W×4, where "4" denotes the four frames). For example, interpolation can be performed on the Raw-domain images based on a neural network model in the processor 02, computing the other two color channels (for example, red and green information) from the known single-channel color information of each pixel (for example, blue information). A three-channel RGB-domain image (size H×W×3, where "3" denotes the three channels) is thus obtained, with H≥1, W≥1, H and W being positive integers.
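The neural-network demosaicing model itself is not specified in the text. As a stand-in illustration, the sketch below reconstructs the three channels from the registered stack by normalized (mask-aware) Gaussian interpolation of the sparse color samples; the per-frame sample masks, the smoothing sigma, and SciPy as a dependency are all assumptions of this sketch.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def joint_demosaic(raws, masks, sigma=2.0):
        """raws: N registered single-channel Raw frames (H x W, float).
        masks: N dicts mapping 'R'/'G'/'B' to boolean H x W arrays that
        mark where each registered frame sampled that color.
        Returns an H x W x 3 RGB estimate."""
        H, W = raws[0].shape
        rgb = np.zeros((H, W, 3))
        for c, ch in enumerate("RGB"):
            acc = np.zeros((H, W))
            cnt = np.zeros((H, W))
            for raw, m in zip(raws, masks):
                acc += np.where(m[ch], raw, 0.0)
                cnt += m[ch]
            # normalized convolution: smooth the samples and the sample
            # density separately, then divide to fill unsampled pixels
            num = gaussian_filter(acc, sigma)
            den = np.maximum(gaussian_filter(cnt, sigma), 1e-6)
            rgb[..., c] = num / den
        return rgb

Because the four frames contribute complementary masks, the denominator is nonzero at many more pixels than for a single frame, which is the point made in the next paragraph.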
As can be seen from the above, the colors obtained by the four single-channel Raw-domain frames complement one another, so compared with demosaicing only a single frame, the solution of this application obtains more color detail, and the reconstructed RGB-domain image is closer to the target image shown in FIG. 9B. This improves the fidelity with which the first camera 11 reproduces the colors of real-world objects and reduces color cast and chromatic aberration. Moreover, since the first filter unit 210 of the image sensor 20a has many transparent filter blocks (the first transparent filter blocks W1 and the second transparent filter blocks W2), ample light is gathered when each single-channel Raw-domain frame is captured, so the reconstructed RGB-domain image preserves color information while retaining good detail.
S106. Perform automatic white balance, color correction, distortion correction, and similar processing on the RGB-domain image.
Specifically, the image signal processor (ISP) module in the processor 02 can execute S106 and transmit the processed image to the display screen of the electronic device 01 for display, so that the user sees the final captured image.
The description above takes the electronic device 01 having only the first camera 11 as an example. In other embodiments of this application, as shown in FIG. 12A, the electronic device may include the first camera 11 and a second camera 12. The first camera 11 and the second camera 12 may both be rear cameras, and both are electrically connected to the processor 02. The second camera 12 likewise includes a lens assembly and an image sensor; the structure of the lens assembly is as described above and is not repeated here. For ease of description, the image sensor of the second camera 12 is denoted "20b" and the image sensor of the first camera 11 is denoted "20a".
As shown in FIG. 12B, the image sensor 20b of the second camera may include a second filter. The second filter includes a plurality of second filter units 310 arranged in an array. A second filter unit 310 includes one red filter block R, one blue filter block B, and two green filter blocks G arranged in a 2×2 matrix, the two green filter blocks G lying along one diagonal of the 2×2 matrix and the red filter block R and blue filter block B along the other diagonal. The second filter in the image sensor 20b of the second camera 12 may thus be a Bayer CFA with an RGGB arrangement.
During shooting, as shown in FIG. 13, the user can set the first camera 11 or the second camera 12 as the main camera in the operation interface of the electronic device 01, and can also turn either camera on or off there. The main camera is the camera whose framed scene during shooting is what the user sees in the viewfinder frame on the display screen of the electronic device 01. When the user turns off the second camera 12 and shoots with the first camera 11, the processor 02 electrically connected to the first camera 11 can execute steps S101 to S106 above, which are not repeated here. The image processing performed by the processor 02 when both the first camera 11 and the second camera 12 are turned on is described in detail below with examples.
For example, in some embodiments of this application, the user selects the dual-camera mode through the interface, turning on the first camera 11 and the second camera 12 simultaneously with the first camera 11 as the main camera. The processor 02, electrically connected to the first camera 11 and the second camera 12, can then receive a user operation, for example the user tapping the camera button 03 of the electronic device 01 shown in FIG. 8. After receiving the user operation, the processor 02 can send control instructions to the first camera 11 and the second camera 12 shown in FIG. 14. The subsequent image processing method may include S201 to S210 shown in FIG. 14.
S201. Capture N frames of RGBW Raw-domain images.
After receiving the control instruction sent by the processor 02, the first camera 11 can capture N frames of RGBW Raw-domain images, where N≥1. In some embodiments of this application, when N>1, the processor 02 can perform the following S202 to register the N frames of RGBW Raw-domain images. Alternatively, in other embodiments, when N=1, the processor 02 can skip S202 and directly perform the following S203.
S202. Optical flow computation and image registration.
Specifically, in S202 the processor 02 can select one frame (for example, the first frame) from the N frames of first Raw-domain images captured by the first camera 11 as a third reference image, with the remaining images as third images to be registered. Next, the optical flow algorithm above is used to compute the third optical flow value between each third image to be registered and the third reference image. Then, based on the third optical flow value, each third image to be registered is warped, taking the third reference image as the reference, and registered to it. This completes the registration of the N frames of RGBW first Raw-domain images.
S203. Demosaicing.
For example, when N=1, the processor 02 can demosaic the single RGBW Raw-domain image captured by the first camera 11 to generate a three-channel first RGB-domain image RGB1, which may be as shown in FIG. 15A. Alternatively, when N>1, the processor 02 can demosaic the N registered and aligned frames of RGBW first Raw-domain images to generate the first RGB-domain image RGB1.
S204. Capture M frames of RGBW Raw-domain images.
After receiving the control instruction sent by the processor 02, the second camera 12 can capture M frames of RGBW Raw-domain images, where M≥1. In some embodiments of this application, when M>1, the processor 02 can perform the following S205 to register the M frames of RGBW Raw-domain images. Alternatively, in other embodiments, when M=1, the processor 02 can skip S205 and directly perform the following S206.
S205. Optical flow computation and image registration.
Specifically, in S205 the processor 02 can select one frame (for example, the first frame) from the M frames of second Raw-domain images captured by the second camera 12 as a fourth reference image, with the remaining images as fourth images to be registered. Next, the optical flow algorithm above is used to compute the fourth optical flow value between each fourth image to be registered and the fourth reference image. Then, based on the fourth optical flow value, each fourth image to be registered is warped, taking the fourth reference image as the reference, and registered to it. This completes the registration of the M frames of RGBW second Raw-domain images.
S206. Demosaicing.
For example, when M=1, the processor 02 can demosaic the single RGBW Raw-domain image captured by the second camera 12 to generate a three-channel second RGB-domain image RGB2, which may be as shown in FIG. 15B. Alternatively, when M>1, the processor 02 can demosaic the M registered and aligned frames of RGBW Raw-domain images to generate the second RGB-domain image RGB2.
Because of the parallax between the first camera 11 and the second camera 12, a certain occlusion relationship exists between the two cameras during shooting, so the positions of the content in the first RGB-domain image RGB1 obtained by the first camera 11, shown in FIG. 15A, deviate from those in the second RGB-domain image RGB2 obtained by the second camera 12. For example, the position of the canvas at the left end of RGB1 deviates from that of the canvas at the left end of RGB2, and the teddy bear at the right end of RGB1 deviates from the position of the same bear in RGB2. The following steps are therefore needed to align RGB1 and RGB2.
S207. Optical flow computation.
Specifically, in S207 the processor 02 can use the optical flow algorithm above to compute the second optical flow value between the first RGB-domain image RGB1 and the second RGB-domain image RGB2. The second optical flow value represents the displacement between RGB1 and RGB2 and may be visualized as in FIG. 15C, where the darker a position is, the larger the optical flow value and the positional offset, and the lighter a position is, the smaller they are. For example, in the picture shown in FIG. 15C, at the lighter position C1 the displacement between RGB1 and RGB2 is small; conversely, at the darker position C2 it is larger.
In addition, while executing S207 the processor 02 can also obtain the first optical flow confidence map shown in FIG. 16, where the optical flow confidence at the darker position C3 is lower than at the lighter position C4. The higher the optical flow confidence, the more faithfully that part of the image was captured. The positional difference between RGB1 and RGB2 arises mainly from the parallax between the first camera 11 and the second camera 12, which leaves regions at the edges of the photographed object uncaptured because of occlusion; the content of those regions is lost, which is why FIG. 16 shows the low-confidence regions concentrated along the edges of the photographed object.
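The text does not state how the first optical flow confidence map is computed. A common proxy, used here purely as an assumed illustration, is forward-backward consistency: where the flow RGB1→RGB2 and the flow RGB2→RGB1 do not cancel, typically in the occluded edge regions just described, confidence is low.

    import cv2
    import numpy as np

    def flow_confidence(gray1, gray2, tau=1.0):
        """gray1, gray2: uint8 H x W images (e.g. luma of RGB1 and RGB2).
        Returns a confidence map in (0, 1]."""
        args = (None, 0.5, 3, 21, 3, 5, 1.2, 0)
        fwd = cv2.calcOpticalFlowFarneback(gray1, gray2, *args)
        bwd = cv2.calcOpticalFlowFarneback(gray2, gray1, *args)
        h, w = gray1.shape
        gx, gy = np.meshgrid(np.arange(w), np.arange(h))
        mx = (gx + fwd[..., 0]).astype(np.float32)
        my = (gy + fwd[..., 1]).astype(np.float32)
        # backward flow sampled at the forward-displaced positions;
        # for consistent pixels, fwd + bwd(x + fwd) is close to zero
        bwd_at_fwd = cv2.remap(bwd, mx, my, cv2.INTER_LINEAR)
        err = np.linalg.norm(fwd + bwd_at_fwd, axis=-1)
        return np.exp(-err / tau)  # low where occlusion breaks consistency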
S208. Image registration.
Specifically, the processor 02 can select one of the first RGB-domain image RGB1 and the second RGB-domain image RGB2 as a second reference image, with the other as the second image to be registered. Next, based on the second optical flow value obtained in S207, the second image to be registered is warped, taking the second reference image as the reference, and registered to it, so that the contents of RGB1 and RGB2 become substantially the same and the displacement between them is equal or close to zero. After RGB1 and RGB2 are registered, a third RGB-domain image RGB3 can be obtained.
As noted above, in the image sensor 20a of the first camera 11 the first filter unit 210 includes color filter blocks and transparent filter blocks, so the RGB1 image obtained by the first camera 11 has better detail resolution and night-scene performance; but the limited number of color filter blocks in the first filter unit 210 means color may be lost on small objects and in color-rich regions. In the image sensor 20b of the second camera 12, the second filter unit 310 has only color filter blocks, so the RGB2 image obtained by the second camera 12 has more faithful, richer color; but with no transparent filter blocks, its detail rendition and signal-to-noise ratio are lower. The third RGB-domain image RGB3 formed by registering and aligning RGB1 and RGB2 can therefore combine the detail resolution and night-scene capability of RGB1 with the rich, faithful color of RGB2.
In some embodiments of this application, because the first camera 11 is the main camera, the first RGB-domain image RGB1 obtained by it can serve as the second reference image, with the second RGB-domain image RGB2 obtained by the second camera 12 as the second image to be registered, so that the finally registered image matches the image the user expects the main camera to capture. Of course, in other embodiments RGB2 may serve as the second reference image and RGB1 as the second image to be registered.
S209. Image fusion.
Specifically, based on the second optical flow value obtained in S207, the processor 02 can use a fusion algorithm, for example Poisson fusion, to fuse the first RGB-domain image RGB1 serving as the second reference image, the third RGB-domain image RGB3, and the first optical flow confidence map, obtaining a fourth RGB-domain image RGB4.
Fusing RGB1, RGB3, and the first optical flow confidence map not only resolves the information loss and fusion ghosting caused by parallax in dual-camera shooting; the first optical flow confidence map obtained from the optical flow computation also constrains and guides the information-loss regions, so that their color and texture are repaired with emphasis, taking RGB1 as the benchmark. The resulting fourth RGB-domain image RGB4 combines the rich detail of RGB1 with the accurate color of RGB2, and effectively restores the texture and color of real objects at edge positions.
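Poisson fusion solves a gradient-domain system and is beyond a short sketch; the snippet below substitutes a simple confidence-weighted blend, named as such, only to show how RGB1, RGB3, and the first optical flow confidence map interact: where confidence is low (the occluded edges), the result falls back to the reference RGB1, and elsewhere it takes the registered RGB3.

    import numpy as np

    def confidence_guided_blend(rgb1, rgb3, conf):
        """rgb1: reference image (H x W x 3); rgb3: RGB2 registered onto
        RGB1; conf: H x W confidence in [0, 1] from the optical flow step.
        A crude stand-in for the Poisson fusion named in the text."""
        w = conf[..., None]  # broadcast the weight over the 3 channels
        return w * rgb3 + (1.0 - w) * rgb1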
S210. Image signal processing.
Specifically, the ISP module in the processor 02 can execute S210, performing automatic white balance, color correction, distortion correction, and similar processing on the fourth RGB-domain image RGB4. The processed image can then be transmitted to the display screen of the electronic device 01 for display, so that the user sees the final captured image.
Alternatively, for example, in some embodiments of this application, with the dual-camera mode selected through the interface so that the first camera 11 and the second camera 12 are both on, when the user taps the camera button 03 of the electronic device 01 shown in FIG. 8, the processor 02 can, following the user operation, send control instructions to the first camera 11 and the second camera 12 shown in FIG. 17. The subsequent image processing method may include S301 to S307 shown in FIG. 17.
S301. Capture N frames of RGBW Raw-domain images.
S301 is the same as S201 above and is not repeated here. Likewise, in some embodiments of this application, when N>1, the processor 02 can perform the following S302 to register the N frames of RGBW Raw-domain images. Alternatively, in other embodiments, when N=1, the processor 02 can skip S302 and directly perform the following S305.
S302. Optical flow computation and image registration.
Specifically, in S302 the processor 02 can select one frame (for example, the first frame) from the N frames of first Raw-domain images as a fifth reference image, with the remaining images as fifth images to be registered. Next, the optical flow algorithm above is used to compute the sixth optical flow value between each fifth image to be registered and the fifth reference image. Then, based on the sixth optical flow value, each fifth image to be registered is warped, taking the fifth reference image as the reference, and registered to it. This completes the registration of the N frames of RGBW first Raw-domain images.
S303. Capture M frames of RGBW Raw-domain images.
S303 is the same as S204 above and is not repeated here. Likewise, in some embodiments of this application, when M>1, the processor 02 can perform the following S304 to register the M frames of RGBW Raw-domain images. Alternatively, in other embodiments, when M=1, the processor 02 can skip S304 and directly perform the following S305.
S304. Optical flow computation and image registration.
Specifically, in S304 the processor 02 can select one frame (for example, the first frame) from the M frames of second Raw-domain images as a sixth reference image, with the remaining images as sixth images to be registered. Next, the optical flow algorithm above is used to compute the seventh optical flow value between each sixth image to be registered and the sixth reference image. Then, based on the seventh optical flow value, each sixth image to be registered is warped, taking the sixth reference image as the reference, and registered to it. This completes the registration of the M frames of RGBW second Raw-domain images.
S305. Optical flow computation.
Specifically, in S305 the processor 02 can use the optical flow algorithm above to compute the fifth optical flow value between the fifth reference image (for example, the first frame) of the N frames of RGBW first Raw-domain images and the sixth reference image (for example, the first frame) of the M frames of RGBW second Raw-domain images, and obtain a second optical flow confidence map. The displacement between the fifth reference image and the sixth reference image can thus be obtained from the fifth optical flow value, and the regions at the edges of the photographed object left uncaptured by the parallax between the first camera 11 and the second camera 12 are obtained from the second optical flow confidence map.
S306. Demosaicing and image fusion.
Specifically, in S306 the processor 02 fuses, based on the fifth optical flow value, the registered and aligned N frames of first Raw-domain images, the registered and aligned M frames of second Raw-domain images, and the second optical flow confidence map, and performs demosaicing to obtain an RGB-domain image.
During fusion, the fifth reference image from the first Raw-domain images captured by the main camera can serve as the benchmark, and the sixth reference image from the second Raw-domain images is warped and registered to the fifth reference image, so that the N frames of first Raw-domain images are registered with the M frames of second Raw-domain images. Moreover, fusing the registered and aligned N frames of first Raw-domain images, the registered and aligned M frames of second Raw-domain images, and the second optical flow confidence map not only resolves the information loss and fusion ghosting caused by parallax in dual-camera shooting; the second optical flow confidence map obtained from the optical flow computation also constrains the information-loss regions, so that their color and texture are repaired with emphasis, taking the fifth reference image of the first Raw-domain images as the benchmark. The final RGB-domain image combines the rich detail of the N frames of first Raw-domain images with the accurate color of the M frames of second Raw-domain images, and effectively restores the texture and color of real objects at edge positions.
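The joint demosaicing-and-fusion module is described only through its inputs and output. The sketch below merely assembles those inputs into a single H×W×(N+M+1) array, a natural interface for a learned module of this kind; the stacking order and the module itself are assumptions, not part of the embodiment.

    import numpy as np

    def assemble_joint_input(first_raws, second_raws, conf2):
        """first_raws: N registered first Raw-domain frames (each H x W);
        second_raws: M second Raw-domain frames registered to the fifth
        reference image; conf2: the second optical flow confidence map.
        Returns an H x W x (N + M + 1) stack for the joint module."""
        planes = list(first_raws) + list(second_raws) + [conf2]
        return np.stack(planes, axis=-1)

    # e.g. x = assemble_joint_input(reg_n, reg_m, conf2); rgb = module(x),
    # where `module` is the (unspecified) demosaic-and-fusion model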
S307. Image signal processing.
Specifically, the ISP module in the processor 02 can execute S307, performing automatic white balance, color correction, distortion correction, and similar processing on the RGB-domain image. The processed image can then be transmitted to the display screen of the electronic device 01 for display, so that the user sees the final captured image.
As can be seen from the above, the solution of FIG. 17 differs from that of FIG. 14 in that FIG. 17 merges demosaicing and image fusion into one step: the registered and aligned N frames of first Raw-domain images, the registered and aligned M frames of second Raw-domain images, and the second optical flow confidence map serve jointly as the input of a module that performs both demosaicing and image fusion. Through this module, the detail rendition of the registered N frames of first Raw-domain images is combined with the color accuracy of the registered and aligned M frames of second Raw-domain images, and the occlusion regions caused by the dual cameras are repaired in a targeted way according to the second optical flow confidence map. Because the registered first and second Raw-domain frames are demosaiced together, the number of demosaicing passes is smaller than when they are demosaiced separately, which reduces the errors introduced by demosaicing.
The foregoing descriptions are merely specific implementations of this application, but the protection scope of this application is not limited thereto. Any variation or replacement readily figured out by a person skilled in the art within the technical scope disclosed in this application shall fall within the protection scope of this application. Therefore, the protection scope of this application shall be subject to the protection scope of the claims.

Claims (20)

  1. An image sensor, comprising:
    a first filter, comprising a plurality of first filter units arranged in an array, wherein the first filter unit comprises a plurality of filter blocks arranged in an array; the plurality of filter blocks comprise first transparent filter blocks, second transparent filter blocks, and at least two color filter components; each color filter component comprises at least three color filter blocks respectively configured to pass the three primary colors of light; the first transparent filter blocks are interleaved with the color filter blocks; and the second transparent filter blocks are disposed around the region where the first transparent filter blocks and the color filter blocks are located; and
    a plurality of photoelectric conversion elements, wherein each filter block covers one photoelectric conversion element, and the photoelectric conversion element is configured to convert light passing through the filter block into an electrical signal.
  2. The image sensor according to claim 1, wherein the at least two color filter components comprise a first filter component and a second filter component;
    in one first filter unit, the plurality of first transparent filter blocks and the plurality of color filter blocks are arranged in a 4×4 matrix; and either of the first filter component and the second filter component comprises one red filter block, one blue filter block, and two green filter blocks.
  3. The image sensor according to claim 2, wherein
    along a diagonal of the 4×4 matrix, the red filter block and the blue filter block of the first filter component and the red filter block and the blue filter block of the second filter component are arranged in sequence; and
    the two green filter blocks of the first filter component and the two green filter blocks of the second filter component are respectively distributed on the two sides of the diagonal of the 4×4 matrix.
  4. The image sensor according to claim 2, wherein
    along a diagonal of the 4×4 matrix, the blue filter block of the first filter component, the blue filter block of the second filter component, the red filter block of the first filter component, and the red filter block of the second filter component are arranged in sequence; and
    the two green filter blocks of the first filter component and the two green filter blocks of the second filter component are respectively distributed on the two sides of the diagonal of the 4×4 matrix.
  5. The image sensor according to claim 2, wherein
    along a diagonal of the 4×4 matrix, the two green filter blocks of the first filter component and the two green filter blocks of the second filter component are arranged in sequence; and
    the blue filter block of the first filter component and the blue filter block of the second filter component, and the red filter block of the first filter component and the red filter block of the second filter component, are respectively distributed on the two sides of the diagonal of the 4×4 matrix.
  6. The image sensor according to any one of claims 1 to 5, wherein the plurality of filter blocks in the first filter unit are arranged in an S×J matrix, wherein 4<S, 4<J, and S and J are even numbers.
  7. A camera, comprising a lens assembly and the image sensor according to any one of claims 1 to 6, wherein the lens assembly is disposed on a light-incident side of the image sensor.
  8. An electronic device, comprising a processor and a first camera electrically connected to the processor, wherein the first camera is the camera according to claim 7.
  9. The electronic device according to claim 8, wherein the electronic device further comprises a second camera electrically connected to the processor;
    the second camera comprises a second filter; the second filter comprises a plurality of second filter units arranged in an array, and the second filter unit comprises one red filter block, one blue filter block, and two green filter blocks arranged in a 2×2 matrix;
    wherein the two green filter blocks are arranged along one diagonal of the 2×2 matrix, and the red filter block and the blue filter block are arranged along the other diagonal of the 2×2 matrix.
  10. An image processing method, applied to the processor in the electronic device according to claim 8 or 9, the method comprising:
    receiving a user operation, wherein the user operation triggers the first camera to capture N frames of original Raw-domain images, N≥2, N being an integer;
    selecting one of the N frames of Raw-domain images as a first reference image, with the remaining images as first images to be registered;
    computing a first optical flow value between each first image to be registered and the first reference image;
    based on the first optical flow value, warping each first image to be registered, taking the first reference image as the reference, and registering it to the first reference image; and
    demosaicing the N frames of Raw-domain images to obtain a three-primary-color RGB-domain image.
  11. The image processing method according to claim 10, wherein
    selecting one of the N frames of Raw-domain images as the first reference image, with the remaining images as the first images to be registered, comprises: selecting the first frame of the N frames of Raw-domain images as the first reference image, with the remaining N-1 frames as the first images to be registered.
  12. The image processing method according to claim 10 or 11, wherein 2≤N≤8.
  13. The image processing method according to any one of claims 10 to 12, wherein the method further comprises: performing automatic white balance processing, color correction processing, and distortion correction processing on the RGB-domain image.
  14. An image processing method, applied to the processor in the electronic device according to claim 9, the method comprising:
    receiving a user operation, wherein the user operation triggers the first camera to capture N frames of first Raw-domain images and triggers the second camera to capture M frames of second Raw-domain images, N≥1, M≥1, N and M being integers;
    demosaicing the N frames of first Raw-domain images to obtain a first RGB-domain image;
    demosaicing the M frames of second Raw-domain images to obtain a second RGB-domain image;
    computing a second optical flow value between the first RGB-domain image and the second RGB-domain image, and obtaining a first optical flow confidence map;
    selecting one of the first RGB-domain image and the second RGB-domain image as a second reference image, with the other as a second image to be registered;
    based on the second optical flow value, warping the second image to be registered, taking the second reference image as the reference, and registering it to the second reference image, to obtain a third RGB-domain image; and
    fusing the second reference image, the third RGB-domain image, and the first optical flow confidence map to obtain a fourth RGB-domain image.
  15. The image processing method according to claim 14, wherein N≥2 and M≥2;
    before the N frames of first Raw-domain images are demosaiced to obtain the first RGB-domain image, the method further comprises:
    selecting one of the N frames of first Raw-domain images as a third reference image, with the remaining images as third images to be registered;
    computing a third optical flow value between each third image to be registered and the third reference image; and
    based on the third optical flow value, warping each third image to be registered, taking the third reference image as the reference, and registering it to the third reference image; and
    before the M frames of second Raw-domain images are demosaiced to obtain the second RGB-domain image, the method further comprises:
    selecting one of the M frames of second Raw-domain images as a fourth reference image, with the remaining images as fourth images to be registered;
    computing a fourth optical flow value between each fourth image to be registered and the fourth reference image; and
    based on the fourth optical flow value, warping each fourth image to be registered, taking the fourth reference image as the reference, and registering it to the fourth reference image.
  16. The image processing method according to claim 14 or 15, wherein
    selecting one of the first RGB-domain image and the second RGB-domain image as the second reference image, with the other as the second image to be registered, comprises: taking the first RGB-domain image as the second reference image, and taking the second RGB-domain image as the second image to be registered.
  17. The image processing method according to any one of claims 14 to 16, wherein the method further comprises: performing automatic white balance processing, color correction processing, and distortion correction processing on the fourth RGB-domain image.
  18. An image processing method, applied to the processor in the electronic device according to claim 9, the method comprising:
    receiving a user operation, wherein the user operation triggers the first camera to capture N frames of first Raw-domain images and triggers the second camera to capture M frames of second Raw-domain images, N≥1, M≥1, N and M being integers;
    computing a fifth optical flow value between a fifth reference image of the N frames of first Raw-domain images and a sixth reference image of the M frames of second Raw-domain images, and obtaining a second optical flow confidence map; and
    based on the fifth optical flow value, fusing the N frames of first Raw-domain images, the M frames of second Raw-domain images, and the second optical flow confidence map, and performing demosaicing, to obtain an RGB-domain image.
  19. The image processing method according to claim 18, wherein N≥2 and M≥2;
    before the fifth optical flow value between the fifth reference image of the N frames of first Raw-domain images and the sixth reference image of the M frames of second Raw-domain images is computed and the second optical flow confidence map is obtained, the method further comprises:
    selecting one of the N frames of first Raw-domain images as the fifth reference image, with the remaining images as fifth images to be registered;
    computing a sixth optical flow value between each fifth image to be registered and the fifth reference image;
    based on the sixth optical flow value, warping each fifth image to be registered, taking the fifth reference image as the reference, and registering it to the fifth reference image;
    selecting one of the M frames of second Raw-domain images as the sixth reference image, with the remaining images as sixth images to be registered;
    computing a seventh optical flow value between each sixth image to be registered and the sixth reference image; and
    based on the seventh optical flow value, warping each sixth image to be registered, taking the sixth reference image as the reference, and registering it to the sixth reference image.
  20. The image processing method according to claim 18 or 19, wherein the method further comprises: performing automatic white balance processing, color correction processing, and distortion correction processing on the RGB-domain image.
PCT/CN2021/138512 2021-01-26 2021-12-15 Image sensor, camera, electronic device, and control method WO2022160995A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP21922585.1A EP4262184A4 (en) 2021-01-26 2021-12-15 IMAGE SENSOR, CAMERA, ELECTRONIC DEVICE AND CONTROL METHODS

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110106228.9A CN114793262A (zh) 2021-01-26 2021-01-26 一种图像传感器、摄像头、电子设备及控制方法
CN202110106228.9 2021-01-26

Publications (1)

Publication Number Publication Date
WO2022160995A1 true WO2022160995A1 (zh) 2022-08-04

Family

ID=82459520

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/138512 WO2022160995A1 (zh) 2021-01-26 2021-12-15 一种图像传感器、摄像头、电子设备及控制方法

Country Status (3)

Country Link
EP (1) EP4262184A4 (zh)
CN (1) CN114793262A (zh)
WO (1) WO2022160995A1 (zh)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170257584A1 (en) * 2014-08-27 2017-09-07 Sony Semiconductor Solutions Corporation Image processing device, image processing method, and image processing system
CN109979953A * 2019-03-26 2019-07-05 福州鑫图光电有限公司 Image sensor
CN110463197A * 2017-03-26 2019-11-15 Apple Inc. Enhancing spatial resolution in a stereo camera imaging system
CN110996077A * 2019-11-25 2020-04-10 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Image sensor, camera assembly, and mobile terminal
CN111447380A * 2020-05-22 2020-07-24 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Control method, camera assembly, and mobile terminal

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104581100A * 2015-02-12 2015-04-29 Zhang Lijing Color filter array and image processing method
EP3618430B1 (en) * 2017-04-25 2022-02-09 Sony Group Corporation Solid-state image capturing device and electronic instrument
CN109064504B * 2018-08-24 2022-07-15 Shenzhen SenseTime Technology Co., Ltd. Image processing method and apparatus, and computer storage medium
CN111131798B * 2019-10-18 2021-06-01 Huawei Technologies Co., Ltd. Image processing method, image processing apparatus, and imaging apparatus


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
CHAKRABARTI AYAN; FREEMAN WILLIAM T.; ZICKLER TODD: "Rethinking color cameras", 2014 IEEE INTERNATIONAL CONFERENCE ON COMPUTATIONAL PHOTOGRAPHY (ICCP), IEEE, 2 May 2014 (2014-05-02), pages 1 - 8, XP032605762, DOI: 10.1109/ICCPHOT.2014.6831801 *
See also references of EP4262184A4

Also Published As

Publication number Publication date
CN114793262A (zh) 2022-07-26
EP4262184A1 (en) 2023-10-18
EP4262184A4 (en) 2024-04-03

Similar Documents

Publication Publication Date Title
WO2021051996A1 (zh) Image processing method and apparatus
JP5543616B2 (ja) Iterative denoising of color filter array images
JP6461372B2 (ja) Photographing method, photographing apparatus, and terminal
JP5749741B2 (ja) Noise reduction method
WO2021196554A1 (zh) Image sensor, processing system and method, electronic device, and storage medium
CN103688536B (zh) Image processing device and image processing method
JP5853166B2 (ja) Image processing device, image processing method, and digital camera
US20060139468A1 (en) Digital camera
JP2001275029A (ja) Digital camera, image signal processing method therefor, and recording medium
KR20100111614A (ko) Image processing apparatus, image processing method, and recording medium
US20080122946A1 (en) Apparatus and method of recovering high pixel image
EP2720455A1 (en) Image pickup device imaging three-dimensional moving image and two-dimensional moving image, and image pickup apparatus mounting image pickup device
TWI599809B (zh) Lens module array, image sensing device, and digital zoom image fusion method
CN113170061B (zh) Image sensor, imaging apparatus, electronic device, image processing system, and signal processing method
US20190335110A1 (en) Imaging element and imaging apparatus
WO2023016146A1 (zh) Image sensor, image acquisition apparatus, image processing method, and image processor
WO2013027507A1 (ja) Imaging device
WO2013018471A1 (ja) Imaging device
WO2012169301A1 (ja) Imaging element for capturing stereoscopic and planar moving images, and imaging device equipped with the imaging element
CN115280766B (zh) Image sensor, imaging apparatus, electronic device, image processing system, and signal processing method
CN107613183A (zh) Camera system and application method of the camera system
US9743007B2 (en) Lens module array, image sensing device and fusing method for digital zoomed images
WO2022160995A1 (zh) Image sensor, camera, electronic device, and control method
WO2014091706A1 (ja) Imaging device
EP4270931A1 (en) Image processing method, image processing system, electronic device, and readable storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21922585

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2021922585

Country of ref document: EP

Effective date: 20230714

NENP Non-entry into the national phase

Ref country code: DE