WO2022016385A1 - Method of generating corrected pixel data, electrical device and non-transitory computer readable medium - Google Patents

Method of generating corrected pixel data, electrical device and non-transitory computer readable medium

Info

Publication number
WO2022016385A1
Authority
WO
WIPO (PCT)
Prior art keywords
pixel value
color
pixel
color pixel
image sensor
Prior art date
Application number
PCT/CN2020/103319
Other languages
French (fr)
Inventor
Hirotake Cho
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp., Ltd.
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp., Ltd.
Priority to PCT/CN2020/103319
Publication of WO2022016385A1

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/45Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from two or more image sensors being of different type or operating in different modes, e.g. with a CMOS sensor for moving images in combination with a charge-coupled device [CCD] for still images
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/10Circuitry of solid-state image sensors [SSIS]; Control thereof for transforming different wavelengths into image signals
    • H04N25/11Arrangement of colour filter arrays [CFA]; Filter mosaics
    • H04N25/13Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements
    • H04N25/134Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements based on three different wavelength filter elements
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/50Control of the SSIS exposure
    • H04N25/57Control of the dynamic range
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/50Control of the SSIS exposure
    • H04N25/57Control of the dynamic range
    • H04N25/58Control of the dynamic range involving two or more exposures
    • H04N25/581Control of the dynamic range involving two or more exposures acquired simultaneously
    • H04N25/583Control of the dynamic range involving two or more exposures acquired simultaneously with different integration times

Definitions

  • the present disclosure relates to a method of generating a corrected pixel data, an electrical device implementing such method and a non-transitory computer readable medium including program instructions stored thereon for performing such method.
  • Electrical devices such as smartphones and tablet terminals are widely used in our daily life.
  • many of the electrical devices are provided with a camera assembly to capture images.
  • Some of the electrical devices are portable and are thus easy to carry. Therefore, a user of the electrical device can easily take a picture of an object by using the camera assembly of the electrical device anytime, anywhere.
  • the camera assembly has an image sensor to capture images by changing a light having passed through a color filter to an electrical signal. However, if the intensity of the light is high, the electrical signal is saturated, and it results in the halation of a target image displayed on a display.
  • the present disclosure aims to solve at least one of the technical problems mentioned above. Accordingly, the present disclosure needs to provide a method of generating a corrected pixel data, an electrical device implementing such method and a non-transitory computer readable medium including program instructions stored thereon for performing such method.
  • a method of generating a corrected pixel data may include:
  • obtaining a captured pixel data from an image sensor, wherein the captured pixel data includes a plurality of pixel positions, each of the pixel positions includes two signal values, one of the two signal values is one of a first color pixel value, a second color pixel value and a third color pixel value, the other of the two signal values is a fourth color pixel value, a color of a color filter of the image sensor to obtain the first color pixel value is equal to a color of the color filter of the image sensor to obtain the fourth color pixel value, and an exposure time to obtain the fourth color pixel value is shorter than an exposure time to obtain the first color pixel value and/or an analog gain to obtain the fourth color pixel value in the image sensor is lower than an analog gain to obtain the first color pixel value in the image sensor;
  • the reference correlations may be calculated based on the unsaturated second color pixel values and the fourth color pixel values at the neighboring pixel positions.
  • the target correlation may be an average of the reference correlations.
  • the calculating the target correlation may include:
  • the selecting one of the averages may include selecting one of the averages of which difference between the two reference correlations is smallest.
  • each of the reference correlations at the neighboring pixel positions may be a ratio between the second color pixel value and the fourth color pixel value at the same pixel position.
  • the target correlation at the target pixel position may be a ratio between the second color pixel value to be restored and the fourth color pixel value at the target pixel position.
  • a color image data may include the first color pixel value, the second color pixel value and the third color pixel value, and a dark image data may include the fourth color pixel value.
  • the method may further include resuming the first color pixel value in the color image data for the pixel position which does not include the first color pixel value, based on the fourth color pixel value in the dark image data at the same pixel position.
  • the first color pixel value may be resumed based on a ratio between an exposure time to obtain the fourth color pixel value and an exposure time to obtain the first color pixel value in the image sensor.
  • the first color pixel value may be resumed based on a ratio between an analog gain to obtain the fourth color pixel value and an analog gain to obtain the first color pixel value in the image sensor.
  • the color image data may further include a fifth color pixel value and a sixth color pixel value
  • a color of the color filter of the image sensor to obtain the second color pixel value is equal to a color of the color filter of the image sensor to obtain the fifth color pixel value, wherein an exposure time to obtain the fifth color pixel value is shorter than an exposure time to obtain the second color pixel value and/or an analog gain to obtain the fifth color pixel value in the image sensor is lower than an analog gain to obtain the second color pixel value in the image sensor, and
  • a color of the color filter of the image sensor to obtain the third color pixel value is equal to a color of the color filter of the image sensor to obtain the sixth color pixel value, wherein an exposure time to obtain the sixth color pixel value is shorter than an exposure time to obtain the third color pixel value and/or an analog gain to obtain the sixth color pixel value in the image sensor is lower than an analog gain to obtain the third color pixel value in the image sensor.
  • the method may further include:
  • the second color pixel value may be resumed based on a ratio between an exposure time to obtain the fifth color pixel value and an exposure time to obtain the second color pixel value in the image sensor, and
  • the third color pixel value is resumed based on a ratio between an exposure time to obtain the sixth color pixel value and an exposure time to obtain the third color pixel value in the image sensor.
  • the second color pixel value may be resumed based on a ratio between an analog gain to obtain the fifth color pixel value and an analog gain to obtain the second color pixel value in the image sensor, and
  • the third color pixel value may be resumed based on a ratio between an analog gain to obtain the sixth color pixel value and an analog gain to obtain the third color pixel value in the image sensor.
  • a color of the first color pixel value may be green, and a color of the fourth color pixel value may be dark green.
  • a color of the second color pixel value may be one of red and blue
  • a color of the third color pixel value may be the other of the red and the blue
  • a color of the fifth color pixel value may be one of dark red and dark blue, and a color of the sixth color pixel value may be the other of the dark red and the dark blue.
  • an arrangement of the first color pixel value, the second color pixel value and the third color pixel value may be in conformity to a Bayer format.
  • an electrical device may include:
  • a camera assembly configured to be provided with an image sensor to generate a captured pixel data including a plurality of pixel positions, wherein each of the pixel positions includes two signal values, one of the two signal values is one of a first color pixel value, a second color pixel value and a third color pixel value, the other of the two signal values is a fourth color pixel value, a color of a color filter of the image sensor to obtain the first color pixel value is equal to a color of the color filter of the image sensor to obtain the fourth color pixel value, and an exposure time to obtain the fourth color pixel value is shorter than an exposure time to obtain the first color pixel value and/or an analog gain to obtain the fourth color pixel value in the image sensor is lower than an analog gain to obtain the first color pixel value in the image sensor; and
  • a main processor configured to: calculate reference correlations between the second color pixel values and the fourth color pixel values at neighboring pixel positions which are the pixel positions neighbored to a target pixel position at which the second color pixel value is saturated; calculate a target correlation between the second color pixel value and the fourth color pixel value at the target pixel position based on the reference correlations; and restore the second color pixel value at the target pixel position based on the fourth color pixel value at the target pixel position and the target correlation to generate a corrected pixel data.
  • a non-transitory computer readable medium may include program instructions stored thereon for performing, at least, the following:
  • obtaining a captured pixel data from an image sensor, wherein the captured pixel data includes a plurality of pixel positions, wherein each of the pixel positions includes two signal values, one of the two signal values is one of a first color pixel value, a second color pixel value and a third color pixel value, the other of the two signal values is a fourth color pixel value, a color of a color filter of the image sensor to obtain the first color pixel value is equal to a color of the color filter of the image sensor to obtain the fourth color pixel value, and an exposure time to obtain the fourth color pixel value is shorter than an exposure time to obtain the first color pixel value and/or an analog gain to obtain the fourth color pixel value in the image sensor is lower than an analog gain to obtain the first color pixel value in the image sensor;
  • FIG. 1 is a plan view of a first side of an electrical device according to a first embodiment of the present disclosure
  • FIG. 2 is a plan view of a second side of the electrical device according to the first embodiment of the present disclosure
  • FIG. 3 is a block diagram of the electrical device according to the first embodiment of the present disclosure.
  • FIG. 4 is a diagram for showing a color image data in a captured pixel data according to the first embodiment of the present disclosure
  • FIG. 5 is a diagram for showing a dark image data in the captured pixel data according to the first embodiment of the present disclosure
  • FIG. 6 is a diagram for showing a part of a pixel array of an image sensor according to the first embodiment of the present disclosure
  • FIG. 7 is a flowchart of a dynamic range correction process performed by the electrical device according to the first embodiment of the present disclosure (Part 1);
  • FIG. 8 is a flowchart of the dynamic range correction process performed by the electrical device according to the first embodiment of the present disclosure (Part 2);
  • FIG. 9 is a diagram for showing an example of an arrangement of regular green pixel values and an arrangement of dark green pixel values according to the first embodiment of the present disclosure, before the regular green pixel values are resumed;
  • FIG. 10 is a diagram for showing an example of an arrangement of the regular green pixel values and an arrangement of the dark green pixel values according to the first embodiment of the present disclosure, after the regular green pixel values are resumed;
  • FIG. 11 is a diagram for showing an example of an arrangement of regular red pixel values and an arrangement of dark red pixel values according to the first embodiment of the present disclosure, before the regular red pixel values are resumed;
  • FIG. 12 is a diagram for showing an example of an arrangement of the regular red pixel values according to the first embodiment of the present disclosure, after the regular red pixel values are resumed;
  • FIG. 13 is a diagram for showing an example of an arrangement of the dark green pixel values and an arrangement of the regular red pixel values where several regular pixel values are saturated (Part 1);
  • FIG. 14 is a diagram for showing the example of the arrangement of the dark green pixel values and the arrangement of the regular red pixel values where several regular pixel values are saturated (Part 2);
  • FIG. 15 is a diagram for showing an example of a bar graph to explain a relationship of the dark green pixel value, the regular red pixel value and reference correlations;
  • FIG. 16 is a diagram for showing the example of the arrangement of the dark green pixel values and the arrangement of the regular red pixel values where several regular pixel values are saturated (Part 3);
  • FIG. 17 is a diagram for showing the example of the arrangement of the dark green pixel values and the arrangement of the regular red pixel values where several regular pixel values are saturated (Part 4);
  • FIG. 18 shows a sample image in a standard dynamic range in which low brightness pixel values are cut off, and a histogram between the brightness and the number of the pixels in this sample image;
  • FIG. 19 shows another sample image in the standard dynamic range in which high brightness pixel values are cut off, and a histogram between the brightness and the number of the pixels in this sample image;
  • FIG. 20 shows still another sample image in a high dynamic range in which neither low brightness pixel values nor high brightness pixel values are cut off, and a histogram between the brightness and the number of the pixels in this sample image;
  • FIG. 21 is a diagram for showing an arrangement of the regular red pixel values and an arrangement of the dark green pixel values according to a second embodiment of the present disclosure.
  • FIG. 1 illustrates a plan view of a first side of an electrical device 10 according to a first embodiment of the present disclosure
  • FIG. 2 illustrates a plan view of a second side of the electrical device 10 according to the first embodiment of the present disclosure.
  • the first side may be referred to as a back side of the electrical device 10 whereas the second side may be referred to as a front side of the electrical device 10.
  • the electrical device 10 may include a display 20 and a camera assembly 30.
  • the camera assembly 30 includes a first main camera 32, a second main camera 34 and a sub camera 36.
  • the first main camera 32 and the second main camera 34 can capture an image in the first side of the electrical device 10 and the sub camera 36 can capture an image in the second side of the electrical device 10. Therefore, the first main camera 32 and the second main camera 34 are so-called out-cameras whereas the sub camera 36 is a so-called in-camera.
  • the electrical device 10 can be a mobile phone, a tablet computer, a personal digital assistant, and so on.
  • Each of the first main camera 32, the second main camera 34 and the sub camera 36 has an image sensor which converts a light which has passed through a color filter into an electrical signal.
  • a signal value of the electrical signal depends on an amount of the light which has passed through the color filter.
  • the electrical device 10 may have less than three cameras or more than three cameras.
  • the electrical device 10 may have two, four, five, and so on, cameras.
  • FIG. 3 is a block diagram of the electrical device 10 according to the present embodiment.
  • the electrical device 10 may include a main processor 40, an image signal processor 42, a memory 44, a power supply circuit 46 and a communication circuit 48.
  • the display 20, the camera assembly 30, the main processor 40, the image signal processor 42, the memory 44, the power supply circuit 46 and the communication circuit 48 are connected to each other via a bus 50.
  • the main processor 40 executes one or more program instructions stored in the memory 44.
  • the main processor 40 implements various applications and data processing of the electrical device 10 by executing the program instructions.
  • the main processor 40 may be one or more computer processors.
  • the main processor 40 is not limited to one CPU core, but it may have a plurality of CPU cores.
  • the main processor 40 may be a main CPU of the electrical device 10, an image processing unit (IPU) or a DSP provided with the camera assembly 30.
  • the image signal processor 42 controls the camera assembly 30 and processes various kinds of image data captured by the camera assembly 30 to generate a target image data.
  • the image signal processor 42 can execute a de-mosaic process, a noise reduction process, an auto exposure process, an auto focus process, an auto white balance process, a high dynamic range process and so on, for the image data captured by the camera assembly 30.
  • the main processor 40 and the image signal processor 42 collaborate with each other to generate a target image data of the object captured by the camera assembly 30. That is, the main processor 40 and the image signal processor 42 are configured to capture the image of the object by means of the camera assembly 30 and execute various kinds of image processing to the captured image data.
  • the memory 44 stores program instructions to be executed by the main processor 40, and various kinds of data. For example, data of the captured image are stored in the memory 44.
  • the memory 44 may include a high-speed RAM memory, and/or a non-volatile memory such as a flash memory and a magnetic disk memory. That is, the memory 44 may include a non-transitory computer readable medium in which the program instructions are stored.
  • the power supply circuit 46 may have a battery such as a lithium-ion rechargeable battery and a battery management unit (BMU) for managing the battery.
  • the communication circuit 48 is configured to receive and transmit data to communicate with base stations of the telecommunication network system, the Internet or other devices via wireless communication.
  • the wireless communication may adopt any communication standard or protocol, including but not limited to GSM (Global System for Mobile communication), CDMA (Code Division Multiple Access), LTE (Long Term Evolution), LTE-Advanced and 5th generation (5G).
  • the communication circuit 48 may include an antenna and an RF (radio frequency) circuit.
  • FIG. 4 shows a color image data in a captured pixel data
  • FIG. 5 shows a dark image data in the captured pixel data.
  • the captured pixel data is an image data captured by the camera assembly 30.
  • the captured pixel data is generated by the image sensor of the first main camera 32.
  • the captured pixel data may be generated by the image sensor of the second main camera 34 or the sub camera 36.
  • the color image data in the captured pixel data includes green pixel values G1, red pixel values R1, and blue pixel values B1 as well as dark red pixel values R2 and dark blue pixel values B2.
  • the dark image data in the captured pixel data includes dark green pixel values G2.
  • the green pixel values G1, the red pixel values R1 and the blue pixel values B1 may also be referred to as the regular green pixel values G1, the regular red pixel values R1 and the regular blue pixel values B1, respectively.
  • the regular green pixel values G1 and the dark green pixel values G2 are simply referred to as green pixel values G
  • the regular red pixel values R1 and the dark red pixel values R2 are simply referred to as red pixel values R
  • the regular blue pixel values B1 and the dark blue pixel values B2 are simply referred to as blue pixel values B.
  • the color image data includes a plurality of pixel positions, and one pixel position includes one signal value from the image sensor of the first main camera 32.
  • Each of the pixel positions of the color image data includes one of the green pixel value G1, the red pixel value R1, the dark red pixel value R2, the blue pixel value B1 and the dark blue pixel value B2.
  • the color image data is in conformity to a Bayer format. That is, an arrangement of the regular green pixel values G1, the red pixel values R and the blue pixel values B is in conformity to the Bayer format, and therefore the number of the green pixels is twice as many as the number of the red pixels or the blue pixels in the color filter of the image sensor.
  • the dark red pixel values R2 are sparsely inlaid in the regular red pixel values R1
  • the dark blue pixel values B2 are sparsely inlaid in the regular blue pixel values B1. This is because, even if the regular red pixel value R1 is saturated, the saturated red pixel value R1 can be restored by using the dark red pixel values R2 neighbored to the saturated red pixel value R1. Similarly, even if the regular blue pixel value B1 is saturated, the saturated blue pixel value B1 can be restored by using the dark blue pixel values B2 neighbored to the saturated blue pixel value B1.
  • the dark image data also includes a plurality of the pixel positions, and the number of the pixel positions of the dark image data is equal to the number of the pixel positions of the color image data. That is, each of the pixel positions of the dark image data corresponds to one pixel position of the color image data which is located at the same position of the target image data to be displayed on the display 20. In other words, the arrangement of the pixel positions in the color image data is matched with the arrangement of the pixel positions in the dark image data.
  • in the dark image data, one pixel position includes one dark green pixel value G2. Therefore, the number of the dark green pixel values G2 in the dark image data is double the number of the green pixel values G1 in the color image data.
  • In FIG. 4 and FIG. 5, only a part of the pixel positions is illustrated, and the color image data and the dark image data may include more pixel positions than those shown in FIG. 4 and FIG. 5.
  • FIG. 6 shows a part of a pixel array of the image sensor of the first main camera 32 of the camera assembly 30 in the present embodiment.
  • four pixel positions are illustrated, but the number of the pixel positions of the pixel array of the image sensor is equal to the number of the pixel positions of the color image data shown in FIG. 4 or the dark image data shown in FIG. 5.
  • one pixel position is composed of four physical pixel elements. That is, the pixel array of the present embodiment employs 2X2 binning technology.
  • each pixel position includes two dark green physical pixel elements PG2 which are arranged at diagonal corners in each of the pixel positions. That is, a signal value of the dark green pixel value G2 is generated by combining two electric charges in the two dark green physical pixel elements PG2.
  • each pixel position also includes two regular green physical pixel elements PG1, two red physical pixel elements PR, or two blue physical pixel elements PB, which are inversely arranged at the other diagonal corners in each of the pixel positions. That is, a signal value of the regular green pixel value G1, a signal value of the red pixel value R, and a signal value of the blue pixel value B are generated by combining two electric charges in the two regular green physical pixel elements PG1, the two red physical pixel elements PR, and the two blue physical pixel elements PB, respectively.
  • One of the two signal values is one of the green pixel value G1, the red pixel value R1 (R) and the blue pixel value B1 (B) , and this signal value is for the color image data.
  • the other of the two signal values is the dark green pixel value G2, and this signal value is for the dark image data.
  • a ratio of the green physical pixel elements PG1 to the red physical pixel elements PR or the blue physical pixel elements PB is 2.
  • dark red physical pixel elements and dark blue physical pixel elements are sparsely inlaid in the red physical pixel elements PR and the blue physical pixel elements PB, respectively.
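  • As a concrete reading of FIG. 6 and the description above, the following Python sketch combines the four physical pixel elements of one pixel position into its two signal values. Which diagonal carries the dark green elements PG2, and the array layout itself, are assumptions made here for illustration only.

```python
import numpy as np

def bin_pixel_position(cell: np.ndarray) -> tuple:
    """Combine one 2X2 cell of physical pixel elements into the two
    signal values of a pixel position. The two dark green elements PG2
    are assumed to lie on one diagonal and the two color elements
    (PG1, PR or PB) on the other diagonal."""
    dark_green = int(cell[0, 0]) + int(cell[1, 1])  # two PG2 charges combined
    color = int(cell[0, 1]) + int(cell[1, 0])       # two PG1/PR/PB charges combined
    return color, dark_green

# Example: raw charges of one pixel position
cell = np.array([[120, 480],
                 [470, 130]])
color_value, dark_green_value = bin_pixel_position(cell)  # 950 and 250
```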
  • a color of the color filter of the image sensor to obtain the regular green pixel value G1 is equal to a color of the color filter of the image sensor to obtain the dark green pixel value G2.
  • an exposure time to obtain the dark green pixel value G2 is shorter than an exposure time to obtain the regular green pixel value G1, and/or an analog gain to obtain the dark green pixel value G2 is lower than an analog gain to obtain the regular green pixel value G1 in the image sensor.
  • a color of the color filter of the image sensor to obtain the regular red pixel value R1 is equal to a color of the color filter of the image sensor to obtain the dark red pixel value R2.
  • an exposure time to obtain the dark red pixel value R2 is shorter than an exposure time to obtain the regular red pixel value R1, and/or an analog gain to obtain the dark red pixel value R2 is lower than an analog gain to obtain the regular red pixel value R1 in the image sensor.
  • a color of the color filter of the image sensor to obtain the regular blue pixel value B1 is equal to a color of the color filter of the image sensor to obtain the dark blue pixel value B2.
  • an exposure time to obtain the dark blue pixel value B2 is shorter than an exposure time to obtain the regular blue pixel value B1, and/or an analog gain to obtain the dark blue pixel value B2 is lower than an analog gain to obtain the regular blue pixel value B1 in the image sensor.
  • FIG. 7 and FIG. 8 show a flowchart of a dynamic range correction process performed by the electrical device 10 according to the present embodiment.
  • the dynamic range correction process is executed, for example, by the main processor 40 in order to correct a dynamic range of the captured pixel data from the image sensor.
  • the main processor 40 may cooperate with the image signal processor 42 to execute the dynamic range correction process. Therefore, the main processor 40 and the image signal processor 42 may constitute an image processor in the present embodiment.
  • program instructions of the dynamic range correction process are stored in the non-transitory computer readable medium of the memory 44. Therefore, for example, when the program instructions are read out from the memory 44 and executed in the main processor 40, the main processor 40 implements the dynamic range correction process illustrated in FIG. 7 and FIG. 8.
  • the main processor 40 obtains the captured pixel data from the image sensor (Step S10) .
  • the main processor 40 obtains the captured pixel data from the camera assembly 30. That is, the camera assembly 30 captures an image of an object and generates its captured pixel data including the color image data and the dark image data.
  • each of the pixel positions of the captured pixel data includes two signal values.
  • One of the two signal values constitutes the color image data and the other of the two signal values constitutes the dark image data.
  • the captured pixel data may be temporarily stored in the memory 44.
  • the main processor 40 obtains the color image data and the dark image data from the captured pixel data (Step S12) . That is, the color image data and the dark image data are extracted from the captured pixel data. The color image data and the dark image data are temporarily stored in the memory 44.
  • the main processor 40 generates a full regular green pixel data based on the regular green pixel values G1 of the color image data and the dark green pixel values G2 of the dark image data (Step S14) .
  • FIG. 9 shows an example of the arrangement of the regular green pixel values G1 and the arrangement of the dark green pixel values G2.
  • in the dark image data, each of the pixel positions includes one dark green pixel value G2.
  • every other pixel position includes one regular green pixel value G1.
  • the data format of the color image data is the Bayer format. Therefore, the number of the regular green pixel values G1 is half of the number of the dark green pixel values G2.
  • the regular green pixel values G1 at the pixel positions which do not include the regular green pixel values G1 in the color image data can be resumed based on the dark green pixel values G2 in the dark image data. For example, if the exposure time ETG2 to obtain the dark green pixel values G2 is 1/80 seconds whereas the exposure time ETG1 to obtain the regular green pixel values G1 is 1/10 seconds, the exposure time ETG1 is eight times as long as the exposure time ETG2. Therefore, the regular green pixel values G1 at the pixel positions which do not include the regular green pixel values G1 can be calculated as the dark green pixel value G2 X 8.
  • each of the regular green pixel values G1 and each of the dark green pixel values G2 have 10 bits. Therefore, the number of the gradation values of the regular green pixel values G1 and that of the dark green pixel values G2 are each 1024 (2^10). That is, the minimum regular green pixel value G1 and the minimum dark green pixel value G2 are 0, and the maximum regular green pixel value G1 and the maximum dark green pixel value G2 are 1023.
  • on the other hand, the number of the gradation values of the resumed regular green pixel values G1RES is 8192 (2^13).
  • since 1023 X 8 = 8184, the maximum resumed regular green pixel value G1RES is 8184.
  • for example, if the dark green pixel value G2 is 30, the resumed regular green pixel value G1RES is 240 (30 X 8). Similarly, if the dark green pixel value G2 is 510, the resumed regular green pixel value G1RES is 4080 (510 X 8).
  • in this manner, the full regular green pixel data as shown in FIG. 10 is generated. That is, the resumed regular green pixel values G1RES are calculated based on the dark green pixel values G2 at the pixel positions which do not include the regular green pixel values G1. As a result, each of the pixel positions has the regular green pixel value G1 or the resumed regular green pixel value G1RES.
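  • To make the resuming operation concrete, here is a minimal Python sketch of the Step S14. It assumes 10-bit input arrays in which pixel positions lacking a regular green pixel value are marked with -1, and the exposure-time ratio of 8 from the example above; the function and array names are illustrative, not from the patent. The dark red pixel values R2 and the dark blue pixel values B2 (Steps S16 and S18) can be scaled in exactly the same way at their own pixel positions.

```python
import numpy as np

def resume_full_green(g1: np.ndarray, g2: np.ndarray,
                      exposure_ratio: int = 8) -> np.ndarray:
    """Fill the pixel positions that lack a regular green pixel value G1
    by scaling the dark green pixel value G2 at the same position.

    g1: regular green pixel values, -1 where the position has no G1
    g2: dark green pixel values, one per pixel position
    exposure_ratio: ET(G1) / ET(G2), e.g. (1/10 s) / (1/80 s) = 8
    """
    full = g1.astype(np.int32)
    missing = full < 0
    # G1RES = G2 X 8; e.g. G2 = 30 -> 240 and G2 = 510 -> 4080, so the
    # resumed values range up to 1023 X 8 = 8184 (13 bits).
    full[missing] = g2.astype(np.int32)[missing] * exposure_ratio
    return full
```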
  • hereinafter, when it is not necessary to distinguish the resumed regular green pixel values G1RES from the regular green pixel values G1, the resumed regular green pixel values G1RES and the regular green pixel values G1 are simply referred to as the regular green pixel values G1.
  • the main processor 40 resumes the regular red pixel values R1 from the dark red pixel values R2 at the same pixel positions (Step S16) . That is, the regular red pixel values R1 are resumed based on the dark red pixel values R2 in the same manner as the resumed regular green pixel values G1RES.
  • FIG. 11 shows an example of the arrangement of the regular red pixel values R1 and the arrangement of the dark red pixel values R2 in the present embodiment.
  • the dark red pixel values R2 are sparsely inlaid in the red pixel values R.
  • the red pixel values R are allocated once in every four pixel positions, and the dark red pixel values R2 are allocated once in every four red pixel values R.
  • the same exposure time as the green pixels is also applied to the red pixels. That is, for example, if the exposure time ETR2 to obtain the dark red pixel values R2 is 1/80 seconds whereas the exposure time ETR1 to obtain the regular red pixel values R1 is 1/10 seconds, the exposure time ETR1 is eight times as long as the exposure time ETR2. Therefore, the resumed regular red pixel values R1RES in the pixel positions which include the dark red pixel values R2 can be calculated as the dark red pixel values R2 X 8.
  • for example, if the dark red pixel value R2 is 30, the resumed regular red pixel value R1RES is 240 (30 X 8). Similarly, if the dark red pixel value R2 is 510, the resumed regular red pixel value R1RES is 4080 (510 X 8).
  • the dark red pixel values R2 are replaced by the resumed regular red pixel values R1RES as shown in FIG. 12. That is, the resumed regular red pixel values R1RES are calculated based on the dark red pixel values R2 at the same pixel positions. As a result, every fourth pixel position has the regular red pixel value R1 or the resumed regular red pixel value R1RES.
  • hereinafter, when it is not necessary to distinguish the resumed regular red pixel values R1RES from the regular red pixel values R1, the resumed regular red pixel values R1RES and the regular red pixel values R1 are simply referred to as the regular red pixel values R1.
  • the main processor 40 resumes the regular blue pixel values B1 from the dark blue pixel values B2 (Step S18) . That is, the regular blue pixel values B1 are resumed based on the dark blue pixel values B2 at the same pixel positions in the same manner as the resumed regular red pixel values R1RES.
  • since the process for the resumed regular blue pixel values B1RES is substantially the same as the process for the resumed regular red pixel values R1RES, the detailed explanation of the Step S18 is omitted.
  • the main processor 40 judges whether any saturated red pixel value R1 exists in the color image data which has been processed in the Step S16 (Step S20) .
  • if the regular red pixel value R1 has the maximum value in the gradation values, this regular red pixel value R1 is judged as being saturated, and the pixel position of the saturated regular red pixel value R1 is regarded as a target pixel position to be restored.
  • the maximum regular red pixel value R1 is 1023. Therefore, if the regular red pixel value R1 is 1023, it is judged that the regular red pixel value R1 is saturated.
  • if the main processor 40 judges that the saturated regular red pixel value R1 exists in the color image data (Step S20: Yes), the main processor 40 calculates reference correlations K between the regular red pixel values R1 and the dark green pixel values G2 at neighboring pixel positions, which are the pixel positions neighbored to the target pixel position (Step S22).
  • FIG. 13 shows an example of the arrangement of the dark green pixel values G2 and the arrangement of the regular red pixel values R1.
  • the regular red pixel values R1 at the neighboring pixel positions NP2, NP4 and NP7 and the target pixel position TPP are saturated.
  • the pixel position TPP is the target pixel position at which the regular red pixel value R1 should be restored.
  • that is, the target pixel position TPP is the pixel position selected in the Step S20 as having the saturated regular red pixel value R1 which should be restored.
  • the regular red pixel values R1 at the neighboring pixel positions NP1, NP3, NP5, NP6 and NP8 are not saturated. That is, the red pixel values R1 at the neighboring pixel positions NP1, NP3, NP5, NP6 and NP8 are less than 1023.
  • the reference correlations between the regular red pixel values R1 and the dark green pixel values G2 at the neighboring pixel positions NP1, NP3, NP5, NP6 and NP8 are calculated.
  • FIG. 14 shows the reference correlations K1 through K8 at the neighboring pixel positions NP1 through NP8.
  • the reference correlations K1, K3, K5, K6 and K8 are calculated.
  • each of the reference correlations K is a ratio between the dark green pixel value G2 and the regular red pixel value R1 at the same pixel position.
  • specifically, the reference correlation K can be defined as the regular red pixel value R1 / the dark green pixel value G2, that is, K = R1 / G2.
  • FIG. 15 is an example of a bar graph showing the relationship of the dark green pixel value G2, the regular red pixel value R1 and the reference correlations K. As shown in FIG. 15, the correlation K indicates a ratio of the regular red pixel value R1 to the dark green pixel value G2.
  • for example, if the regular red pixel value R1 is 825 and the dark green pixel value G2 is 550, the reference correlation K is 1.5 (825 / 550). If the regular red pixel value R1 is 880 and the dark green pixel value G2 is 550, the reference correlation K is 1.6 (880 / 550).
  • each of the reference correlations K1, K3, K5, K6 and K8 are calculated based on the dark green pixel value G2 and the regular red pixel value R1 in the same manner as shown in FIG. 15.
  • the main processor 40 calculates a target correlation KX between the regular red pixel value R1 and the dark green pixel value G2 at the target pixel position TPP based on the reference correlations K calculated in the Step S22 (Step S24) .
  • the target correlation KX is an average of the reference correlations K calculated in the Step S22.
  • the target correlation KX is the average of the reference correlations K1, K3, K5, K6 and K8.
  • the target correlation KX is calculated based on the reference correlations K, which are calculated based on the non-saturated regular red pixel values R1 and the dark green pixel values G2 at the neighboring pixel positions. That is, the saturated regular red pixel values R1 are excluded from the calculation in order to accurately obtain the reference correlations K and the target correlation KX.
  • for example, if one of the reference correlations K is 1.5 and the other of the reference correlations K is 1.6, the target correlation KX is 1.55.
  • the main processor 40 restores the regular red pixel value R1TARGET at the target pixel position TPP based on the dark green pixel value G2 at the target pixel position TPP and the target correlation KX calculated in the Step S24 (Step S26) .
  • the regular red pixel value R1TARGET at the target pixel position TPP is calculated as (the dark green pixel value G2 at the target pixel position TPP) X (the target correlation KX) .
  • for example, if the dark green pixel value G2 at the target pixel position TPP is 600, the regular red pixel value R1TARGET at the target pixel position TPP is 930 (600 X 1.55).
  • for example, if an exact regular red pixel value R1EXACT, i.e., the value which would have been obtained without saturation, is 1150, the restored regular red pixel value R1TARGET (930) is close to the exact regular red pixel value R1EXACT (1150).
  • the regular red pixel value R1TARGET restored by the present method can exceed the regular red pixel values R1 at the neighboring pixel positions of the target pixel position TPP.
  • in this manner, the regular red pixel value R1TARGET at the target pixel position TPP has been restored. That is, the saturated regular red pixel value R1 at the target pixel position has been replaced with the restored regular red pixel value R1TARGET. A code sketch of this restoration is given below.
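  • The restoration loop of the Steps S20 through S26 can be sketched in Python as follows. This is a hedged illustration, not the patent's literal implementation: it assumes r1 and g2 are aligned arrays holding the regular red pixel values and the dark green pixel values at the red pixel positions, so that the eight array neighbors stand for the neighboring red pixel positions NP1 through NP8 of FIG. 13.

```python
import numpy as np

SATURATION = 1023  # maximum 10-bit gradation value

def restore_saturated_red(r1: np.ndarray, g2: np.ndarray) -> np.ndarray:
    """Restore each saturated regular red pixel value R1 from the dark
    green pixel value G2 at the same position and the average of the
    reference correlations K at the unsaturated neighboring positions."""
    restored = r1.astype(np.float64)
    rows, cols = r1.shape
    for y in range(rows):
        for x in range(cols):
            if r1[y, x] < SATURATION:
                continue  # not a target pixel position (Step S20)
            ks = []
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    if dy == 0 and dx == 0:
                        continue
                    ny, nx = y + dy, x + dx
                    if not (0 <= ny < rows and 0 <= nx < cols):
                        continue
                    if r1[ny, nx] < SATURATION and g2[ny, nx] > 0:
                        ks.append(r1[ny, nx] / g2[ny, nx])  # K = R1 / G2 (Step S22)
            if ks:
                kx = sum(ks) / len(ks)          # target correlation KX (Step S24)
                restored[y, x] = g2[y, x] * kx  # R1TARGET = G2 X KX (Step S26)
    return restored
```

  • With the numbers of the example above (reference correlations 1.5 and 1.6, hence KX = 1.55, and G2 = 600 at the target pixel position), this sketch restores R1TARGET = 930. The same loop restores the saturated regular blue pixel values B1 when r1 is replaced by the blue plane (the Steps S30 through S36).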
  • the main processor 40 then returns the dynamic range correction process to the Step S20 and judges again whether any saturated red pixel value R1 exists in the color image data. If the main processor 40 judges that the saturated regular red pixel value R1 still exists in the color image data (Step S20: Yes), the main processor 40 repeats the process from the Step S22.
  • if the main processor 40 judges that the saturated regular red pixel value R1 does not exist in the color image data anymore (Step S20: No), the main processor 40 judges whether any saturated blue pixel value B1 exists in the color image data (Step S30). That is, after the saturated regular red pixel values R1 have been restored, the dynamic range correction process proceeds to the next process of restoring the saturated regular blue pixel values B1.
  • the process to restore the saturated regular blue pixel values B1 is substantially the same as the process to restore the saturated regular red pixel values R1 described above. That is, the Step S30 through the Step S36 are substantially the same as the Step S20 through the Step S26.
  • the main processor 40 repeats the Step S30 through the Step S36 until the saturated regular blue pixel values B1 do not exist in the color image data anymore. After the saturated regular blue pixel values B1 have been restored (Step S30: No), the dynamic range correction process is completed.
  • the color image data and the dark image data are inputted to the image signal processor 42 to generate the target image data, for example, to be displayed on the display 20 or to be stored in the memory 44.
  • the color image data having been subjected to the dynamic range correction process and the dark image data are directly inputted to the image signal processor 42. That is, the color image data having been subjected to the dynamic range correction process is the high dynamic range image data (HDR image data) . Therefore, the image signal processor 42 processes the high dynamic range image data to generate the target image data.
  • the target image data may also be the high dynamic range image data, and thus the format of the target image data may be the high dynamic range image format.
  • the color image data may be inputted to the image signal processor 42 along with the dark image data.
  • the main processor 40 adjusts the pixel values in the color image data so that the color image data does not include a halation region or a black defect region. That is, a bright region in the color image data is shifted to be dark whereas a dark region in the color image data is shifted to be bright. More precisely, the high pixel values are lowered to avoid generating the halation region in the color image data whereas the low pixel values are raised to avoid generating the black defect region in the color image data. In general, this process is also called tone mapping, and an illustrative sketch is shown below.
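  • As an illustration only, a simple global tone mapping curve of the kind alluded to above can be sketched as follows. The extended Reinhard operator used here is a common generic choice and an assumption of this sketch; the present disclosure does not prescribe a particular tone mapping method.

```python
import numpy as np

def simple_tone_map(hdr: np.ndarray, sdr_max: float = 1023.0,
                    hdr_max: float = 8184.0) -> np.ndarray:
    """Compress the high pixel values (avoiding halation regions) and
    relatively raise the low pixel values (avoiding black defect
    regions) so the HDR data fits a standard dynamic range."""
    x = hdr.astype(np.float64) / sdr_max      # SDR white maps to 1.0
    w = hdr_max / sdr_max                     # HDR white point, about 8.0
    y = x * (1.0 + x / (w * w)) / (1.0 + x)   # extended Reinhard curve
    return np.clip(y * sdr_max, 0.0, sdr_max).astype(np.uint16)
```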
  • the image signal processor 42 processes the standard dynamic range image data (SDR image data) to generate the target image data.
  • the target image data is also the standard dynamic range image data, and thus the format of the target image data is the standard dynamic range image format.
  • FIG. 18 shows a sample image in the standard dynamic range in which low brightness pixel values are cut off and a histogram between the brightness and the number of the pixels in this sample image. As shown in FIG. 18, the sample image contains the black defect regions and thus the user cannot observe an inside of a room.
  • FIG. 19 shows another sample image in the standard dynamic range in which high brightness pixel values are saturated and a histogram between the brightness and the number of the pixels in this sample image.
  • the sample image contains the halation regions and thus the user cannot observe an outside of the room. That is, the pixel values of the outside of the room are saturated.
  • FIG. 20 shows still another sample image in the high dynamic range in which neither low brightness pixel values nor high brightness pixel values are cut off, and a histogram between the brightness and the number of the pixels in this sample image.
  • the sample image does not contain the black defect regions or the halation regions, and thus the user can observe both the inside of the room and the outside of the room.
  • the saturated regular red pixel values R1 are restored based on the dark green pixel value G2 at the target pixel position TPP and the target correlation KX which is calculated based on the reference correlations K between the regular red pixel values R1 and the dark green pixel values G2 at the neighboring pixel positions. That is, the saturated regular red pixel value R1 is restored by correlations between the regular red pixel values R1 and the dark green pixel values G2 instead of the interpolation of the neighboring regular red pixel values R1. Therefore, it is possible to restore the saturated regular red pixel value R1 more precisely.
  • the saturated regular blue pixel values B1 are restored based on the dark green pixel value G2 at the target pixel position TPP and the target correlation KX which is calculated based on the reference correlations K between the regular blue pixel values B1 and the dark green pixel values G2 at the neighboring pixel positions. That is, the saturated regular blue pixel value B1 is restored by correlations between the regular blue pixel values B1 and the dark green pixel values G2 instead of the interpolation of the neighboring regular blue pixel values B1. Therefore, it is possible to restore the saturated regular blue pixel value B1 more precisely.
  • the target correlation KX can also be calculated by taking into consideration directions of the neighboring pixel positions.
  • the electrical device 10 according to the second embodiment will be explained.
  • FIG. 21 shows the arrangement of the regular red pixel values R1 and the arrangement of the dark green pixel values G2 according to the second embodiment.
  • in the second embodiment, the target correlation KX is calculated by selecting one of the following four correlation averages in the Step S24 of the dynamic range correction process.
  • the first average is a correlation average in the vertical direction of the target pixel position TPP. That is, the first average is the average of the reference correlation K2 immediately on the upper side of the target pixel position TPP and the reference correlation K7 immediately on the lower side of the target pixel position TPP.
  • the second average is a correlation average in the horizontal direction of the target pixel position TPP. That is, the second average is the average of the reference correlation K4 on the left side of the target pixel position TPP and the reference correlation K5 on the right side of the target pixel position TPP.
  • the third average is a correlation average in the top left to bottom right diagonal direction of the target pixel position TPP. That is, the third average is the average of the reference correlation K1 on the left upper side of the target pixel position TPP and the reference correlation K8 on the right lower side of the target pixel position TPP.
  • the fourth average is a correlation average in the top right to bottom left diagonal direction of the target pixel position TPP. That is, the fourth average is the average of the reference correlation K3 on the right upper side of the target pixel position TPP and the reference correlation K6 on the left lower side of the target pixel position TPP.
  • the main processor 40 selects the one of the four averages of which the difference between the two reference correlations K is the smallest. That is, if the two reference correlations K are close in value, it can be considered that a variation rate between the regular red pixel values R1 and the dark green pixel values G2 is stable, or that the two regular red pixel values R1 are quite similar to each other. Accordingly, it can be expected that the average of which the difference between the two reference correlations K is the smallest most precisely represents the correlation between the regular red pixel value R1TARGET and the dark green pixel value G2 at the target pixel position TPP. This is the reason why the main processor 40 selects the one average which has the smallest difference between the two reference correlations K as the target correlation KX.
  • one or more saturated regular red pixel values R1 may exist at the neighboring pixel positions.
  • the neighboring pixel positions including the saturated regular red pixel values R1 are excluded when calculating the reference correlation K. Therefore, no correlation average including the saturated regular red pixel values R1 is selected as the target correlation KX.
  • in such a case, the four pairs of the reference correlations K at the neighboring pixel positions cannot always be calculated, and thus one, two or three pairs of the reference correlations K at the neighboring pixel positions in one, two or three different directions are used to calculate the correlation averages between the two reference correlations K.
  • also in this case, the dynamic range correction process selects, as the target correlation KX, the one of the correlation averages of which the difference between the two reference correlations K is the smallest, as shown in the sketch below.
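  • A sketch of this directional selection in the second-embodiment Step S24, assuming the reference correlations are passed in a dict keyed 1 through 8 for the neighboring pixel positions NP1 through NP8 of FIG. 14 (K2/K7 vertical, K4/K5 horizontal, K1/K8 and K3/K6 diagonal), with saturated neighbors simply left out of the dict; the function and parameter names are illustrative.

```python
from typing import Dict, Optional

def directional_target_correlation(k: Dict[int, float]) -> Optional[float]:
    """Among the vertical, horizontal and two diagonal pairs of
    reference correlations, pick the pair whose two values differ
    least and return its average as the target correlation KX."""
    pairs = [
        (k.get(2), k.get(7)),  # first average: vertical direction
        (k.get(4), k.get(5)),  # second average: horizontal direction
        (k.get(1), k.get(8)),  # third average: upper left to lower right
        (k.get(3), k.get(6)),  # fourth average: upper right to lower left
    ]
    candidates = [(abs(a - b), (a + b) / 2.0)
                  for a, b in pairs if a is not None and b is not None]
    if not candidates:
        return None  # no complete pair; fall back, e.g., to a plain average
    return min(candidates)[1]  # average of the pair with the smallest difference

# Example: |K2 - K7| = 0.1 beats |K4 - K5| = 0.6, so KX = 1.55.
kx = directional_target_correlation({2: 1.5, 7: 1.6, 4: 1.2, 5: 1.8})
```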
  • for the saturated regular blue pixel values B1, the target correlation KX is calculated in the Step S34 in the same manner as that for the saturated regular red pixel values R1 in the Step S24.
  • the target correlation KX at the target pixel position TPP can be calculated and the saturated regular red pixel value R1 and the saturated regular blue pixel value B1 can be restored.
  • the dark image data is generated in green
  • another color may be used to generate the dark image data.
  • yellow may be used to generate the dark image data.
  • the color filter of the image sensor of the camera assembly 30 is composed of red, yellow and blue (RYB) , and the color image data is composed of red, yellow and blue whereas the dark image data is composed of yellow.
  • the color image data may include more than three colors.
  • the color image data may include green pixel values, red pixel values, blue pixel values and yellow pixel values. That is, the color image data may include a plurality of pixels of at least three colors.
  • the color image data also includes the dark red pixel values R2 and the dark blue pixel values B2
  • the color image data does not necessarily include the dark red pixel values R2 and the dark blue pixel values B2. That is, the color image data may not include the dark red pixel values R2 and the dark blue pixel values B2.
  • the terms “first” and “second” are used herein for purposes of description and are not intended to indicate or imply relative importance or significance or to imply the number of indicated technical features.
  • the feature defined with “first” and “second” may comprise one or more of this feature.
  • “a plurality of” means two or more than two, unless specified otherwise.
  • the terms “mounted”, “connected”, “coupled” and the like are used broadly, and may be, for example, fixed connections, detachable connections, or integral connections; may also be mechanical or electrical connections; may also be direct connections or indirect connections via intervening structures; may also be inner communications of two elements, which can be understood by those skilled in the art according to specific situations.
  • a structure in which a first feature is “on” or “below” a second feature may include an embodiment in which the first feature is in direct contact with the second feature, and may also include an embodiment in which the first feature and the second feature are not in direct contact with each other, but are contacted via an additional feature formed therebetween.
  • a first feature “on”, “above” or “on top of” a second feature may include an embodiment in which the first feature is right or obliquely “on”, “above” or “on top of” the second feature, or just means that the first feature is at a height higher than that of the second feature; while a first feature “below”, “under” or “on bottom of” a second feature may include an embodiment in which the first feature is right or obliquely “below”, “under” or “on bottom of” the second feature, or just means that the first feature is at a height lower than that of the second feature.
  • Any process or method described in a flow chart or described herein in other ways may be understood to include one or more modules, segments or portions of codes of executable instructions for achieving specific logical functions or steps in the process, and the scope of a preferred embodiment of the present disclosure includes other implementations, in which it should be understood by those skilled in the art that functions may be implemented in a sequence other than the sequences shown or discussed, including in a substantially identical sequence or in an opposite sequence.
  • the logic and/or step described in other manners herein or shown in the flow chart, for example, a particular sequence table of executable instructions for realizing the logical function may be specifically achieved in any computer readable medium to be used by the instruction execution system, device or equipment (such as the system based on computers, the system comprising processors or other systems capable of obtaining the instruction from the instruction execution system, device and equipment and executing the instruction) , or to be used in combination with the instruction execution system, device and equipment.
  • the computer readable medium may be any device adaptive for including, storing, communicating, propagating or transferring programs to be used by or in combination with the instruction execution system, device or equipment.
  • examples of the computer readable medium comprise, but are not limited to: an electronic connection (an electronic device) with one or more wires, a portable computer enclosure (a magnetic device), a random access memory (RAM), a read only memory (ROM), an erasable programmable read-only memory (EPROM or a flash memory), an optical fiber device and a portable compact disk read-only memory (CDROM).
  • the computer readable medium may even be a paper or other appropriate medium on which the programs can be printed, because the paper or other appropriate medium may be optically scanned, and then edited, decrypted or processed with other appropriate methods when necessary, to obtain the programs in an electric manner, which may then be stored in the computer memories.
  • each part of the present disclosure may be realized by the hardware, software, firmware or their combination.
  • a plurality of steps or methods may be realized by the software or firmware stored in the memory and executed by the appropriate instruction execution system.
  • the steps or methods may be realized by one or a combination of the following techniques known in the art: a discrete logic circuit having a logic gate circuit for realizing a logic function of a data signal, an application-specific integrated circuit having an appropriate combination logic gate circuit, a programmable gate array (PGA) , a field programmable gate array (FPGA) , etc.
  • each function cell of the embodiments of the present disclosure may be integrated in a processing module, or these cells may exist separately and physically, or two or more cells may be integrated in a processing module.
  • the integrated module may be realized in a form of hardware or in a form of software function modules. When the integrated module is realized in a form of software function module and is sold or used as a standalone product, the integrated module may be stored in a computer readable storage medium.
  • the storage medium mentioned above may be read-only memories, magnetic disks, CD, etc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Color Television Image Signal Generators (AREA)

Abstract

A method of generating a corrected pixel data according to the embodiments of the present disclosure includes obtaining a captured pixel data including a plurality of pixel positions; calculating reference correlations between second color pixel values and fourth color pixel values at neighboring pixel positions which are the pixel positions neighbored to a target pixel position, wherein the target pixel position is one of the pixel positions at which the second pixel value is saturated; calculating a target correlation between the second pixel value and the fourth pixel value at the target pixel position based on the reference correlations; and restoring the second pixel value at the target pixel position based on the fourth pixel value at the target pixel position and the target correlation to generate the corrected pixel data.

Description

METHOD OF GENERATING CORRECTED PIXEL DATA, ELECTRICAL DEVICE AND NON-TRANSITORY COMPUTER READABLE MEDIUM
TECHNICAL FIELD
The present disclosure relates to a method of generating corrected pixel data, an electrical device implementing such a method and a non-transitory computer readable medium including program instructions stored thereon for performing such a method.
BACKGROUND
Electrical devices such as smartphones and tablet terminals are widely used in our daily life. Nowadays, many of the electrical devices are provided with a camera assembly to capture images. Some of the electrical devices are portable and are thus easy to carry. Therefore, a user of the electrical device can easily take a picture of an object by using the camera assembly of the electrical device anytime, anywhere.
The camera assembly has an image sensor which captures images by converting light that has passed through a color filter into an electrical signal. However, if the intensity of the light is high, the electrical signal saturates, which results in halation in a target image displayed on a display.
Although several techniques exist to combat halation in the target image, their accuracy is not sufficient. That is, the exact pixel value cannot be restored by the current techniques. Therefore, a technique which restores the exact pixel value in the halation region is desired.
SUMMARY
The present disclosure aims to solve at least one of the technical problems mentioned above. Accordingly, the present disclosure provides a method of generating corrected pixel data, an electrical device implementing such a method and a non-transitory computer readable medium including program instructions stored thereon for performing such a method.
In accordance with the present disclosure, a method of generating corrected pixel data may include:
obtaining captured pixel data from an image sensor, wherein the captured pixel data includes a plurality of pixel positions, each of the pixel positions includes two signal values, one of the two signal values is one of a first color pixel value, a second color pixel value and a third color pixel value, the other of the two signal values is a fourth color pixel value, a color of a color filter of the image sensor to obtain the first color pixel value is equal to a color of the color filter of the image sensor to obtain the fourth color pixel value, and an exposure time to obtain the fourth color pixel value is shorter than an exposure time to obtain the first color pixel value and/or an analog gain to obtain the fourth color pixel value in the image sensor is lower than an analog gain to obtain the first color pixel value in the image sensor;
calculating reference correlations between the second color pixel values and the fourth color pixel values at neighboring pixel positions, which are the pixel positions neighboring a target pixel position, wherein the target pixel position is one of the pixel positions at which the second color pixel value is saturated;
calculating a target correlation between the second color pixel value and the fourth color pixel value at the target pixel position based on the reference correlations; and
restoring the second color pixel value at the target pixel position based on the fourth color pixel value at the target pixel position and the target correlation to generate the corrected pixel data.
In some embodiments, the reference correlations may be calculated based on the unsaturated second color pixel values and the fourth color pixel values at the neighboring pixel positions.
In some embodiments, the target correlation may be an average of the reference correlations.
In some embodiments, the calculating the target correlation may include:
calculating, for each of a plurality of directions, an average of the two reference correlations at the neighboring pixel positions arranged in that direction; and
selecting one of the averages as the target correlation.
In some embodiments, the selecting one of the averages may include selecting the average for which the difference between the two reference correlations is smallest.
In some embodiments, each of the reference correlations at the neighboring pixel positions may be a ratio between the second color pixel value and the fourth color pixel value at the same pixel position.
In some embodiments, the target correlation at the target pixel position may be a ratio between the second color pixel value to be restored and the fourth color pixel value at the target pixel position.
In some embodiments, a color image data may include the first color pixel value, the second color pixel value and the third color pixel value, and a dark image data may include the fourth color pixel value.
In some embodiments, the method may further include resuming the first color pixel value in the color image data for the pixel position which does not include the first color pixel value, based on the fourth color pixel value in the dark image data at the same pixel position.
In some embodiments, the first color pixel value may be resumed based on a ratio between an exposure time to obtain the fourth color pixel value and an exposure time to obtain the first color pixel value in the image sensor.
In some embodiments, the first color pixel value may be resumed based on a ratio between an analog gain to obtain the fourth color pixel value and an analog gain to obtain the first color pixel value in the image sensor.
In some embodiments, the color image data may further include a fifth color pixel value and a sixth color pixel value,
a color of the color filter of the image sensor to obtain the second color pixel value is equal to a color of the color filter of the image sensor to obtain the fifth color pixel value, wherein an exposure time to obtain the fifth color pixel value is shorter than an exposure time to obtain the second color pixel value and/or an analog gain to obtain the fifth color pixel value in the image sensor is lower than an analog gain to obtain the second color pixel value in the image sensor, and
a color of the color filter of the image sensor to obtain the third color pixel value is equal to a color of the color filter of the image sensor to obtain the sixth color pixel value, wherein an exposure time to obtain the sixth color pixel value is shorter than an exposure time to obtain the third color pixel value and/or an analog gain to obtain the sixth color pixel value in the image sensor is lower than an analog gain to obtain the third color pixel value in the image sensor.
In some embodiments, the method may further include:
resuming the second color pixel value in the color image data for the pixel position which does not include the second color pixel value, based on the fifth color pixel value at the same pixel position; and
resuming the third color pixel value in the color image data for the pixel position which does not include the third color pixel value, based on the sixth color pixel value at the same pixel position.
In some embodiments, the second color pixel value may be resumed based on a ratio between an exposure time to obtain the fifth color pixel value and an exposure time to obtain the second color pixel value in the image sensor, and
the third color pixel value may be resumed based on a ratio between an exposure time to obtain the sixth color pixel value and an exposure time to obtain the third color pixel value in the image sensor.
In some embodiments, the second color pixel value may be resumed based on a ratio between an analog gain to obtain the fifth color pixel value and an analog gain to obtain the second color pixel value in the image sensor, and
the third color pixel value may be resumed based on a ratio between an analog gain to obtain the sixth color pixel value and an analog gain to obtain the third color pixel value in the image sensor.
In some embodiments, a color of the first color pixel value may be green, and a color of the fourth color pixel value may be dark green.
In some embodiments, a color of the second color pixel value may be one of red and blue, and a color of the third color pixel value may be the other of the red and the blue.
In some embodiments, a color of the fifth color pixel value may be one of dark red and dark blue, and a color of the sixth color pixel value may be the other of the dark red and the dark blue.
In some embodiments, an arrangement of the first color pixel value, the second color pixel value and the third color pixel value may be in conformity to a Bayer format.
In accordance with the present disclosure, an electrical device may include:
a camera assembly provided with an image sensor configured to generate captured pixel data including a plurality of pixel positions, wherein each of the pixel positions includes two signal values, one of the two signal values is one of a first color pixel value, a second color pixel value and a third color pixel value, the other of the two signal values is a fourth color pixel value, a color of a color filter of the image sensor to obtain the first color pixel value is equal to a color of the color filter of the image sensor to obtain the fourth color pixel value, and an exposure time to obtain the fourth color pixel value is shorter than an exposure time to obtain the first color pixel value and/or an analog gain to obtain the fourth color pixel value in the image sensor is lower than an analog gain to obtain the first color pixel value in the image sensor; and
a main processor configured to:
obtain the captured pixel data from the image sensor;
calculate reference correlations between the second color pixel values and the fourth color pixel values at neighboring pixel positions, which are the pixel positions neighboring a target pixel position, wherein the target pixel position is one of the pixel positions at which the second color pixel value is saturated;
calculate a target correlation between the second color pixel value and the fourth color pixel value at the target pixel position based on the reference correlations; and
restore the second color pixel value at the target pixel position based on the fourth color pixel value at the target pixel position and the target correlation to generate the corrected pixel data.
In accordance with the present disclosure, a non-transitory computer readable medium may include program instructions stored thereon for performing, at least, the following:
obtaining captured pixel data from an image sensor, wherein the captured pixel data includes a plurality of pixel positions, each of the pixel positions includes two signal values, one of the two signal values is one of a first color pixel value, a second color pixel value and a third color pixel value, the other of the two signal values is a fourth color pixel value, a color of a color filter of the image sensor to obtain the first color pixel value is equal to a color of the color filter of the image sensor to obtain the fourth color pixel value, and an exposure time to obtain the fourth color pixel value is shorter than an exposure time to obtain the first color pixel value and/or an analog gain to obtain the fourth color pixel value in the image sensor is lower than an analog gain to obtain the first color pixel value in the image sensor;
calculating reference correlations between the second color pixel values and the fourth color pixel values at neighboring pixel positions, which are the pixel positions neighboring a target pixel position, wherein the target pixel position is one of the pixel positions at which the second color pixel value is saturated;
calculating a target correlation between the second color pixel value and the fourth color pixel value at the target pixel position based on the reference correlations; and
restoring the second color pixel value at the target pixel position based on the fourth color pixel value at the target pixel position and the target correlation to generate the corrected pixel data.
BRIEF DESCRIPTION OF THE DRAWINGS
These and/or other aspects and advantages of embodiments of the present disclosure will become apparent and more readily appreciated from the following descriptions made with reference to the drawings, in which:
FIG. 1 is a plan view of a first side of an electrical device according to a first embodiment of the present disclosure;
FIG. 2 is a plan view of a second side of the electrical device according to the first embodiment of the present disclosure;
FIG. 3 is a block diagram of the electrical device according to the first embodiment of the present disclosure;
FIG. 4 is a diagram for showing a color image data in a captured pixel data according to the first embodiment of the present disclosure;
FIG. 5 is a diagram for showing a dark image data in the captured pixel data according to the first embodiment of the present disclosure;
FIG. 6 is a diagram for showing a part of a pixel array of an image sensor according to the first embodiment of the present disclosure;
FIG. 7 is a flowchart of a dynamic range correction process performed by the electrical device according to the first embodiment of the present disclosure (Part 1);
FIG. 8 is a flowchart of the dynamic range correction process performed by the electrical device according to the first embodiment of the present disclosure (Part 2);
FIG. 9 is a diagram for showing an example of an arrangement of regular green pixel values and an arrangement of dark green pixel values according to the first embodiment of the present disclosure, before the regular green pixel values are resumed;
FIG. 10 is a diagram for showing an example of an arrangement of the regular green pixel values and an arrangement of the dark green pixel values according to the first embodiment of the present disclosure, after the regular green pixel values are resumed;
FIG. 11 is a diagram for showing an example of an arrangement of regular red pixel values and an arrangement of dark red pixel values according to the first embodiment of the present disclosure, before the regular red pixel values are resumed;
FIG. 12 is a diagram for showing an example of an arrangement of the regular red pixel values according to the first embodiment of the present disclosure, after the regular red pixel values are resumed;
FIG. 13 is a diagram for showing an example of an arrangement of the dark green pixel values and an arrangement of the regular red pixel values where several regular pixel values are saturated (Part 1);
FIG. 14 is a diagram for showing the example of the arrangement of the dark green pixel values and the arrangement of the regular red pixel values where several regular pixel values are saturated (Part 2);
FIG. 15 is a diagram for showing an example of an explanatory bar graph to explain the relationship among the dark green pixel value, the regular red pixel value and the reference correlations;
FIG. 16 is a diagram for showing the example of the arrangement of the dark green pixel values and the arrangement of the regular red pixel values where several regular pixel values are saturated (Part 3);
FIG. 17 is a diagram for showing the example of the arrangement of the dark green pixel values and the arrangement of the regular red pixel values where several regular pixel values are saturated (Part 4);
FIG. 18 shows a sample image in a standard dynamic range in which low brightness pixel values are cut off, and a histogram between the brightness and the number of the pixels in this sample image;
FIG. 19 shows another sample image in the standard dynamic range in which high brightness pixel values are cut off, and a histogram between the brightness and the number of the pixels in this sample image;
FIG. 20 shows still another sample image in a high dynamic range in which neither low brightness pixel values nor high brightness pixel values are cut off, and a histogram between the brightness and the number of the pixels in this sample image; and
FIG. 21 is a diagram for showing an arrangement of the regular red pixel values and an arrangement of the dark green pixel values according to a second embodiment of the present disclosure.
DETAILED DESCRIPTION
Embodiments of the present disclosure will be described in detail and examples of the embodiments will be illustrated in the accompanying drawings. The same or similar elements and the elements having same or similar functions are denoted by like reference numerals throughout the descriptions. The embodiments described herein with reference to the drawings are explanatory, which aim to illustrate the present disclosure, but shall not be construed to limit the present disclosure.
(First Embodiment)
FIG. 1 illustrates a plan view of a first side of an electrical device 10 according to a first embodiment of the present disclosure and FIG. 2 illustrates a plan view of a second side of the electrical device 10 according to the first embodiment of the present disclosure. The first side may be referred to as a back side of the electrical device 10 whereas the second side may be referred to as a front side of the electrical device 10.
As shown in FIG. 1 and FIG. 2, the electrical device 10 may include a display 20 and a camera assembly 30. In the present embodiment, the camera assembly 30 includes a first main camera 32, a second main camera 34 and a sub camera 36. The first main camera 32 and the second main camera 34 can capture an image on the first side of the electrical device 10, and the sub camera 36 can capture an image on the second side of the electrical device 10. Therefore, the first main camera 32 and the second main camera 34 are so-called out-cameras whereas the sub camera 36 is a so-called in-camera. As an example, the electrical device 10 can be a mobile phone, a tablet computer, a personal digital assistant, and so on.
Each of the first main camera 32, the second main camera 34 and the sub camera 36 has an image sensor which converts light that has passed through a color filter into an electrical signal. A signal value of the electrical signal depends on the amount of the light that has passed through the color filter.
Although the electrical device 10 according to the present embodiment has three cameras, the electrical device 10 may have less than three cameras or more than three cameras. For example, the electrical device 10 may have two, four, five, and so on, cameras.
FIG. 3 is a block diagram of the electrical device 10 according to the present embodiment. As shown in FIG. 3, in addition to the display 20 and the camera assembly 30, the electrical device 10 may include a main processor 40, an image signal processor 42, a memory 44, a power supply circuit 46 and a communication circuit 48. The display 20, the camera assembly 30, the main processor 40, the image signal processor 42, the memory 44, the power supply circuit 46 and the communication circuit 48 are connected to each other via a bus 50.
The main processor 40 executes one or more program instructions stored in the memory 44. The main processor 40 implements various applications and data processing of the electrical device 10 by executing the program instructions. The main processor 40 may be one or more computer processors. The main processor 40 is not limited to one CPU core; it may have a plurality of CPU cores. The main processor 40 may be a main CPU of the electrical device 10, an image processing unit (IPU) or a DSP provided with the camera assembly 30.
The image signal processor 42 controls the camera assembly 30 and processes various kinds of image data captured by the camera assembly 30 to generate a target image data. For example, the image signal processor 42 can execute a de-mosaic process, a noise reduction process, an auto exposure process, an auto focus process, an auto white balance process, a high dynamic range process and so on, for the image data captured by the camera assembly 30.
In the present embodiment, the main processor 40 and the image signal processor 42 collaborate with each other to generate a target image data of the object captured by the camera assembly 30. That is, the main processor 40 and the image signal processor 42 are configured to capture the image of the object by means of the camera assembly 30 and execute various kinds of image processing to the captured image data.
The memory 44 stores program instructions to be executed by the main processor 40, and various kinds of data. For example, data of the captured image are stored in the memory 44.
The memory 44 may include a high-speed RAM memory, and/or a non-volatile memory such as a flash memory and a magnetic disk memory. That is, the memory 44 may include a non-transitory computer readable medium in which the program instructions are stored.
The power supply circuit 46 may have a battery such as a lithium-ion rechargeable battery and a battery management unit (BMU) for managing the battery.
The communication circuit 48 is configured to receive and transmit data to communicate with base stations of the telecommunication network system, the Internet or other devices via wireless communication. The wireless communication may adopt any communication standard or protocol, including but not limited to GSM (Global System for Mobile communication), CDMA (Code Division Multiple Access), LTE (Long Term Evolution), LTE-Advanced and 5th generation (5G). The communication circuit 48 may include an antenna and an RF (radio frequency) circuit.
FIG. 4 shows the color image data in the captured pixel data, and FIG. 5 shows the dark image data in the captured pixel data. The captured pixel data is image data captured by the camera assembly 30. In the present embodiment, for example, the captured pixel data is generated by the image sensor of the first main camera 32. However, the captured pixel data may be generated by the image sensor of the second main camera 34 or the sub camera 36.
As shown in FIG. 4, the color image data in the captured pixel data includes green pixel values G1, red pixel values R1 and blue pixel values B1, as well as dark red pixel values R2 and dark blue pixel values B2. On the other hand, as shown in FIG. 5, the dark image data in the captured pixel data includes dark green pixel values G2. When it is necessary to distinguish the regular colors from the dark colors, the green pixel values G1, the red pixel values R1 and the blue pixel values B1 may also be referred to as the regular green pixel values G1, the regular red pixel values R1 and the regular blue pixel values B1, respectively. On the other hand, when it is not necessary to distinguish the dark colors from the regular colors, the regular green pixel values G1 and the dark green pixel values G2 are simply referred to as green pixel values G, the regular red pixel values R1 and the dark red pixel values R2 are simply referred to as red pixel values R, and the regular blue pixel values B1 and the dark blue pixel values B2 are simply referred to as blue pixel values B.
As shown in FIG. 4, the color image data includes a plurality of pixel positions, and one pixel position includes one signal value from the image sensor of the first main camera 32. Each of the pixel positions of the color image data includes one of the green pixel value G1, the red pixel value R1, the dark red pixel value R2, the blue pixel value B1 and the dark blue pixel value B2.
Moreover, the color image data is in conformity to a Bayer format. That is, the arrangement of the regular green pixel values G1, the red pixel values R and the blue pixel values B is in conformity to the Bayer format, and therefore the number of green pixels is twice the number of red pixels or blue pixels in the color filter of the image sensor.
In addition, the dark red pixel values R2 are sparsely inlaid in the regular red pixel values R1, and the dark blue pixel values B2 are sparsely inlaid in the regular blue pixel values B1. This is because, even if the regular red pixel value R1 is saturated, the saturated red pixel value R1 can be restored by using the dark red pixel values R2 neighboring the saturated red pixel value R1. Similarly, even if the regular blue pixel value B1 is saturated, the saturated blue pixel value B1 can be restored by using the dark blue pixel values B2 neighboring the saturated blue pixel value B1.
As shown in FIG. 5, the dark image data also includes a plurality of the pixel positions, and the number of the pixel positions of the dark image data is equal to the number of the pixel positions of the color image data. That is, each of the pixel positions of the dark image data corresponds to one pixel position of the color image data which is located at the same position of the target image data to be displayed on the display 20. In other words, the arrangement of the pixel positions in the color image data is matched with the arrangement of the pixel positions in the dark image data.
In the dark image data, one pixel position includes one dark green pixel value G2. Therefore, the number of the dark green pixel values G2 in the dark image data is double the number of the green pixel values G1 in the color image data.
In FIG. 4 and FIG. 5, a part of the pixel positions is illustrated, and the color image data and the dark image data may include more pixel positions than those shown in FIG. 4 and FIG. 5.
FIG. 6 shows a part of a pixel array of the image sensor of the first main camera 32 of the camera assembly 30 in the present embodiment. In FIG. 6, four pixel positions are illustrated, but the number of the pixel positions of the pixel array of the image sensor is equal to the number of the pixel positions of the color image data shown in FIG. 4 or the dark image data shown in FIG. 5.
As shown in FIG. 6, one pixel position is composed of four physical pixel elements. That is, the pixel array of the present embodiment employs 2X2 binning technology.
In order to generate the dark image data, each pixel position includes two dark green physical pixel elements PG2 which are arranged at diagonal corners of each of the pixel positions. That is, a signal value of the dark green pixel value G2 is generated by combining two electric charges in the two dark green physical pixel elements PG2.
In order to generate the color image data, each pixel position also includes two regular green physical pixel elements PG1, two red physical pixel elements PR, or two blue physical pixel elements PB, which are arranged at the opposite diagonal corners of each of the pixel positions. That is, a signal value of the regular green pixel value G1, a signal value of the red pixel value R and a signal value of the blue pixel value B are generated by combining two electric charges in the two regular green physical pixel elements PG1, the two red physical pixel elements PR, and the two blue physical pixel elements PB, respectively.
As a result, two signal values are generated in one pixel position. One of the two signal values is one of the green pixel value G1, the red pixel value R1 (R) and the blue pixel value B1 (B) , and this signal value is for the color image data. The other of the two signal values is the dark green pixel value G2, and this signal value is for the dark image data.
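For illustration only, the 2X2 binning described above can be sketched as follows. This is a minimal model, not the sensor's actual readout circuitry; the array layout, the diagonal assignment and the charge values are assumptions made for this sketch (Python is used for all sketches in this description):

```python
import numpy as np

# Assumed model of one 2X2-binned pixel position: the two dark green
# physical pixel elements PG2 sit on one diagonal, and the two color
# elements (PG1, PR or PB) sit on the other diagonal.
elements = np.array([[120, 310],
                     [305, 118]], dtype=np.uint16)  # hypothetical charges

dark_green_G2 = int(elements[0, 0] + elements[1, 1])  # combine the two PG2 charges
color_value = int(elements[0, 1] + elements[1, 0])    # combine the two PG1/PR/PB charges
# One pixel position thus yields two signal values: one for the color
# image data and one (G2) for the dark image data.
```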
Since the pixel array of the present embodiment is in conformity to the Bayer format, a ratio of the green physical pixel elements PG1 to the red physical pixel elements PR or the blue physical pixel elements PB is 2.
As mentioned above, in order to obtain the dark red pixel value R2 and the dark blue pixel value B2, dark red physical pixel elements and dark blue physical pixel elements are sparsely inlaid in the red physical pixel elements PR and the blue physical pixel elements PB, respectively.
Furthermore, in the present embodiment, a color of the color filter of the image sensor to obtain the regular green pixel value G1 is equal to a color of the color filter of the image sensor to obtain the dark green pixel value G2. However, an exposure time to obtain the dark green pixel value G2 is shorter than an exposure time to obtain the regular green pixel value G1, and/or an analog gain to obtain the dark green pixel value G2 is lower than an analog gain to obtain the regular green pixel value G1 in the image sensor.
Similarly, a color of the color filter of the image sensor to obtain the regular red pixel value R1 is equal to a color of the color filter of the image sensor to obtain the dark red pixel value R2. However, an exposure time to obtain the dark red pixel value R2 is shorter than an exposure time to obtain the regular red pixel value R1, and/or an analog gain to obtain the dark red pixel value R2 is lower than an analog gain to obtain the regular red pixel value R1 in the image sensor.
Similarly, a color of the color filter of the image sensor to obtain the regular blue pixel value B1 is equal to a color of the color filter of the image sensor to obtain the dark blue pixel value B2. However, an exposure time to obtain the dark blue pixel value B2 is shorter than an exposure time to obtain the regular blue pixel value B1, and/or an analog gain to obtain the dark blue pixel value B2 is lower than an analog gain to obtain the regular blue pixel value B1 in the image sensor.
FIG. 7 and FIG. 8 show a flowchart of a dynamic range correction process performed by the electrical device 10 according to the present embodiment. In the present embodiment, the dynamic range correction process is executed, for example, by the main processor 40 in order to correct a dynamic range of the captured pixel data from the image sensor. However, the main processor 40 may cooperate with the image signal processor 42 to execute the dynamic range correction process. Therefore, the main processor 40 and the image signal processor 42 may constitute an image processor in the present embodiment.
In addition, in the present embodiment, program instructions of the dynamic range correction process are stored in the non-transitory computer readable medium of the memory 44. Therefore, for example, when the program instructions are read out from the memory 44 and executed in the main processor 40, the main processor 40 implements the dynamic range correction process illustrated in FIG. 7 and FIG. 8.
As shown in FIG. 7, for example, the main processor 40 obtains the captured pixel data from the image sensor (Step S10) . In the present embodiment, the main processor 40 obtains the captured pixel data from the camera assembly 30. That is, the camera assembly 30 captures an image of an object and generates its captured pixel data including the color image data and the dark image data.
As mentioned above, each of the pixel positions of the captured pixel data includes two signal values. One of the two signals constitutes the color image data and the other of the two signals constitutes the dark image data. The captured pixel data may be temporarily stored in the memory 44.
Next, as shown in FIG. 7, for example, the main processor 40 obtains the color image data and the dark image data from the captured pixel data (Step S12) . That is, the color image data and the dark image data are extracted from the captured pixel data. The color image data and the dark image data are temporarily stored in the memory 44.
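As one hedged illustration of the Steps S10 and S12, the captured pixel data can be modeled as two co-registered arrays; the array shape, the channel assignment and the names are assumptions made for this sketch only:

```python
import numpy as np

# Assumed representation: an H x W x 2 array in which channel 0 holds the
# color image data (G1, R1/R2 or B1/B2 at each pixel position) and
# channel 1 holds the dark image data (G2 at every pixel position).
rng = np.random.default_rng(0)
captured = rng.integers(0, 1024, size=(8, 8, 2), dtype=np.uint16)  # Step S10

color_image = captured[..., 0]  # Step S12: extract the color image data
dark_image = captured[..., 1]   # Step S12: extract the dark image data
```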
Next, as shown in FIG. 7, for example, the main processor 40 generates a full regular green pixel data based on the regular green pixel values G1 of the color image data and the dark green pixel values G2 of the dark image data (Step S14) .
FIG. 9 shows an example of the arrangement of the regular green pixel values G1 and the arrangement of the dark green pixel values G2. As shown in FIG. 9, each of the pixel positions includes one dark green pixel value G2.
On the other hand, every other pixel position includes one regular green pixel value G1. In the present embodiment, the data format of the color image data is the Bayer format. Therefore, the number of the regular green pixel values G1 is half of the number of the dark green pixel values G2.
However, the regular green pixel values G1 at the pixel positions which do not include the regular green pixel values G1 in the color image data can be resumed based on the dark green pixel values G2 in the dark image data. For example, if the exposure time ETG2 to obtain the dark green pixel values G2 is 1/80 seconds whereas the exposure time ETG1 to obtain the regular green pixel values G1 is 1/10 seconds, the exposure time ETG1 is eight times as long as the exposure time ETG2. Therefore, the regular green pixel values G1 at the pixel positions which do not include the regular green pixel values G1 can be calculated as the dark green pixel value G2 X 8.
In the present embodiment, for example, it is assumed that each of the regular green pixel values G1 and each of the dark green pixel values G2 has 10 bits. Therefore, the number of gradation values of the regular green pixel values G1 and of the dark green pixel values G2 is 1024 (2^10). Therefore, the minimum regular green pixel value G1 and the minimum dark green pixel value G2 are 0, and the maximum regular green pixel value G1 and the maximum dark green pixel value G2 are 1023.
In this case, each of the resumed regular green pixel values G1RES has 13 bits, that is, 10 bits plus 3 bits (8 = 2^3). As a result, the number of gradation values of the resumed regular green pixel values G1RES is 8192 (2^13). However, since 1023 X 8 = 8184, the maximum resumed regular green pixel value G1RES is 8184.
In the Step S14, for example, if the dark green pixel value G2 is 30, the resumed regular green pixel value G1RES is 240 (30 X 8) . For example, if the dark green pixel value G2 is 510, the resumed regular green pixel value G1RES is 4080 (510 X 8) .
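A minimal sketch of the Step S14 calculation follows, assuming the exposure times above (so the ratio is 8) and assuming, for illustration only, that the regular green pixel values G1 occupy the checkerboard positions of the Bayer pattern; the mask construction is not the disclosed layout itself:

```python
import numpy as np

rng = np.random.default_rng(0)
color_image = rng.integers(0, 1024, size=(8, 8), dtype=np.uint16)
dark_image = rng.integers(0, 1024, size=(8, 8), dtype=np.uint16)

ET_G1 = 1 / 10   # exposure time for the regular green pixel values G1
ET_G2 = 1 / 80   # exposure time for the dark green pixel values G2
ratio = round(ET_G1 / ET_G2)  # 8, so 10-bit values map into a 13-bit range

# Assumed Bayer layout: G1 exists where (row + column) is even.
rows, cols = np.indices(color_image.shape)
has_g1 = (rows + cols) % 2 == 0

# Step S14: keep G1 where it exists; elsewhere resume it as G2 x 8,
# e.g. G2 = 30 -> 240 and G2 = 510 -> 4080, as in the examples above.
full_green = np.where(has_g1, color_image, dark_image * ratio)
```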
By means of the calculation in the Step S14, the full regular green pixel data as shown in FIG. 10 is generated. That is, the resumed regular green pixel values G1RES are calculated based on the dark green pixel values G2 at the pixel positions which do not include the regular green pixel values G1. As a result, each of the pixel positions has the regular green pixel value G1 or the resumed regular green pixel value G1RES.
Hereinafter, when it is not necessary to distinguish the resumed regular green pixel values G1RES from the regular green pixel values G1, the resumed regular green pixel values G1RES and the regular green pixel values G1 are simply referred to as the regular green pixel values G1.
Next, as shown in FIG. 7, for example, the main processor 40 resumes the regular red pixel values R1 from the dark red pixel values R2 at the same pixel positions (Step S16) . That is, the regular red pixel values R1 are resumed based on the dark red pixel values R2 in the same manner as the resumed regular green pixel values G1RES.
FIG. 11 shows an example of the arrangement of the regular red pixel values R1 and the arrangement of the dark red pixel values R2 in the present embodiment. As shown in FIG. 11, the dark red pixel values R2 are sparsely inlaid in the red pixel values R. In this example, the red pixel values R are allocated once in every four pixel positions, and the dark red pixel values R2 are allocated once in every four red pixel values R.
In the present embodiment, the same exposure time as the green pixels is also applied to the red pixels. That is, for example, if the exposure time ETR2 to obtain the dark red pixel values R2 is 1/80 seconds whereas the exposure time ETR1 to obtain the regular red pixel values R1 is 1/10 seconds, the exposure time ETR1 is eight times as long as the exposure time ETR2. Therefore, the resumed regular red pixel values R1RES in the pixel positions which include the dark red pixel values R2 can be calculated as the dark red pixel values R2 X 8.
In the Step S16, for example, if the dark red pixel value R2 is 30, the resumed regular red pixel value R1RES is 240 (30 X 8). For example, if the dark red pixel value R2 is 510, the resumed regular red pixel value R1RES is 4080 (510 X 8).
By means of the calculation in the Step S16, the dark red pixel values R2 are replaced by the resumed regular red pixel values R1RES as shown in FIG. 12. That is, the resumed regular red pixel values R1RES are calculated based on the dark red pixel values R2 at the same pixel positions. As a result, every fourth pixel position has the regular red pixel value R1 or the resumed regular red pixel value R1RES.
Hereinafter, when it is not necessary to distinguish the resumed regular red pixel values R1RES from the regular red pixel values R1, the resumed regular red pixel values R1RES and the regular red pixel values R1 are simply referred to as the regular red pixel values R1.
Next, as shown in FIG. 7, for example, the main processor 40 resumes the regular blue pixel values B1 from the dark blue pixel values B2 (Step S18) . That is, the regular blue pixel values B1 are resumed based on the dark blue pixel values B2 at the same pixel positions in the same manner as the resumed regular red pixel values R1RES.
Since the process for the resumed regular blue pixel values B1RES is substantially the same as the process for the resumed regular red pixel values R1RES, the detailed explanation in the Step S18 is omitted.
Next, as shown in FIG. 8, for example, the main processor 40 judges whether any saturated red pixel value R1 exists in the color image data which has been processed in the Step S16 (Step S20) . In the present embodiment, if the regular red pixel value R1 has the maximum value in the gradation values, this regular red pixel value R1 is judged as being saturated and the pixel position of the saturated regular red pixel value R1 is regarded as a target pixel position to be restored.
As explained above, in the present embodiment, the maximum regular red pixel value R1 is 1023. Therefore, if the regular red pixel value R1 is 1023, it is judged that the regular red pixel value R1 is saturated.
Incidentally, there is a possibility that the resumed red pixel value R1RES is also saturated. However, if the resumed red pixel value R1RES is saturated, the regular red pixel values R1 around the resumed red pixel value R1RES are also saturated. Therefore, this situation is outside the scope of the present embodiment. That is, restoring the saturated resumed red pixel values R1RES is not considered in the present embodiment.
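The judgment of the Step S20 reduces to a comparison against the maximum gradation value; a minimal sketch in which the plane name and values are assumptions:

```python
import numpy as np

MAX_10BIT = 1023  # maximum gradation value of a 10-bit pixel value

rng = np.random.default_rng(1)
red_plane = rng.integers(0, 1024, size=(8, 8), dtype=np.uint16)  # regular red R1

# Step S20: every pixel position whose regular red pixel value R1 equals
# the maximum gradation value is judged as saturated and becomes a
# target pixel position TPP to be restored.
target_positions = np.argwhere(red_plane == MAX_10BIT)
```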
If the main processor 40 judges that the saturated regular red pixel value R1 exists in the color image data (Step S20: Yes), for example, the main processor 40 calculates reference correlations K between the regular red pixel values R1 and the dark green pixel values G2 at neighboring pixel positions, which are the pixel positions neighboring the target pixel position (Step S22).
FIG. 13 shows an example of the arrangement of the dark green pixel values G2 and the arrangement of the regular red pixel values R1. In the example of FIG. 13, the regular red pixel values R1 at the neighboring pixel positions NP2, NP4 and NP7 and at the target pixel position TPP are saturated. The pixel position TPP is the target pixel position at which the regular red pixel value R1 should be restored, that is, the pixel position selected in the Step S20. On the other hand, the regular red pixel values R1 at the neighboring pixel positions NP1, NP3, NP5, NP6 and NP8 are not saturated. That is, the regular red pixel values R1 at the neighboring pixel positions NP1, NP3, NP5, NP6 and NP8 are less than 1023.
Therefore, in the present embodiment, the reference correlations between the regular red pixel values R1 and the dark green pixel values G2 at the neighboring pixel positions NP1, NP3, NP5, NP6 and NP8 are calculated.
FIG. 14 shows the reference correlations K1 through K8 at the neighboring pixel positions NP1 through NP8. In this example, since the regular red pixel values R1 at the neighboring pixel positions NP1, NP3, NP5, NP6 and NP8 are not saturated, the reference correlations K1, K3, K5, K6 and K8 are calculated.
In the present embodiment, each of the reference correlations K is a ratio between the dark green pixel value G2 and the regular red pixel value R1 at the same pixel position. Here, the reference correlation K can be defined as the regular red pixel value R1 / the dark green pixel value G2.
FIG. 15 is an example of a bar graph showing the relationship among the dark green pixel value G2, the regular red pixel value R1 and the reference correlations K. As shown in FIG. 15, the reference correlation K indicates a ratio of the regular red pixel value R1 to the dark green pixel value G2.
For example, if the regular red pixel value R1 is 750 and the dark green pixel value G2 is 500, the reference correlation K is 1.5. If the regular red pixel value R1 is 880 and the dark green pixel value G2 is 550, the reference correlation K is 1.6.
In the example of FIG. 14, each of the reference correlations K1, K3, K5, K6 and K8 is calculated based on the dark green pixel value G2 and the regular red pixel value R1 in the same manner as shown in FIG. 15.
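On the worked numbers above, the Step S22 reduces to a few ratios; a minimal sketch in which the neighbor list is an assumed representation and saturated neighbors are skipped:

```python
MAX_10BIT = 1023

# (regular red R1, dark green G2) at the unsaturated neighboring positions.
neighbors = [(750, 500), (880, 550)]

# Step S22: each reference correlation K is R1 / G2 at one neighboring
# pixel position; saturated R1 values are excluded.
reference_correlations = [r1 / g2 for r1, g2 in neighbors if r1 < MAX_10BIT]
print(reference_correlations)  # [1.5, 1.6]
```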
Next, as shown in FIG. 8, for example, the main processor 40 calculates a target correlation KX between the regular red pixel value R1 and the dark green pixel value G2 at the target pixel position TPP based on the reference correlations K calculated in the Step S22 (Step S24) .
In the present embodiment, the target correlation KX is an average of the reference correlations K calculated in the Step S22. In the example of FIG. 16, the target correlation KX is the average of the reference correlations K1, K3, K5, K6 and K8. In other words, the target correlation KX is calculated from the reference correlations K, which are obtained from the unsaturated regular red pixel values R1 and the dark green pixel values G2 at the neighboring pixel positions. That is, the saturated regular red pixel values R1 are excluded from the calculation in order to obtain the reference correlations K and the target correlation KX accurately.
In the example of FIG. 15, one of the reference correlations K is 1.5 and the other of the reference correlations K is 1.6. As a result, the target correlation KX is (1.5 + 1.6) / 2 = 1.55.
Next, as shown in FIG. 8, for example, the main processor 40 restores the regular red pixel value R1TARGET at the target pixel position TPP based on the dark green pixel value G2 at the target pixel position TPP and the target correlation KX calculated in the Step S24 (Step S26) .
That is, the regular red pixel value R1TARGET at the target pixel position TPP is calculated as (the dark green pixel value G2 at the target pixel position TPP) X (the target correlation KX) .
In the example of FIG. 15, since the target correlation KX is 1.55 and the dark green pixel value G2 at the target pixel position TPP is 600, the regular red pixel value R1TARGET at the target pixel position TPP is 930 (600 X 1.55) . Here, it is assumed that an exact regular red pixel value R1EXACT is 1150. In the present embodiment, the regular red pixel value R1TARGET (930) is close to the exact regular red pixel value R1EXACT (1150) . In other words, the regular red pixel value R1TARGET restored by the present method can exceed the regular red pixel values R1 at the neighboring pixel positions of the target pixel position TPP.
If the regular red pixel value R1TARGET is restored by an interpolation method, the regular red pixel value R1TARGET is (750 + 880) / 2 = 815. It cannot exceed the largest regular red pixel value R1 at the neighboring pixel positions of the target pixel position TPP. In other words, the regular red pixel value R1TARGET restored by the interpolation method is far from the exact regular red pixel value R1EXACT.
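The Steps S24 and S26 on the same numbers, contrasted with plain interpolation of the neighboring regular red pixel values; a sketch under the assumptions of the worked example:

```python
reference_correlations = [1.5, 1.6]  # from the Step S22 above
target_KX = sum(reference_correlations) / len(reference_correlations)  # 1.55

dark_green_at_TPP = 600
restored_R1 = dark_green_at_TPP * target_KX  # Step S26: about 930, near the exact 1150

# Plain interpolation of the unsaturated neighbors can never exceed them.
interpolated_R1 = (750 + 880) / 2            # 815, far from the exact 1150
```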
As shown in FIG. 17, by executing the Step S26, the regular red pixel value R1TARGET at the target pixel position TPP has been restored. That is, the saturated regular red pixel value R1 at the target pixel position has been replaced with the restored regular red pixel value R1TARGET.
Next, as shown in FIG. 8, for example, the main processor 40 returns the dynamic range correction process to the Step S20 and judges again whether any saturated red pixel value R1 exists in the color image data. Then, if the main processor 40 judges that the saturated regular red pixel value R1 still exists in the color image data (Step S20: Yes) , for example, the main processor 40 repeats the process from the Step S22.
On the other hand, if the main processor 40 judges that the saturated regular red pixel value R1 does not exist in the color image data anymore (Step S20: No), for example, the main processor 40 judges whether any saturated blue pixel value B1 exists in the color image data (Step S30). That is, after the saturated regular red pixel values R1 have been restored, the dynamic range correction process proceeds to the next process of restoring the saturated regular blue pixel values B1.
The process to restore the saturated regular blue pixel values B1 is substantially the same as the process to restore the saturated regular red pixel values R1 described above. That is, the Step S30 through the Step S36 are substantially the same as the Step S20 through the Step S26.
Therefore, for example, the main processor 40 repeats the Step S30 through the Step S36 until the saturated regular blue pixel values B1 do not exist in the color image data anymore. After the saturated regular blue pixel values B1 have been restored (Step S30: No), the dynamic range correction process is completed.
After the dynamic range correction process shown in FIG. 7 and FIG. 8 has been completed, the color image data and the dark image data are inputted to the image signal processor 42 to generate the target image data, for example, to be displayed on the display 20 or to be stored in the memory 44.
In the present embodiment, the color image data having been subjected to the dynamic range correction process and the dark image data are directly inputted to the image signal processor 42. That is, the color image data having been subjected to the dynamic range correction process is the high dynamic range image data (HDR image data) . Therefore, the image signal processor 42 processes the high dynamic range image data to generate the target image data. The target image data may also be the high dynamic range image data, and thus the format of the target image data may be the high dynamic range image format.
On the other hand, the color image data having been subjected to the dynamic range correction process may be compressed to standard dynamic range image data (SDR image data) before being inputted to the image signal processor 42 along with the dark image data. In this case, for example, the main processor 40 adjusts the pixel values in the color image data so that the color image data includes neither a halation region nor a black defect region. That is, a bright region in the color image data is shifted to be darker whereas a dark region in the color image data is shifted to be brighter. More precisely, the high pixel values are lowered to avoid generating a halation region in the color image data whereas the low pixel values are raised to avoid generating a black defect region in the color image data. In general, this process is called tone mapping.
In this case, the image signal processor 42 processes the standard dynamic range image data (SDR image data) to generate the target image data. The target image data is also the standard dynamic range image data, and thus the format of the target image data is the standard dynamic range image format.
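The disclosure does not fix a particular tone mapping curve. As one hedged illustration, a simple global Reinhard-style operator, an assumption chosen only to show highlights being lowered and shadows being lifted, can compress the 13-bit HDR values into the 10-bit SDR range:

```python
import numpy as np

def tone_map_to_sdr(hdr, hdr_max=8184.0, sdr_max=1023.0):
    # Normalize to [0, 1], apply a concave global curve (lifts dark
    # regions, compresses bright regions), then rescale to the SDR range.
    x = hdr.astype(np.float64) / hdr_max
    y = 2.0 * x / (1.0 + x)  # Reinhard-style curve; an illustrative choice
    return np.clip(np.rint(y * sdr_max), 0, sdr_max).astype(np.uint16)

hdr_values = np.array([0, 240, 4080, 8184], dtype=np.uint16)
print(tone_map_to_sdr(hdr_values))  # [0, 58, 681, 1023]: shadows lifted, highlights kept in range
```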
FIG. 18 shows a sample image in the standard dynamic range in which low brightness pixel values are cut off, and a histogram between the brightness and the number of the pixels in this sample image. As shown in FIG. 18, the sample image contains the black defect regions and thus the user cannot observe the inside of the room.
FIG. 19 shows another sample image in the standard dynamic range in which high brightness pixel values are saturated, and a histogram between the brightness and the number of the pixels in this sample image. As shown in FIG. 19, the sample image contains the halation regions and thus the user cannot observe the outside of the room. That is, the pixel values of the outside of the room are saturated.
FIG. 20 shows still another sample image in the high dynamic range in which neither low brightness pixel values nor high brightness pixel values are cut off, and a histogram between the brightness and the number of the pixels in this sample image. As shown in FIG. 20, the sample image contains neither the black defect regions nor the halation regions, and thus the user can observe both the inside of the room and the outside of the room.
As described above, with the electrical device 10 according to the present embodiment, the saturated regular red pixel values R1 are restored based on the dark green pixel value G2 at the target pixel position TPP and the target correlation KX, which is calculated based on the reference correlations K between the regular red pixel values R1 and the dark green pixel values G2 at the neighboring pixel positions. That is, the saturated regular red pixel value R1 is restored by the correlations between the regular red pixel values R1 and the dark green pixel values G2 instead of by the interpolation of the neighboring regular red pixel values R1. Therefore, it is possible to restore the saturated regular red pixel value R1 more precisely.
Similarly, with the electrical device 10 according to the present embodiment, the saturated regular blue pixel values B1 are restored based on the dark green pixel value G2 at the target pixel position TPP and the target correlation KX, which is calculated based on the reference correlations K between the regular blue pixel values B1 and the dark green pixel values G2 at the neighboring pixel positions. That is, the saturated regular blue pixel value B1 is restored by the correlations between the regular blue pixel values B1 and the dark green pixel values G2 instead of by the interpolation of the neighboring regular blue pixel values B1. Therefore, it is possible to restore the saturated regular blue pixel value B1 more precisely.
(Second Embodiment)
In a second embodiment which is an alternative embodiment of the first embodiment, the target correlation KX can also be calculated by taking into consideration directions of the neighboring pixel positions. Hereinafter, the electrical device 10 according to the second embodiment will be explained.
FIG. 21 shows the arrangement of the regular red pixel values R1 and the arrangement of the dark green pixel values G2 according to the second embodiment. In the example of FIG. 21, the target correlation KX is calculated by selecting one of the following four options in the Step S24 of the dynamic range correction process.
First average: Average of the reference correlations K2 and K7
Second average: Average of the reference correlations K4 and K5
Third average: Average of the reference correlations K1 and K8
Fourth average: Average of the reference correlations K3 and K6
The first average is a correlation average in the vertical direction of the target pixel position TPP. That is, the first average is the average of the reference correlation K2 immediately on the upper side of the target pixel position TPP and the reference correlation K7 immediately on the lower side of the target pixel position TPP.
The second average is a correlation average in the horizontal direction of the target pixel position TPP. That is, the second average is the average of the reference correlation K4 on the left side of the target pixel position TPP and the reference correlation K5 on the right side of the target pixel position TPP.
The third average is a correlation average in the top left to bottom right diagonal direction of the target pixel position TPP. That is, the third average is the average of the reference correlation K1 on the left upper side of the target pixel position TPP and the reference correlation K8 on the right lower side of the target pixel position TPP.
The fourth average is a correlation average in the top right to bottom left diagonal direction of the target pixel position TPP. That is, the fourth average is the average of the reference correlation K3 on the right upper side of the target pixel position TPP and the reference correlation K6 on the left lower side of the target pixel position TPP.
In the present embodiment, in the Step S24, for example, the main processor 40 selects the one of the four averages for which the difference between the two reference correlations K is the smallest. That is, if the two reference correlations K are close in value, it can be considered that the variation rate between the regular red pixel values R1 and the dark green pixel values G2 is stable, or that the two regular red pixel values R1 are quite similar to each other. It can also be expected that the average whose two reference correlations differ the least most precisely represents the correlation between the regular red pixel value R1TARGET and the dark green pixel value G2 at the target pixel position TPP. This is the reason why the main processor 40 selects the average which has the smallest difference between the two reference correlations K as the target correlation KX.
Although the example of FIG. 21 shows no saturated regular red pixel values R1 at the neighboring pixel positions in the four different directions, one or more saturated regular red pixel values R1 may exist at the neighboring pixel positions. In this case, the neighboring pixel positions including the saturated regular red pixel values R1 are excluded when calculating the reference correlations K. Therefore, no correlation average including a saturated regular red pixel value R1 is selected as the target correlation KX.
In other words, the four pairs of the reference correlations K at the neighboring pixel positions cannot always be calculated. In that case, one, two or three pairs of the reference correlations K at the neighboring pixel positions in one, two or three different directions are used to calculate the correlation averages. Even in this case, the dynamic range correction process selects the correlation average for which the difference between the two reference correlations K is smallest, as shown in the sketch below.
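A minimal sketch of this selection rule; the dictionary of per-direction correlation pairs and its values are assumptions made for illustration:

```python
def select_target_correlation(direction_pairs):
    # Step S24 (second embodiment): among the available per-direction
    # pairs of reference correlations K, take the pair whose two values
    # differ the least and use its average as the target correlation KX.
    k_a, k_b = min(direction_pairs.values(), key=lambda ks: abs(ks[0] - ks[1]))
    return (k_a + k_b) / 2.0

# Hypothetical pairs; directions with a saturated neighbor are omitted.
pairs = {
    "vertical (K2, K7)":   (1.52, 1.58),
    "horizontal (K4, K5)": (1.40, 1.75),
    "diagonal (K1, K8)":   (1.55, 1.56),  # smallest difference -> selected
    "diagonal (K3, K6)":   (1.30, 1.80),
}
print(select_target_correlation(pairs))  # approximately 1.555
```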
For the saturated regular blue pixel values B1, the target correlation KX is calculated in the Step S34 in the same manner as that for the saturated regular red pixel values R1 in the Step S24.
Also, in the method explained above as the second embodiment, the target correlation KX at the target pixel position TPP can be calculated and the saturated regular red pixel value R1 and the saturated regular blue pixel value B1 can be restored.
Incidentally, in the embodiment mentioned above, although the dark image data is generated in green, another color may be used to generate the dark image data. For example, yellow may be used to generate the dark image data. In this case, the color filter of the image sensor of the camera assembly 30 is composed of red, yellow and blue (RYB) , and the color image data is composed of red, yellow and blue whereas the dark image data is composed of yellow.
Moreover, the color image data may include more than three colors. For example, the color image data may include green pixel values, red pixel values, blue pixel values and yellow pixel values. That is, the color image data may include a plurality of pixels of at least three colors.
In addition, although the color image data in the embodiments described above includes the dark red pixel values R2 and the dark blue pixel values B2, the color image data does not necessarily include them. That is, the color image data may omit the dark red pixel values R2 and the dark blue pixel values B2.
In the description of embodiments of the present disclosure, it is to be understood that terms such as "central", "longitudinal", "transverse", "length", "width", "thickness", "upper", "lower", "front", "rear", "back", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", "clockwise" and "counterclockwise" should be construed to refer to the orientation or the position as described or as shown in the drawings under discussion. These relative terms are only used to simplify the description of the present disclosure, and do not indicate or imply that the device or element referred to must have a particular orientation, or be constructed or operated in a particular orientation. Thus, these terms cannot be construed to limit the present disclosure.
In addition, terms such as "first" and "second" are used herein for purposes of description and are not intended to indicate or imply relative importance or significance, or to imply the number of indicated technical features. Thus, a feature defined with "first" or "second" may comprise one or more of such features. In the description of the present disclosure, "a plurality of" means two or more, unless specified otherwise.
In the description of embodiments of the present disclosure, unless specified or limited otherwise, the terms "mounted", "connected", "coupled" and the like are used broadly, and may be, for example, fixed connections, detachable connections, or integral connections; may also be mechanical or electrical connections; may also be direct connections or indirect connections via intervening structures; and may also be inner communications of two elements, as can be understood by those skilled in the art according to specific situations.
In the embodiments of the present disclosure, unless specified or limited otherwise, a structure in which a first feature is "on" or "below" a second feature may include an embodiment in which the first feature is in direct contact with the second feature, and may also include an embodiment in which the first feature and the second feature are not in direct contact with each other but are in contact via an additional feature formed therebetween. Furthermore, a first feature "on", "above" or "on top of" a second feature may include an embodiment in which the first feature is right or obliquely "on", "above" or "on top of" the second feature, or just means that the first feature is at a height higher than that of the second feature; while a first feature "below", "under" or "on bottom of" a second feature may include an embodiment in which the first feature is right or obliquely "below", "under" or "on bottom of" the second feature, or just means that the first feature is at a height lower than that of the second feature.
Various embodiments and examples are provided in the above description to implement different structures of the present disclosure. In order to simplify the present disclosure, certain elements and settings have been described above. However, these elements and settings are only examples and are not intended to limit the present disclosure. In addition, reference numbers and/or reference letters may be repeated in different examples of the present disclosure. This repetition is for the purposes of simplification and clarity and does not indicate relations between the different embodiments and/or settings. Furthermore, examples of different processes and materials are provided in the present disclosure. However, it would be appreciated by those skilled in the art that other processes and/or materials may also be applied.
Reference throughout this specification to "an embodiment", "some embodiments", "an exemplary embodiment", "an example", "a specific example" or "some examples" means that a particular feature, structure, material or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present disclosure. Thus, appearances of the above phrases throughout this specification do not necessarily refer to the same embodiment or example of the present disclosure. Furthermore, the particular features, structures, materials or characteristics may be combined in any suitable manner in one or more embodiments or examples.
Any process or method described in a flow chart or described herein in other ways may be understood to include one or more modules, segments or portions of code of executable instructions for achieving specific logical functions or steps in the process. The scope of a preferred embodiment of the present disclosure includes other implementations in which, as should be understood by those skilled in the art, functions may be performed in a sequence other than the sequences shown or discussed, including substantially concurrently or in the reverse sequence.
The logic and/or steps described in other manners herein or shown in the flow chart, for example, a particular sequence table of executable instructions for realizing the logical function, may be embodied in any computer readable medium to be used by an instruction execution system, device or equipment (such as a computer-based system, a system comprising processors, or another system that can fetch the instructions from the instruction execution system, device or equipment and execute them), or to be used in combination with the instruction execution system, device or equipment. For the purposes of this specification, "the computer readable medium" may be any device capable of including, storing, communicating, propagating or transferring programs to be used by or in combination with the instruction execution system, device or equipment. More specific examples of the computer readable medium comprise, but are not limited to: an electronic connection (an electronic device) with one or more wires, a portable computer enclosure (a magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or a flash memory), an optical fiber device and a portable compact disc read-only memory (CDROM). In addition, the computer readable medium may even be paper or another appropriate medium on which the programs can be printed, because the paper or other medium may be optically scanned, then edited, interpreted or otherwise processed in an appropriate manner when necessary to obtain the programs electronically, and the programs may then be stored in computer memories.
It should be understood that each part of the present disclosure may be realized by hardware, software, firmware or a combination thereof. In the above embodiments, a plurality of steps or methods may be realized by software or firmware stored in a memory and executed by an appropriate instruction execution system. For example, if realized by hardware, as in another embodiment, the steps or methods may be realized by one or a combination of the following techniques known in the art: a discrete logic circuit having logic gates for realizing a logic function upon a data signal, an application-specific integrated circuit having appropriately combined logic gates, a programmable gate array (PGA), a field programmable gate array (FPGA), etc.
Those skilled in the art shall understand that all or part of the steps in the above exemplifying methods of the present disclosure may be achieved by instructing the related hardware with programs. The programs may be stored in a computer readable storage medium, and when run on a computer, the programs perform one or a combination of the steps of the method embodiments of the present disclosure.
In addition, each functional cell of the embodiments of the present disclosure may be integrated in a processing module, or each cell may exist physically separately, or two or more cells may be integrated in a processing module. The integrated module may be realized in the form of hardware or in the form of a software functional module. When the integrated module is realized in the form of a software functional module and is sold or used as a standalone product, it may be stored in a computer readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic disk, a CD, etc.
Although embodiments of the present disclosure have been shown and described, it would be appreciated by those skilled in the art that the embodiments are explanatory and cannot be construed to limit the present disclosure, and changes, modifications, alternatives and variations can be made in the embodiments without departing from the scope of the present disclosure.

Claims (21)

  1. A method of generating a corrected pixel data, comprising:
    obtaining a captured pixel data from an image sensor, wherein the captured pixel data includes a plurality of pixel positions, each of the pixel positions includes two signal values, one of the two signal values is one of a first color pixel value, a second color pixel value and a third color pixel value, the other of the two signal values is a fourth color pixel value, a color of a color filter of the image sensor to obtain the first color pixel value is equal to a color of the color filter of the image sensor to obtain the fourth color pixel value, and an exposure time to obtain the fourth color pixel value is shorter than an exposure time to obtain the first color pixel value and/or an analog gain to obtain the fourth color pixel value in the image sensor is lower than an analog gain to obtain the first color pixel value in the image sensor;
    calculating reference correlations between the second color pixel values and the fourth color pixel values at neighboring pixel positions, which are the pixel positions neighboring a target pixel position, wherein the target pixel position is one of the pixel positions at which the second color pixel value is saturated;
    calculating a target correlation between the second color pixel value and the fourth color pixel value at the target pixel position based on the reference correlations; and
    restoring the second color pixel value at the target pixel position based on the fourth color pixel value at the target pixel position and the target correlation to generate the corrected pixel data.
  2. The method according to claim 1, wherein the reference correlations are calculated based on the unsaturated second color pixel values and the fourth color pixel values at the neighboring pixel positions.
  3. The method according to claim 2, wherein the target correlation is an average of the reference correlations.
  4. The method according to claim 2, wherein the calculating the target correlation comprises:
    calculating, for each of different directions, an average of the two reference correlations at the neighboring pixel positions arranged in that direction; and
    selecting one of the averages as the target correlation.
  5. The method according to claim 4, wherein the selecting one of the averages comprises selecting the average for which the difference between the two reference correlations is smallest.
  6. The method according to claim 1, wherein each of the reference correlations at the neighboring pixel positions is a ratio between the second color pixel value and the fourth color pixel value at the same pixel position.
  7. The method according to claim 1, wherein the target correlation at the target pixel position is a ratio between the second color pixel value to be restored and the fourth color pixel value at the target pixel position.
  8. The method according to any one of claims 1 to 7, wherein a color image data includes the first color pixel value, the second color pixel value and the third color pixel value, and a dark image data includes the fourth color pixel value.
  9. The method according to claim 8, further comprising resuming the first color pixel value in the color image data for the pixel position which does not include the first color pixel value, based on the fourth color pixel value in the dark image data at the same pixel position.
  10. The method according to claim 9, wherein the first color pixel value is resumed based on a ratio between an exposure time to obtain the fourth color pixel value and an exposure time to obtain the first color pixel value in the image sensor.
  11. The method according to claim 9, wherein the first color pixel value is resumed based on a ratio between an analog gain to obtain the fourth color pixel value and an analog gain to obtain the first color pixel value in the image sensor.
  12. The method according to claim 8, wherein the color image data further includes a fifth color pixel value and a sixth color pixel value,
    a color of the color filter of the image sensor to obtain the second color pixel value is equal to a color of the color filter of the image sensor to obtain the fifth color pixel value, wherein an exposure time to obtain the fifth color pixel value is shorter than an exposure time to obtain the second color pixel value and/or an analog gain to obtain the fifth color pixel value in the image sensor is lower than an analog gain to obtain the second color pixel value in the image sensor, and
    a color of the color filter of the image sensor to obtain the third color pixel value is equal to a color of the color filter of the image sensor to obtain the sixth color pixel value, wherein an exposure time to obtain the sixth color pixel value is shorter than an exposure time to obtain the third color pixel value and/or an analog gain to obtain the sixth color pixel value in the image sensor is lower than an analog gain to obtain the third color pixel value in the image sensor.
  13. The method according to claim 12, further comprising:
    resuming the second color pixel value in the color image data for the pixel position which does not include the second color pixel value, based on the fifth color pixel value at the same pixel position; and
    resuming the third color pixel value in the color image data for the pixel position which does not include the third color pixel value, based on the sixth color pixel value at the same pixel position.
  14. The method according to claim 13, wherein the second color pixel value is resumed based on a ratio between an exposure time to obtain the fifth color pixel value and an exposure time to obtain the second color pixel value in the image sensor, and
    the third color pixel value is resumed based on a ratio between an exposure time to obtain the sixth color pixel value and an exposure time to obtain the third color pixel value in the image sensor.
  15. The method according to claim 13, wherein the second color pixel value is resumed based on a ratio between an analog gain to obtain the fifth color pixel value and an analog gain to obtain the second color pixel value in the image sensor, and
    the third color pixel value is resumed based on a ratio between an analog gain to obtain the sixth color pixel value and an analog gain to obtain the third color pixel value in the image sensor.
  16. The method according to claim 1, wherein a color of the first color pixel value is green, and a color of the fourth color pixel value is dark green.
  17. The method according to claim 16, wherein a color of the second color pixel value is one of red and blue, and a color of the third color pixel value is the other of the red and the blue.
  18. The method according to claim 12, wherein a color of the fifth color pixel value is one of dark red and dark blue, and a color of the sixth color pixel value is the other of the dark red and the dark blue.
  19. The method according to claim 1, wherein an arrangement of the first color pixel value, the second color pixel value and the third color pixel value is in conformity to a Bayer format.
  20. An electrical device, comprising:
    a camera assembly configured to be provided with an image sensor to generate a captured pixel data including a plurality of pixel positions, wherein each of the pixel positions includes two signal values, one of the two signal values is one of a first color pixel value, a second color pixel value and a third color pixel value, the other of the two signal values is a fourth color pixel value, a color of a color filter of the image sensor to obtain the first color pixel value is equal to a color of the color filter of the image sensor to obtain the fourth color pixel value, and an exposure time to obtain the fourth color pixel value is shorter than an exposure time to obtain the first color pixel value and/or an analog gain to obtain the fourth color pixel value in the image sensor is lower than an analog gain to obtain the first color pixel value in the image sensor; and
    a main processor configured to:
    obtain the captured pixel data from the image sensor;
    calculate reference correlations between the second color pixel values and the fourth color pixel values at neighboring pixel positions, which are the pixel positions neighboring a target pixel position, wherein the target pixel position is one of the pixel positions at which the second color pixel value is saturated;
    calculate a target correlation between the second color pixel value and the fourth color pixel value at the target pixel position based on the reference correlations; and
    restore the second color pixel value at the target pixel position based on the fourth color pixel value at the target pixel position and the target correlation to generate the corrected pixel data.
  21. A non-transitory computer readable medium comprising program instructions stored thereon for performing at least the following:
    obtaining a captured pixel data from an image sensor, wherein the captured pixel data includes a plurality of pixel positions, wherein each of the pixel positions includes two signal values, one of the two signal values is one of a first color pixel value, a second color pixel value and a third color pixel value, the other of the two signal values is a fourth color pixel value, a color of a color filter of the image sensor to obtain the first color pixel value is equal to a color of the color filter of the image sensor to obtain the fourth color pixel value, and an exposure time to obtain the fourth color pixel value is shorter than an exposure time to obtain the first color pixel value and/or an analog gain to obtain the fourth color pixel value in the image sensor is lower than an analog gain to obtain the first color pixel value in the image sensor;
    calculating reference correlations between the second color pixel values and the fourth color pixel values at neighboring pixel positions, which are the pixel positions neighboring a target pixel position, wherein the target pixel position is one of the pixel positions at which the second color pixel value is saturated;
    calculating a target correlation between the second color pixel value and the fourth color pixel value at the target pixel position based on the reference correlations; and
    restoring the second color pixel value at the target pixel position based on the fourth color pixel value at the target pixel position and the target correlation to generate the corrected pixel data.