WO2023020532A1 - Image processing method and apparatus, electronic device, and readable storage medium - Google Patents

Image processing method and apparatus, electronic device, and readable storage medium

Info

Publication number
WO2023020532A1
WO2023020532A1 · PCT/CN2022/112986 · CN2022112986W
Authority
WO
WIPO (PCT)
Prior art keywords
image
exposure
pixel
generate
pixels
Prior art date
Application number
PCT/CN2022/112986
Other languages
English (en)
French (fr)
Inventor
黄春成
Original Assignee
维沃移动通信(杭州)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 维沃移动通信(杭州)有限公司
Publication of WO2023020532A1

Links

Images

Classifications

    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04N — PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 — Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/70 — Circuitry for compensating brightness variation in the scene
    • H04N 23/741 — Circuitry for compensating brightness variation in the scene by increasing the dynamic range of the image compared to the dynamic range of the electronic image sensors
    • H04N 23/76 — Circuitry for compensating brightness variation in the scene by influencing the image signals
    • H04N 23/80 — Camera processing pipelines; Components thereof
    • H04N 23/84 — Camera processing pipelines; Components thereof, for processing colour signals

Definitions

  • The present application belongs to the technical field of image processing, and in particular relates to an image processing method and apparatus, an electronic device, and a readable storage medium.
  • HDR (High-Dynamic Range)
  • The common ways to improve the dynamic range of an image mainly include the following three:
  • Mode 1: processing a single-frame image, for example with local tone mapping, global tone mapping, and similar methods.
  • However, in this single-frame approach, local tone mapping tends to cause problems at the edges of blocks in the image, while global tone mapping sacrifices the gray levels of part of the input brightness in order to enhance specific brightness ranges, which affects image detail.
  • Mode 2: using the same camera module to capture multiple frames with different exposures, and then performing HDR synthesis on the multiple frames.
  • However, in this multi-frame synthesis approach, the frame rates of the multiple frames differ and their exposure timings differ; it mainly works by combining two frames taken with different exposures at different times into one frame. The brightness dynamic range of images synthesized this way is limited by the settable integration time, which affects the exposure time: if the frame rate is fixed at a high value, the settable range of the exposure time shrinks and the gain in brightness dynamic range is relatively small, resulting in a large frame-rate difference between the multiple frames.
  • Mode 3: using different modules (each including a lens component and a sensor component) to capture different frames, and then performing HDR synthesis on the multiple frames. However, because the sensor components differ, picture quality differs between the frames, affecting image quality, and multiple modules increase power consumption and cost.
  • Therefore, the related-art methods for improving the brightness dynamic range of an image generally suffer from degraded image detail and image quality, a small improvement in brightness dynamic range, large frame-rate differences between multiple frames, and increased power consumption and cost.
  • The purpose of the embodiments of the present application is to provide an image processing method and apparatus, an electronic device, and a readable storage medium that can solve these problems in the related art.
  • In a first aspect, an embodiment of the present application provides an image processing method, the method comprising: performing a first exposure on a first pixel in an image sensor using a first exposure parameter to generate a first image; performing a second exposure on a second pixel in the image sensor using a second exposure parameter to generate a second image, where the value of the first exposure parameter is different from the value of the second exposure parameter; and performing image fusion on the first image and the second image to generate a target picture; where the first pixel is different from the second pixel, and the first exposure and the second exposure correspond to the same frame rate.
  • In a second aspect, an embodiment of the present application provides an image processing apparatus, which includes:
  • an exposure module, configured to perform a first exposure on a first pixel in an image sensor using a first exposure parameter to generate a first image, and to perform a second exposure on a second pixel in the image sensor using a second exposure parameter to generate a second image, where the value of the first exposure parameter is different from the value of the second exposure parameter;
  • a fusion module, configured to perform image fusion on the first image and the second image to generate a target picture;
  • the first pixel is different from the second pixel
  • the first exposure and the second exposure correspond to the same frame rate.
  • In a third aspect, an embodiment of the present application provides an electronic device, which includes a processor, a memory, and a program or instruction stored in the memory and executable on the processor, where the program or instruction, when executed by the processor, implements the steps of the method according to the first aspect.
  • In a fourth aspect, an embodiment of the present application provides a readable storage medium storing a program or instruction which, when executed by a processor, implements the steps of the method according to the first aspect.
  • In a fifth aspect, an embodiment of the present application provides a chip, which includes a processor and a communication interface coupled to the processor, the processor being configured to run a program or instruction to implement the method according to the first aspect.
  • In the embodiments of the present application, a first exposure is performed on a first pixel in the image sensor using a first exposure parameter to generate a first image, and a second exposure is performed on a second pixel in the image sensor using a second exposure parameter to generate a second image. The frame rate used to expose the first pixel and the second pixel is the same, so there is no frame-rate difference between the first image and the second image; they differ only in the values of the exposure parameters used, and images with different exposure levels can be generated through multiple exposures within a single frame. Whether or not the first image and the second image are exposed and output at the same time, this solves the problem in the traditional technology that frame-rate differences in multi-frame image fusion keep the improvement in brightness dynamic range relatively small, and it reduces the time difference between the first image and the second image used for fusion. In addition, since the first image and the second image are exposed at the same frame rate, if they are exposed and output at the same time, their different exposure parameters give them a large brightness difference, so the brightness dynamic range of the target image can be improved. Moreover, the method operates with a single image sensor, so component differences between camera modules cannot cause picture-quality differences between images taken with different exposure parameters; image quality is ensured, and power consumption and cost are not increased. Finally, the method fuses the first image and the second image to generate the target image, so the target image retains the original image information and more image detail, and the image quality is better than with the single-frame processing methods of the traditional technology.
  • Fig. 1 is a flowchart of an image processing method according to an embodiment of the present application;
  • Fig. 2 is a first schematic diagram of a pixel layout of an image sensor according to an embodiment of the present application;
  • Fig. 3 is a second schematic diagram of a pixel layout of an image sensor according to an embodiment of the present application;
  • Fig. 4 is a first schematic diagram of an image according to an embodiment of the present application;
  • Fig. 5 is a second schematic diagram of an image according to an embodiment of the present application;
  • Fig. 6 is a third schematic diagram of an image according to an embodiment of the present application;
  • Fig. 7 is a third schematic diagram of a pixel layout of an image sensor according to an embodiment of the present application;
  • Fig. 8 is a fourth schematic diagram of a pixel layout of an image sensor according to an embodiment of the present application;
  • Fig. 9 is a fifth schematic diagram of a pixel layout of an image sensor according to an embodiment of the present application;
  • Fig. 10 is a sixth schematic diagram of a pixel layout of an image sensor according to an embodiment of the present application;
  • Fig. 11 is a seventh schematic diagram of a pixel layout of an image sensor according to an embodiment of the present application;
  • Fig. 12 is a block diagram of an image processing apparatus according to an embodiment of the present application;
  • Fig. 13 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application;
  • Fig. 14 is a schematic diagram of a hardware structure of an electronic device according to another embodiment of the present application.
  • FIG. 1 shows a flowchart of an image processing method according to an embodiment of the present application, and the method may specifically include the following steps:
  • Step 101: perform a first exposure on a first pixel in an image sensor using a first exposure parameter to generate a first image;
  • Step 102: perform a second exposure on a second pixel in the image sensor using a second exposure parameter to generate a second image, where the value of the first exposure parameter is different from the value of the second exposure parameter;
  • the first pixel and the second pixel are different.
  • The types of pixels can include imaging pixels, which refer to red, green, and blue (RGB) pixels, and phase-detection (PD) pixels used for phase focusing, where the PD pixels can be further divided into two types: left (L) pixels and right (R) pixels.
  • The first pixel and the second pixel may be pixels of the different pixel types described above.
  • PD focusing is achieved through phase detection.
  • Referring to Fig. 2, the layout of the imaging pixels is not limited to that of Fig. 2; the imaging pixels include GR pixels 61, R pixels 62, B pixels 63, and GB pixels 64.
  • For example, the image sensor can be applied to scenes of shooting a human face; since a human face is more sensitive to the G pixels, PD pixels can be added at the G pixels (that is, at the G pixel of the red-green channel, i.e., the GR pixel 61, and at the G pixel of the blue-green channel, i.e., the GB pixel 64), where the PD pixels may include left (L) pixels and right (R) pixels.
  • the image sensor here has added PD pixels (shown as L and R) on the basis of the imaging pixels in FIG. 2 .
  • In FIG. 3, the same reference numerals represent the same objects as in FIG. 2 and are not described again one by one; refer to the explanation of FIG. 2.
  • the arrangement form of the PD pixels is not limited to that shown in FIG. 3 , so that the PD pixels can be used to assist focusing.
  • the phase difference (Phase diff) of the focus area can be calculated to achieve phase focus.
  • Regarding L pixels and R pixels: for a pixel in the image sensor, if half of the pixel is covered with metal so that the covered pixel can only receive light from the left, the pixel whose left half is covered is called an L pixel; similarly, a pixel whose right half is covered can only receive light from the right and is called an R pixel. L pixels and R pixels appear in pairs at adjacent positions, as shown in FIG. 3.
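  • As an aside on how the paired L/R samples support focusing (a minimal sketch; the patent does not specify the computation, and the 1-D shift search below is just one common way to estimate a phase difference):

```python
import numpy as np

def phase_diff(l_line: np.ndarray, r_line: np.ndarray, max_shift: int = 8) -> int:
    """Estimate the shift (in pixels) between L-pixel and R-pixel profiles.

    l_line, r_line: 1-D brightness profiles sampled from paired L/R pixels
    in the focus area. Returns the shift minimizing the sum of absolute
    differences; its sign and magnitude drive the lens toward focus.
    """
    best_shift, best_sad = 0, float("inf")
    for s in range(-max_shift, max_shift + 1):
        a = l_line[max(0, s):len(l_line) + min(0, s)]
        b = r_line[max(0, -s):len(r_line) + min(0, -s)]
        sad = float(np.abs(a - b).sum())
        if sad < best_sad:
            best_shift, best_sad = s, sad
    return best_shift
```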
  • The exposure parameters include, but are not limited to, integration time (INT), analog gain, and digital gain.
  • The integration time represents the exposure time in units of lines.
  • For example, an INT of 159 means that the image sensor (Sensor) is exposed for 159 lines.
  • In essence, integration time and exposure time mean the same thing: both represent how long the Sensor is exposed. However, integration time is a relative quantity expressed in lines, and the absolute time occupied by each line depends on the clock frequency and on how many pclks each line contains (i.e., the line length), whereas exposure time refers to the absolute exposure duration of the Sensor.
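  • To make the line-based accounting concrete (a minimal sketch; the clock numbers below are hypothetical, not taken from the patent):

```python
# Convert a line-based integration time (INT) to an absolute exposure time.
PCLK_HZ = 96_000_000      # hypothetical pixel clock frequency
LINE_LENGTH_PCLK = 3_000  # hypothetical pclks per line (line length)

def exposure_time_seconds(integration_lines: int) -> float:
    """Absolute exposure time = lines x (pclks per line) / pixel clock."""
    line_time = LINE_LENGTH_PCLK / PCLK_HZ   # seconds per line
    return integration_lines * line_time

print(exposure_time_seconds(159))  # INT = 159 lines -> ~4.97 ms here
```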
  • For example, the first pixel, which includes the PD pixels in the Sensor, is exposed using the first exposure parameter;
  • and the second pixel, which includes the imaging pixels in the Sensor, is exposed using the second exposure parameter.
  • The values of the two exposure parameters are different.
  • For example, when the exposure parameter is the exposure duration, the exposure duration of the first pixel (including the PD pixels) in the Sensor differs from that of the second pixel (including the imaging pixels), so the two frames generated from them differ in brightness.
  • In this embodiment, step 101 and step 102 can be performed at the same time, and the first exposure and the second exposure correspond to the same frame rate, so the first image and the second image are triggered to be exposed at the same time; that is, the exposure timing is the same. Although the exposure durations of the first image and the second image differ because their exposure parameters differ, the step that triggers the exposure is executed at the same moment, so there is no time difference between the two frames of images.
  • Since the first exposure and the second exposure correspond to the same frame rate, within the same time period the number of frames of the first image generated by the first exposure is the same as the number of frames of the second image generated by the second exposure.
  • In this way, first images and second images of the same order can be put into correspondence, and each corresponding pair of a first image and a second image is fused to generate one frame of the target image.
  • For example, the generated first frame of the first image and the generated first frame of the second image may be fused to generate the first frame of the target image.
  • If the first image and the second image of the same order are not output at the same time (that is, there is an output sequence; for example, the first frame of the first image is output before the first frame of the second image), the first images and the second images can still be grouped by their respective generation order, with images of the same order forming a group for fusion, as in the pairing sketch below.
  • Even when the two frames of images are not output at the same time, because their corresponding frame rates are the same, the time difference between the first image and the second image is still reduced to a certain extent.
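  • A minimal sketch of this grouping (hypothetical stream objects; the patent only requires that frames of the same generation order be grouped for fusion):

```python
def pair_frames_by_order(first_stream, second_stream):
    """Pair first/second images of the same generation order for fusion.

    Equal frame rates guarantee equal frame counts over any period, so a
    simple zip by order suffices even if outputs are staggered in time.
    """
    return list(zip(first_stream, second_stream))

# pairs = pair_frames_by_order(first_images, second_images)
# pairs[0] is then fused into the first frame of the target picture.
```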
  • Regarding the blanking period: when controlling the first image and the second image to be output simultaneously, this can be realized by setting a blanking period for the exposed first image and/or second image. The blanking period can include a horizontal blanking period, i.e., horizontal blank time (HBLT), and a vertical blanking period, i.e., vertical blank time (VBLT).
  • For example, a horizontal blanking period and/or a vertical blanking period may be set for the image with the shorter exposure time, to wait until the image with the longer exposure time has finished exposing the corresponding pixels in all rows, so that the first image and the second image generated by the exposures can be output at the same time.
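  • As a rough illustration of sizing such a wait (a sketch under assumed line-based timing; the patent gives no concrete numbers), the shorter exposure needs enough extra blanking lines to cover the difference in integration time:

```python
def extra_blank_lines(int_long: int, int_short: int) -> int:
    """Blanking lines to add to the shorter exposure so that both images
    finish exposing all rows and can be output at the same time.

    int_long, int_short: integration times of the two exposures, in lines.
    """
    return max(int_long - int_short, 0)

print(extra_blank_lines(159, 40))  # -> 119 lines of added blanking
```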
  • Different pixels in the Sensor are exposed at the same frame rate with different exposure parameters, and the images generated under the two exposure parameters are treated as two images, namely the first image and the second image. By performing multiple exposures and fusion within a single frame, a picture with a high brightness dynamic range can be obtained. Since the frame rate of the two exposures is the same, the problem of a frame-rate difference between the first image and the second image is solved, and the improvement in brightness dynamic range can be further enlarged.
  • Conventionally, every pixel in the Sensor is controlled through a single, unified exposure control path.
  • In this embodiment, one exposure control path (which can be implemented as a semiconductor hardware path) can be set separately for the first pixel, and another exposure control path can be set separately for the second pixel.
  • The two paths are independent of each other, enabling independent control of the exposure parameters of the first pixel and the second pixel.
  • Optionally, the first exposure parameter is controlled by a first control path, and the second exposure parameter is controlled by a second control path, where the first control path is different from the second control path, and the image sensor is connected to both the first control path and the second control path.
  • When separately configuring the exposure parameters of the first pixel and the second pixel, this may be realized by separating the semiconductor hardware paths, that is, by configuring the image sensor to be connected to different semiconductor hardware paths. The image sensor can then communicate with the back-end controller through the different semiconductor hardware paths.
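  • A minimal driver-level sketch of two independent control paths (all names here are hypothetical; the patent only requires that the paths be separate semiconductor hardware paths that can be programmed independently):

```python
class ExposureControlPath:
    """One hardware path carrying exposure settings to the image sensor.

    bus_id identifies the (hypothetical) semiconductor hardware channel
    over which the back-end controller talks to the sensor.
    """

    def __init__(self, bus_id: int):
        self.bus_id = bus_id
        self.integration_lines = 0
        self.analog_gain = 1.0

    def program(self, integration_lines: int, analog_gain: float) -> None:
        # A real driver would write sensor registers over bus_id here.
        self.integration_lines = integration_lines
        self.analog_gain = analog_gain


# One path for the first pixels (e.g. PD), one for the second pixels
# (e.g. imaging); programming one path never touches the other.
pd_path = ExposureControlPath(bus_id=0)
rgb_path = ExposureControlPath(bus_id=1)
pd_path.program(integration_lines=159, analog_gain=1.0)   # longer exposure
rgb_path.program(integration_lines=40, analog_gain=2.0)   # shorter exposure
```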
  • Step 103: perform image fusion on the first image and the second image to generate a target picture.
  • various image fusion algorithms may be used to fuse the first image and the second image to generate the target image.
  • the target image generated after the fusion of the two images may be a high dynamic image, which improves the dynamic range of brightness.
  • Fig. 4 and Fig. 5 respectively show the first image and the second image
  • Fig. 6 shows the target image.
  • For example, the first pixel in the Sensor is exposed using a first exposure parameter with a relatively long exposure time to generate the image shown in Figure 4, named image 1 here; image 1 contains many overexposed areas.
  • The second pixel in the Sensor is exposed using a second exposure parameter with a relatively short exposure time to generate the image shown in Figure 5, named image 2 here; image 2 contains many underexposed areas.
  • Image fusion is performed on image 1 and image 2 to generate the image shown in Figure 6, named image 3 here.
  • Since the values of the exposure parameters of image 1 and image 2 differ, their exposure times differ and the brightness difference between them is relatively large; image 3, obtained by fusing image 1 and image 2, therefore has a large brightness dynamic range, achieving the effect of improving the brightness dynamic range of the image.
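  • The patent does not prescribe a particular fusion algorithm, so the following is only a generic exposure-weighted blend, a common choice for two-exposure HDR, shown as a sketch:

```python
import numpy as np

def fuse_two_exposures(img_long: np.ndarray, img_short: np.ndarray,
                       exposure_ratio: float) -> np.ndarray:
    """Blend a long-exposure and a short-exposure frame of equal shape.

    img_long, img_short: float arrays scaled to [0, 1].
    exposure_ratio: t_long / t_short, used to bring the short frame to the
    long frame's radiometric scale before blending.
    """
    # Tent weighting: well-exposed pixels weigh most, saturated ones least.
    w_long = 1.0 - np.clip(np.abs(img_long - 0.5) * 2.0, 0.0, 1.0)
    w_short = 1.0 - w_long
    radiance = w_long * img_long + w_short * (img_short * exposure_ratio)
    return radiance / (radiance.max() + 1e-6)  # simple global normalization
```

  • Under this kind of weighting, overexposed regions of image 1 take their values mainly from the rescaled image 2, while underexposed regions of image 2 are covered by image 1, which is what widens the brightness dynamic range of image 3.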
  • In the embodiments of the present application, a first exposure is performed on a first pixel in the image sensor using a first exposure parameter to generate a first image, and a second exposure is performed on a second pixel in the image sensor using a second exposure parameter to generate a second image. The frame rate used to expose the first pixel and the second pixel is the same, so there is no frame-rate difference between the first image and the second image; they differ only in the values of the exposure parameters used, and images with different exposure levels can be generated through multiple exposures within a single frame. Whether or not the first image and the second image are exposed and output at the same time, this solves the problem in the traditional technology that frame-rate differences in multi-frame image fusion keep the improvement in brightness dynamic range relatively small, and it reduces the time difference between the first image and the second image used for fusion. In addition, since the first image and the second image are exposed at the same frame rate, if they are exposed and output at the same time, their different exposure parameters give them a large brightness difference, so the brightness dynamic range of the target image can be improved. Moreover, the method operates with a single image sensor, so component differences between camera modules cannot cause picture-quality differences between images taken with different exposure parameters; image quality is ensured, and power consumption and cost are not increased. Finally, the method fuses the first image and the second image to generate the target image, so the target image retains the original image information and more image detail, and the image quality is better than with the single-frame processing methods of the traditional technology.
  • This method can be applied to electronic devices such as cameras or mobile phones that use PD focusing.
  • In one implementation, each pixel in the image sensor is a PD pixel.
  • FIG. 7 shows a schematic diagram of pixel layout of an image sensor.
  • In FIG. 7, each pixel is a PD pixel, specifically an L pixel or an R pixel among the PD pixels, and there is no imaging pixel.
  • FIG. 7 shows the pixel layout of an image sensor with a 2PD arrangement; in other examples, the arrangement may be 4PD, 8PD, or another arrangement in which every pixel of the image sensor is a PD pixel.
  • the first pixel includes a left pixel among the PD pixels, and the second pixel includes a right pixel among the PD pixels;
  • For example, the first pixel includes every L pixel shown in FIG. 7, and the second pixel includes every R pixel shown in FIG. 7.
  • first exposure may be performed on the left pixel in the image sensor using first exposure parameters to generate a first image
  • a second exposure may be performed on the right pixel in the image sensor using a second exposure parameter to generate a second image.
  • Referring to FIG. 8, the pixel layout in FIG. 8 is consistent with the pixel layout in FIG. 7, and grayscale shades are drawn within each pixel grid in FIG. 8 to represent the exposure parameters.
  • FIG. 8 shows that the first exposure is performed on each L pixel among the PD pixels using a first exposure parameter, and that the second exposure is performed on each R pixel among the PD pixels using exposure parameter 32.
  • FIG. 8 marks the reference signs of the exposure parameters only for the PD pixels in the first row of the Sensor; for the exposure parameters of the pixels in the other rows, refer to the reference signs of the first row.
  • The arrows in FIG. 8 represent the pixels of each row in the Sensor that are not shown. Since FIG. 8 is an extension of FIG. 7, the pixel layout of FIG. 8 can be understood with reference to FIG. 7.
  • In this way, the first exposure parameter can be used to expose the pixel positions of the L pixels among the PD pixels in the image sensor, and the second exposure parameter can be used to expose the pixel positions of the R pixels among the PD pixels in the image sensor.
  • Since the exposure times of the two exposure parameters differ, two frames with a large brightness difference can be generated, and the target image generated by fusing these two frames can have a larger brightness dynamic range.
  • In another implementation, the image sensor includes both PD pixels and imaging pixels; that is, some of the pixels in the Sensor are imaging pixels and the others are PD pixels.
  • In this case, the first exposure parameter includes a third exposure parameter and a fourth exposure parameter, where the value of the third exposure parameter is the same as or different from the value of the fourth exposure parameter, and the first image includes a third image and a fourth image.
  • FIG. 9 shows a schematic pixel layout of the Sensor in this embodiment: the Sensor includes the L pixels and R pixels of the PD pixels, and every pixel position other than the L and R pixels shown is an imaging pixel; therefore, only some of the pixels in the Sensor are PD pixels, and the remaining pixels are RGB pixels.
  • In step 101, a third exposure can be performed on the left pixels among the PD pixels in the image sensor using the third exposure parameter to generate a third image, and a fourth exposure can be performed on the right pixels among the PD pixels in the image sensor using the fourth exposure parameter to generate a fourth image.
  • In step 102, a second exposure can be performed on the imaging pixels in the image sensor using the second exposure parameter to generate a second image.
  • The frame rates corresponding to the third exposure, the fourth exposure, and the second exposure are the same. That is, when the L pixels among the PD pixels, the R pixels among the PD pixels, and all the RGB pixels other than the PD pixels in the Sensor are exposed, the exposure frame rate is the same, but the exposure parameters used for the imaging pixels differ from those used for the PD pixels; image processing is thus performed in a single-frame multiple-exposure mode, generating the third image, the fourth image, and the second image respectively.
  • In step 103, image fusion may be performed on the third image, the fourth image, and the second image to generate a target picture (see the sketch below).
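  • Extending the earlier two-exposure sketch, a generic per-pixel weighted blend over the third, fourth, and second images could look like this (again only an illustrative weighting, not the patent's prescribed algorithm):

```python
import numpy as np

def fuse_exposures(imgs: list) -> np.ndarray:
    """Fuse any number of equally sized frames (float arrays in [0, 1]).

    Each frame is weighted per pixel by how well-exposed it is, so every
    region of the target picture is dominated by the frame that captured
    it with the most usable brightness.
    """
    stack = np.stack(imgs)                      # (N, H, W)
    weights = 1.0 - np.abs(stack - 0.5) * 2.0   # tent weighting per frame
    weights = np.clip(weights, 1e-6, 1.0)       # avoid zero weight sums
    return (weights * stack).sum(axis=0) / weights.sum(axis=0)

# target = fuse_exposures([third_image, fourth_image, second_image])
```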
  • the image fusion algorithm may adopt any image fusion algorithm in the conventional technology, which will not be repeated here.
  • In this way, the L pixels and R pixels among the PD pixels can be exposed separately, and the imaging pixels can be exposed separately, with the exposure parameters used for the imaging pixels differing from those used for the PD pixels. The brightness therefore differs among the second image generated from the imaging pixels, the third image generated from the L pixels among the PD pixels, and the fourth image generated from the R pixels among the PD pixels. The target image generated by fusing the third image, the fourth image, and the second image can then include both the brightness corresponding to the exposure parameter of the imaging pixels and the brightness corresponding to the exposure parameters of the PD pixels, which improves the brightness dynamic range of the target image.
  • Optionally, the value of the third exposure parameter is the same as the value of the fourth exposure parameter; that is, the same exposure parameter is used to separately expose the L pixels and the R pixels among the PD pixels, but this exposure parameter still differs from the one used for the imaging pixels in the Sensor. In that case, when performing image fusion on the third image, the fourth image, and the second image to generate the target picture, any one or two frames among the third image, the fourth image, and a fifth image may be fused with the second image to generate the target image, where the fifth image is an image generated by performing image fusion on the third image and the fourth image.
  • For example, as a continuation of FIG. 9, refer to FIG. 10: the pixel layout in FIG. 10 is consistent with the pixel layout in FIG. 9, and shades of gray are drawn within each pixel grid in FIG. 10 to represent the exposure parameters.
  • In exposing, the third exposure can be performed on each L pixel shown in FIG. 10 using exposure parameter 41, and the fourth exposure can be performed on each R pixel shown in FIG. 10 also using exposure parameter 41; in addition, the second exposure is performed on each imaging pixel shown in FIG. 10 using exposure parameter 42.
  • FIG. 10 marks the reference signs of the exposure parameters only for the imaging pixels of the first row in the Sensor; for the exposure parameters of the imaging pixels of the other rows, refer to the reference signs of the first row.
  • The arrows in FIG. 10 represent the pixels of each row in the Sensor that are not shown.
  • In this example, although the values of the exposure parameters used for the L pixels and the R pixels among the PD pixels are the same, the two types of pixels are still exposed independently, thereby generating the third image and the fourth image. In addition, image fusion can be performed on the third image and the fourth image to generate a fifth image, and any one or two frames among the third, fourth, and fifth images can then be fused with the second image to generate a target image with a high brightness dynamic range.
  • In this way, the L pixels and R pixels among the PD pixels can be exposed separately with the same exposure parameter, and the imaging pixels can be exposed separately with an exposure parameter different from that of the PD pixels, so the brightness differs among the second image generated from the imaging pixels, the third image generated from the L pixels, and the fourth image generated from the R pixels. When fusing, the third image and the fourth image can first be fused into a fifth image whose brightness differs from both, and then one or two frames selected from the third, fourth, and fifth images are fused with the second image of the imaging pixels, so that the brightness range of the resulting target image is larger than that of the target image generated by the scheme in which all pixels in the image sensor are PD pixels, further improving the brightness dynamic range of the target image.
  • Optionally, when the value of the third exposure parameter differs from the value of the fourth exposure parameter, image fusion can be performed on any two or three frames among the third image, the fourth image, the sixth image, and the second image to generate a target image, where the sixth image is an image generated by performing image fusion on the third image and the fourth image.
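  • Reusing the fuse_exposures sketch above with synthetic stand-in frames (hypothetical variables; the patent leaves open which two or three frames are fused):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-ins for the third, fourth, and second images.
third_image = rng.random((4, 4))
fourth_image = rng.random((4, 4))
second_image = rng.random((4, 4))

sixth_image = fuse_exposures([third_image, fourth_image])
# One allowed choice: fuse two frames from {third, fourth, sixth, second}.
target = fuse_exposures([second_image, sixth_image])
# Another allowed choice: fuse three frames.
target_alt = fuse_exposures([second_image, third_image, fourth_image])
```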
  • Exemplarily, as a continuation of FIG. 9, refer to FIG. 11: the pixel layout in FIG. 11 is consistent with the pixel layout in FIG. 9, and shades of gray are drawn within each pixel grid in FIG. 11 to represent the exposure parameters.
  • In exposing, each L pixel shown in FIG. 11 can be exposed using exposure parameter 51 (represented by the black vertical lines in the pixel grids containing L pixels), each R pixel shown in FIG. 11 is exposed using exposure parameter 52, and the imaging pixels shown in FIG. 11 are exposed using exposure parameter 53 (represented by the black vertical lines in the pixel grids containing imaging pixels).
  • FIG. 11 marks the reference signs only for the first row of imaging pixels in the Sensor, and the arrows in FIG. 11 represent the pixels of each row that are not shown.
  • In this example, the values of the exposure parameters used for the L pixels and the R pixels among the PD pixels are different, and these two types of pixels are exposed independently, thereby generating the third image and the fourth image. In addition, any two frames among the second image, the third image, the fourth image, and the sixth image can optionally be fused to generate a target image with a high brightness dynamic range, where the sixth image is an image generated by fusing the third image and the fourth image.
  • In this way, when the pixel layout of the image sensor is such that some pixels are PD pixels and the other pixels are imaging pixels, the L pixels and R pixels among the PD pixels can be exposed separately with different exposure parameters, and the imaging pixels can be exposed separately with yet another exposure parameter, so that the brightness of the second image generated from the imaging pixels differs from that of the third image generated from the L pixels and the fourth image generated from the R pixels.
  • When fusing, two frames can optionally be selected from the second image, the third image, the fourth image, and the sixth image, where the sixth image is an image generated by fusing the third image and the fourth image. Since the four candidate images all differ in brightness, any target image generated from two of them has a high brightness dynamic range; the brightness range of the target image generated after fusion is therefore larger than that of the target image generated by the scheme in which all pixels in the image sensor are PD pixels, and larger than that obtained by the scheme in which some pixels are PD pixels but the exposure parameters of the left and right PD pixels are the same, further improving the brightness dynamic range of the target image.
  • It should be noted that the image processing method provided in the embodiments of the present application may be executed by an image processing apparatus, or by a control module in the image processing apparatus for executing the image processing method. In the embodiments of the present application, the case where the image processing apparatus executes the image processing method is taken as an example to describe the image processing apparatus provided in the embodiments.
  • FIG. 12 shows a block diagram of an image processing device according to an embodiment of the present application.
  • the image processing device includes:
  • the exposure module 201 is configured to perform a first exposure on a first pixel in the image sensor using a first exposure parameter to generate a first image; and perform a second exposure on a second pixel in the image sensor using a second exposure parameter, generating a second image, wherein the value of the first exposure parameter is different from the value of the second exposure parameter;
  • a fusion module 202 configured to perform image fusion on the first image and the second image to generate a target picture
  • the first pixel is different from the second pixel
  • the first exposure and the second exposure correspond to the same frame rate.
  • Optionally, each pixel in the image sensor is a phase-detection (PD) pixel used for phase focusing, the first pixel includes the left pixels among the PD pixels, and the second pixel includes the right pixels among the PD pixels;
  • the exposure module 201 includes:
  • the first exposure sub-module is configured to perform first exposure on the left pixel in the image sensor using first exposure parameters to generate a first image
  • the second exposure sub-module is configured to perform second exposure on the right pixel in the image sensor using second exposure parameters to generate a second image.
  • Optionally, the first exposure parameter includes a third exposure parameter and a fourth exposure parameter, the value of the third exposure parameter is the same as or different from the value of the fourth exposure parameter, and the first image includes a third image and a fourth image;
  • the exposure module 201 includes:
  • a third exposure submodule configured to perform a third exposure on the left pixel of the PD pixel in the image sensor using the third exposure parameter to generate a third image
  • the fourth exposure sub-module is configured to perform fourth exposure on the right pixel of the PD pixel in the image sensor using the fourth exposure parameter to generate a fourth image;
  • a fifth exposure sub-module, configured to perform a second exposure on the imaging pixels in the image sensor using the second exposure parameter to generate a second image;
  • the frame rates corresponding to the third exposure, the fourth exposure, and the second exposure are the same;
  • the fusion module 202 includes:
  • the fusion sub-module is configured to perform image fusion on the third image, the fourth image, and the second image to generate a target picture.
  • the fusion submodule includes:
  • a first fusion unit, configured to fuse any one or two frames among the third image, the fourth image, and the fifth image with the second image to generate a target image, where the fifth image is an image generated after performing image fusion on the third image and the fourth image.
  • the fusion submodule includes:
  • a second fusion unit, configured to perform image fusion on any two or three frames among the third image, the fourth image, the sixth image, and the second image to generate a target image, where the sixth image is an image generated after performing image fusion on the third image and the fourth image.
  • In the embodiments of the present application, a first exposure is performed on a first pixel in the image sensor using a first exposure parameter to generate a first image, and a second exposure is performed on a second pixel in the image sensor using a second exposure parameter to generate a second image. The frame rate used to expose the first pixel and the second pixel is the same, so there is no frame-rate difference between the first image and the second image; they differ only in the values of the exposure parameters used, and images with different exposure levels can be generated through multiple exposures within a single frame. Whether or not the first image and the second image are exposed and output at the same time, this solves the problem in the traditional technology that frame-rate differences in multi-frame image fusion keep the improvement in brightness dynamic range relatively small, and it reduces the time difference between the first image and the second image used for fusion. In addition, since the first image and the second image are exposed at the same frame rate, if they are exposed and output at the same time, their different exposure parameters give them a large brightness difference, so the brightness dynamic range of the target image can be improved. Moreover, the method operates with a single image sensor, so component differences between camera modules cannot cause picture-quality differences between images taken with different exposure parameters; image quality is ensured, and power consumption and cost are not increased. Finally, the method fuses the first image and the second image to generate the target image, so the target image retains the original image information and more image detail, and the image quality is better than with the single-frame processing methods of the traditional technology.
  • the image processing apparatus in the embodiment of the present application may be a device, or may be a component, an integrated circuit, or a chip in a terminal.
  • the device may be a mobile electronic device or a non-mobile electronic device.
  • For example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a handheld computer, an in-vehicle electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, a personal digital assistant (PDA), or the like.
  • The non-mobile electronic device may be a personal computer (PC), a television (TV), a teller machine, a self-service machine, or the like, which is not specifically limited in the embodiments of the present application.
  • the image processing device in the embodiment of the present application may be a device with an operating system.
  • the operating system may be an Android operating system, an iOS operating system, or other possible operating systems, which are not specifically limited in this embodiment of the present application.
  • the image processing apparatus provided in the embodiments of the present application can implement the various processes implemented in the foregoing method embodiments, and details are not repeated here to avoid repetition.
  • The embodiment of the present application further provides an electronic device 2000, including a processor 2002, a memory 2001, and a program or instruction stored in the memory 2001 and executable on the processor 2002. When the program or instruction is executed by the processor 2002, each process of the above image processing method embodiment is realized and the same technical effect is achieved; to avoid repetition, details are not repeated here.
  • the electronic devices in the embodiments of the present application include the above-mentioned mobile electronic devices and non-mobile electronic devices.
  • FIG. 14 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
  • The electronic device 1000 includes, but is not limited to: a radio frequency unit 1001, a network module 1002, an audio output unit 1003, an input unit 1004, a sensor 1005, a display unit 1006, a user input unit 1007, an interface unit 1008, a memory 1009, a processor 1010, and other components.
  • The electronic device 1000 can also include a power supply (such as a battery) for supplying power to the various components; the power supply can be logically connected to the processor 1010 through a power management system, so that functions such as charging management, discharging management, and power consumption management are realized through the power management system.
  • The structure of the electronic device shown in FIG. 14 does not constitute a limitation on the electronic device; the electronic device may include more or fewer components than shown, combine certain components, or use a different arrangement of components, and details are not repeated here.
  • The sensor 1005 may be an image sensor.
  • The processor 1010 is configured to perform a first exposure on a first pixel in the image sensor using a first exposure parameter to generate a first image; perform a second exposure on a second pixel in the image sensor using a second exposure parameter to generate a second image, where the value of the first exposure parameter is different from the value of the second exposure parameter; and perform image fusion on the first image and the second image to generate a target picture;
  • the first pixel is different from the second pixel
  • the first exposure and the second exposure correspond to the same frame rate.
  • In the embodiments of the present application, a first exposure is performed on a first pixel in the image sensor using a first exposure parameter to generate a first image, and a second exposure is performed on a second pixel in the image sensor using a second exposure parameter to generate a second image. The frame rate used to expose the first pixel and the second pixel is the same, so there is no frame-rate difference between the first image and the second image; they differ only in the values of the exposure parameters used, and images with different exposure levels can be generated through multiple exposures within a single frame. Whether or not the first image and the second image are exposed and output at the same time, this solves the problem in the traditional technology that frame-rate differences in multi-frame image fusion keep the improvement in brightness dynamic range relatively small, and it reduces the time difference between the first image and the second image used for fusion. In addition, since the first image and the second image are exposed at the same frame rate, if they are exposed and output at the same time, their different exposure parameters give them a large brightness difference, so the brightness dynamic range of the target image can be improved. Moreover, the method operates with a single image sensor, so component differences between camera modules cannot cause picture-quality differences between images taken with different exposure parameters; image quality is ensured, and power consumption and cost are not increased. Finally, the method fuses the first image and the second image to generate the target image, so the target image retains the original image information and more image detail, and the image quality is better than with the single-frame processing methods of the traditional technology.
  • each pixel in the image sensor is a PD pixel
  • the first pixel includes a left pixel among the PD pixels
  • the second pixel includes a right pixel among the PD pixels.
  • The processor 1010 is configured to perform a first exposure on the left pixels in the image sensor using the first exposure parameter to generate a first image, and perform a second exposure on the right pixels in the image sensor using the second exposure parameter to generate a second image.
  • In this way, the first exposure parameter can be used to expose the pixel positions of the L pixels among the PD pixels in the image sensor, and the second exposure parameter can be used to expose the pixel positions of the R pixels among the PD pixels in the image sensor.
  • Since the exposure times of the two exposure parameters differ, two frames with a large brightness difference can be generated, and the target image generated by fusing these two frames can have a larger brightness dynamic range.
  • Optionally, the first exposure parameter includes a third exposure parameter and a fourth exposure parameter, the value of the third exposure parameter is the same as or different from the value of the fourth exposure parameter, and the first image includes a third image and a fourth image.
  • The processor 1010 is configured to perform a third exposure on the left pixels among the PD pixels in the image sensor using the third exposure parameter to generate a third image; perform a fourth exposure on the right pixels among the PD pixels in the image sensor using the fourth exposure parameter to generate a fourth image; perform a second exposure on the imaging pixels in the image sensor using the second exposure parameter to generate a second image; and perform image fusion on the third image, the fourth image, and the second image to generate a target image.
  • the frame rates corresponding to the third exposure, the fourth exposure, and the second exposure are the same;
  • In this way, the L pixels and R pixels among the PD pixels can be exposed separately, and the imaging pixels can be exposed separately, with the exposure parameters used for the imaging pixels differing from those used for the PD pixels. The brightness therefore differs among the second image generated from the imaging pixels, the third image generated from the L pixels among the PD pixels, and the fourth image generated from the R pixels among the PD pixels. The target image generated by fusing the third image, the fourth image, and the second image can then include both the brightness corresponding to the exposure parameter of the imaging pixels and the brightness corresponding to the exposure parameters of the PD pixels, which improves the brightness dynamic range of the target image.
  • Optionally, the processor 1010 is configured to, when the value of the third exposure parameter is the same as the value of the fourth exposure parameter, fuse any one or two frames among the third image, the fourth image, and the fifth image with the second image to generate a target image, where the fifth image is an image generated after image fusion of the third image and the fourth image.
  • In this way, the L pixels and R pixels among the PD pixels can be exposed separately with the same exposure parameter, and the imaging pixels can be exposed separately with an exposure parameter different from that of the PD pixels, so the brightness differs among the second image generated from the imaging pixels, the third image generated from the L pixels, and the fourth image generated from the R pixels. When fusing the third image, the fourth image, and the second image, the third and fourth images can first be fused into a fifth image whose brightness differs from both, and then one or two frames selected from the third, fourth, and fifth images are fused with the second image of the imaging pixels, so that the brightness range of the resulting target image is larger than that of the target image generated by the scheme in which all pixels in the image sensor are PD pixels, further improving the brightness dynamic range of the target image.
  • Optionally, the processor 1010 is configured to, when the value of the third exposure parameter is different from the value of the fourth exposure parameter, perform image fusion on any two or three frames selected from the third image, the fourth image, the sixth image, and the second image to generate a target image, where the sixth image is an image generated after image fusion of the third image and the fourth image.
  • In this way, when the pixel layout of the image sensor is such that some pixels are PD pixels and the other pixels are imaging pixels, the L pixels and R pixels among the PD pixels can be exposed separately with different exposure parameters, and the imaging pixels can be exposed separately with yet another exposure parameter, so that the brightness of the second image generated from the imaging pixels differs from that of the third image generated from the L pixels and the fourth image generated from the R pixels.
  • When fusing, two frames can optionally be selected from the second image, the third image, the fourth image, and the sixth image, where the sixth image is an image generated by fusing the third image and the fourth image. Since the four candidate images all differ in brightness, any target image generated from two of them has a high brightness dynamic range; the brightness range of the target image generated after fusion is therefore larger than that of the target image generated by the scheme in which all pixels in the image sensor are PD pixels, and larger than that obtained by the scheme in which some pixels are PD pixels but the exposure parameters of the left and right PD pixels are the same, further improving the brightness dynamic range of the target image.
  • The input unit 1004 may include a graphics processing unit (GPU) 10041 and a microphone 10042; the graphics processor 10041 processes image data of still pictures or video obtained by an image capture device (such as a camera).
  • the display unit 1006 may include a display panel 10061, and the display panel 10061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like.
  • the user input unit 1007 includes a touch panel 10071 and other input devices 10072 .
  • the touch panel 10071 is also called a touch screen.
  • the touch panel 10071 may include two parts, a touch detection device and a touch controller.
  • Other input devices 10072 may include, but are not limited to, physical keyboards, function keys (such as volume control buttons, switch buttons, etc.), trackballs, mice, and joysticks, which will not be repeated here.
  • the memory 1009 can be used to store software programs as well as various data, including but not limited to application programs and operating systems.
  • Processor 1010 may integrate an application processor and a modem processor, wherein the application processor mainly processes operating systems, user interfaces, and application programs, and the modem processor mainly processes wireless communications. It can be understood that the foregoing modem processor may not be integrated into the processor 1010 .
  • The embodiment of the present application also provides a readable storage medium storing a program or instruction. When the program or instruction is executed by a processor, each process of the above image processing method embodiment is realized and the same technical effect can be achieved; to avoid repetition, details are not repeated here.
  • the processor is the processor in the electronic device described in the above embodiments.
  • the readable storage medium includes computer readable storage medium, such as computer read-only memory (Read-Only Memory, ROM), random access memory (Random Access Memory, RAM), magnetic disk or optical disk, etc.
  • The embodiment of the present application further provides a chip, which includes a processor and a communication interface coupled to the processor, the processor being configured to run a program or instruction to implement each process of the above image processing method embodiment and achieve the same technical effect; to avoid repetition, details are not repeated here.
  • The chip mentioned in the embodiments of the present application may also be referred to as a system-level chip, a system chip, a chip system, or a system-on-chip.
  • It should be noted that, herein, the terms "comprise", "include", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus comprising a set of elements includes not only those elements but also other elements not expressly listed, or elements inherent to the process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a..." does not preclude the presence of additional identical elements in the process, method, article, or apparatus comprising that element.
  • Furthermore, the methods and apparatuses in the embodiments of the present application are not limited to performing functions in the order shown or discussed; they may also perform functions in a substantially simultaneous manner or in the reverse order, depending on the functions involved. For example, the described methods may be performed in an order different from that described, and various steps may also be added, omitted, or combined. Additionally, features described with reference to certain examples may be combined in other examples.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

The present application discloses an image processing method and apparatus, an electronic device, and a readable storage medium, belonging to the technical field of image processing. The method comprises: performing a first exposure on first pixels in an image sensor using a first exposure parameter to generate a first image; performing a second exposure on second pixels in the image sensor using a second exposure parameter to generate a second image, wherein the value of the first exposure parameter differs from the value of the second exposure parameter; and performing image fusion on the first image and the second image to generate a target picture; wherein the first pixels differ from the second pixels, and the first exposure and the second exposure correspond to the same frame rate.

Description

Image processing method and apparatus, electronic device, and readable storage medium
CROSS-REFERENCE TO RELATED APPLICATIONS
This application claims priority to Chinese Patent Application No. 202110953654.6, entitled "Image processing method, apparatus, electronic device and readable storage medium" and filed with the China Patent Office on August 19, 2021, the entire contents of which are incorporated herein by reference.
TECHNICAL FIELD
The present application belongs to the technical field of image processing, and specifically relates to an image processing method, an image processing apparatus, an electronic device, and a readable storage medium.
BACKGROUND
With advances in technology, the photographing capabilities of electronic devices have continuously improved in terms of high definition, high resolution, high dynamic range (HDR, High-Dynamic Range), and high signal-to-noise ratio. HDR allows a large ratio between the maximum and minimum of the variable signal in an image, where the variable signal is mainly reflected in luminance. Given the variability of the luminance range in an image, high dynamic range has attracted increasing user attention. At present, common ways to improve the dynamic range of an image mainly include the following three:
Mode 1: processing a single-frame image, for example by local tone mapping, global tone mapping, or the like;
However, with this single-frame processing, local tone mapping tends to cause artifacts at the edges of individual blocks, while global tone mapping sacrifices the gray levels of part of the input luminance to enhance specific luminance, which impairs image detail.
Mode 2: capturing multiple frames with different exposures using the same module and then performing HDR synthesis on the multiple frames;
However, in this multi-frame synthesis approach, the frame rates of the multiple frames differ and their exposure timings differ; it mainly works by combining two frames taken with different exposures at different times into one frame. The luminance dynamic range of an image synthesized this way is limited by the settable integration time, which determines the exposure time. If a high frame rate is fixed, the settable range of the exposure time shrinks and the gain in luminance dynamic range is relatively small, resulting in large frame-rate differences among the multiple frames.
Mode 3: capturing different frames with different modules (each containing a lens assembly and a sensor assembly) and then performing HDR synthesis on the multiple frames;
However, when multiple frames of different luminance are acquired through multiple modules and then synthesized, differences among the sensor assemblies cause certain differences in image quality among the frames, affecting overall image quality; moreover, multiple modules increase power consumption and cost.
Therefore, the related-art methods for improving the luminance dynamic range of an image commonly suffer from impaired image detail and image quality, a small gain in luminance dynamic range, large frame-rate differences among multiple frames, and increased power consumption and cost.
SUMMARY
Embodiments of the present application aim to provide an image processing method, apparatus, electronic device, and readable storage medium capable of solving the problems of the related-art methods for improving the luminance dynamic range of an image: impaired image detail and image quality, a small gain in luminance dynamic range, large frame-rate differences among multiple frames, and increased power consumption and cost.
In a first aspect, an embodiment of the present application provides an image processing method, the method comprising:
performing a first exposure on first pixels in an image sensor using a first exposure parameter to generate a first image;
performing a second exposure on second pixels in the image sensor using a second exposure parameter to generate a second image, wherein the value of the first exposure parameter differs from the value of the second exposure parameter;
performing image fusion on the first image and the second image to generate a target picture;
wherein the first pixels differ from the second pixels;
wherein the first exposure and the second exposure correspond to the same frame rate.
In a second aspect, an embodiment of the present application provides an image processing apparatus, the apparatus comprising:
an exposure module, configured to perform a first exposure on first pixels in an image sensor using a first exposure parameter to generate a first image, and to perform a second exposure on second pixels in the image sensor using a second exposure parameter to generate a second image, wherein the value of the first exposure parameter differs from the value of the second exposure parameter;
a fusion module, configured to perform image fusion on the first image and the second image to generate a target picture;
wherein the first pixels differ from the second pixels;
wherein the first exposure and the second exposure correspond to the same frame rate.
In a third aspect, an embodiment of the present application provides an electronic device, comprising a processor, a memory, and a program or instruction stored in the memory and executable on the processor, wherein the program or instruction, when executed by the processor, implements the steps of the method according to the first aspect.
In a fourth aspect, an embodiment of the present application provides a readable storage medium storing a program or instruction which, when executed by a processor, implements the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, comprising a processor and a communication interface, wherein the communication interface is coupled to the processor, and the processor is configured to run a program or instruction to implement the method according to the first aspect.
In the embodiments of the present application, a first exposure is performed on first pixels in an image sensor using a first exposure parameter to generate a first image, and a second exposure is performed on second pixels in the image sensor using a second exposure parameter to generate a second image, where the frame rate used for exposing the first pixels and the second pixels is the same, so that no frame-rate difference exists between the first image and the second image; they differ only in the values of the exposure parameters used. Different images with different exposure levels can thus be generated in a single-frame, multi-exposure manner. Whether or not the first image and the second image are exposed and output simultaneously, this solves the problem in the conventional art that multi-frame image fusion achieves only a relatively small gain in luminance dynamic range owing to frame-rate differences, and it narrows the time difference between the first image and the second image used for fusion. In addition, because the first image and the second image are exposed at the same frame rate, if they are exposed and output simultaneously, their different exposure parameters give them a large luminance difference, which improves the luminance dynamic range of the target image. Furthermore, the method operates with a single image sensor, so component differences among camera modules cannot cause image-quality differences among the differently exposed images; image quality is thus ensured, and neither power consumption nor cost increases. Finally, the method fuses the first image and the second image to generate the target image, so the target image preserves the original image information and more image detail, yielding better image quality than conventional single-frame image processing.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a flowchart of an image processing method according to an embodiment of the present application;
FIG. 2 is a first schematic diagram of a pixel layout of an image sensor according to an embodiment of the present application;
FIG. 3 is a second schematic diagram of a pixel layout of an image sensor according to an embodiment of the present application;
FIG. 4 is a first schematic diagram of an image according to an embodiment of the present application;
FIG. 5 is a second schematic diagram of an image according to an embodiment of the present application;
FIG. 6 is a third schematic diagram of an image according to an embodiment of the present application;
FIG. 7 is a third schematic diagram of a pixel layout of an image sensor according to an embodiment of the present application;
FIG. 8 is a fourth schematic diagram of a pixel layout of an image sensor according to an embodiment of the present application;
FIG. 9 is a fifth schematic diagram of a pixel layout of an image sensor according to an embodiment of the present application;
FIG. 10 is a sixth schematic diagram of a pixel layout of an image sensor according to an embodiment of the present application;
FIG. 11 is a seventh schematic diagram of a pixel layout of an image sensor according to an embodiment of the present application;
FIG. 12 is a block diagram of an image processing apparatus according to an embodiment of the present application;
FIG. 13 is a schematic diagram of the hardware structure of an electronic device according to an embodiment of the present application;
FIG. 14 is a schematic diagram of the hardware structure of an electronic device according to another embodiment of the present application.
DETAILED DESCRIPTION
The technical solutions in the embodiments of the present application will be described clearly below with reference to the accompanying drawings. The described embodiments are evidently only some, not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application fall within the scope of protection of the present application.
The terms "first", "second", and the like in the specification and claims of the present application are used to distinguish similar objects, not to describe a specific order or sequence. It should be understood that data so used are interchangeable where appropriate, so that the embodiments of the present application can be implemented in orders other than those illustrated or described here. Objects distinguished by "first", "second", and the like are usually of one class, and the number of such objects is not limited; for example, there may be one first object or multiple first objects. In addition, "and/or" in the specification and claims denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
The image processing method provided by the embodiments of the present application is described in detail below through specific embodiments and their application scenarios with reference to the accompanying drawings.
Referring to FIG. 1, a flowchart of an image processing method according to an embodiment of the present application is shown. The method may specifically include the following steps:
Step 101: perform a first exposure on first pixels in an image sensor using a first exposure parameter to generate a first image;
Step 102: perform a second exposure on second pixels in the image sensor using a second exposure parameter to generate a second image, wherein the value of the first exposure parameter differs from the value of the second exposure parameter;
wherein the first pixels differ from the second pixels.
Pixel types may include imaging pixels, which refer to red-green-blue (RGB) pixels, and phase (PD, phase detection) pixels used for phase-detection autofocus, where PD pixels are further divided into two pixel types: left (L) pixels and right (R) pixels.
Accordingly, the first pixels and the second pixels may be pixels of the different pixel types exemplified above.
PD focusing achieves focusing through phase detection. Specifically, on top of the original imaging pixels of an image sensor (Sensor) (see the schematic diagram of imaging pixels in FIG. 2; the layout of imaging pixels is not limited to FIG. 2; the imaging pixels include GR pixels 61, R pixels 62, B pixels 63, and GB pixels 64; in this example the image sensor may be applied to scenes where human faces are photographed, and since faces are more sensitive to G pixels, the G channel can be given a G pixel on the red-green channel, i.e., the GR pixel 61, and a G pixel on the blue-green channel, i.e., the GB pixel 64), a type of PD pixel is added, where PD pixels may include left (L) pixels and right (R) pixels. For example, comparing FIG. 3 with FIG. 2, it can be seen that this image sensor adds PD pixels (shown as L and R) on top of the imaging pixels of FIG. 2. In FIG. 2 and FIG. 3, identical reference numerals denote identical objects, so the reference numerals of FIG. 3 are not repeated here; refer to the explanation of FIG. 2. The arrangement of PD pixels is not limited to FIG. 3, and the PD pixels can thus assist focusing. From the pixel values of the L pixels and R pixels, the phase difference (Phase diff) of the focus region can be calculated, thereby achieving phase-detection autofocus.
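As a purely illustrative aside on this last step, the sketch below estimates the phase difference between an L-pixel profile and an R-pixel profile of a focus region by a brute-force shift search; the profile data, search range, and sign convention are assumptions made for the sketch, not values taken from this application.

    import numpy as np

    def phase_diff(l_profile: np.ndarray, r_profile: np.ndarray, max_shift: int = 8) -> int:
        """Estimate the horizontal phase difference (in pixels) between the
        L-pixel and R-pixel profiles by minimizing the sum of absolute
        differences over candidate shifts."""
        best_shift, best_cost = 0, float("inf")
        valid = slice(max_shift, len(l_profile) - max_shift)  # skip wrapped borders
        for shift in range(-max_shift, max_shift + 1):
            cost = np.abs(l_profile[valid] - np.roll(r_profile, shift)[valid]).sum()
            if cost < best_cost:
                best_shift, best_cost = shift, cost
        return best_shift

    # Toy usage: a defocused pair is simulated by shifting one profile.
    l = np.sin(np.linspace(0.0, 6.0, 64))
    r = np.roll(l, 3)
    print(phase_diff(l, r))  # -3 under this sketch's sign convention

An in-focus region would yield a phase difference near zero; the lens would be driven until that condition is met.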
Regarding L pixels and R pixels: for a pixel of the image sensor, if half of the pixel position is covered with metal so that a pixel whose left half is covered can only receive light coming from the left, that pixel is called an L pixel; likewise, a pixel whose right half is covered can only receive light coming from the right and is called an R pixel. In the image sensor, L pixels and R pixels appear in pairs at adjacent positions, as shown in FIG. 3.
The exposure parameters include, but are not limited to, integration time (integration time, INT), analog gain (gain), and digital gain.
Integration time expresses the exposure time (exposure time) in units of rows. For example, an INT of 159 means the image sensor's (Sensor's) exposure time is 159 rows. Integration time and exposure time mean the same thing — both express the sensor's exposure time — but integration time is a relative concept measured in rows, where the absolute time occupied by each row depends on the clock frequency and on how many pclk each row contains (i.e., the line length); exposure time, by contrast, is the absolute time for which the sensor is exposed.
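The relationship just described can be made concrete with a short sketch; the line length and pixel-clock frequency below are illustrative assumptions, not values from this application.

    def exposure_time_s(int_lines: int, line_length_pclk: int, pclk_hz: float) -> float:
        """Convert a row-based integration time (INT) into an absolute
        exposure time: each row lasts line_length_pclk pixel clocks."""
        row_time_s = line_length_pclk / pclk_hz
        return int_lines * row_time_s

    # INT = 159 rows, a hypothetical 2200-pclk line at a 96 MHz pixel clock:
    print(exposure_time_s(159, 2200, 96e6))  # ~0.00364 s, i.e. about 3.64 ms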
In step 101 and step 102, the first pixels of the sensor, which include PD pixels, are exposed using the first exposure parameter, while the second pixels of the sensor, which include imaging pixels, are exposed using the second exposure parameter; the two parameters take different values. For example, if the exposure parameter is the exposure duration, the exposure duration of the first pixels (including PD pixels) differs from that of the second pixels (including imaging pixels), so the two generated frames differ in luminance.
Optionally, step 101 and step 102 may be executed simultaneously, with the first exposure and the second exposure corresponding to the same frame rate, so that the first image and the second image are triggered to be exposed at the same time, i.e., with the same exposure timing. In other words, although the first image and the second image, having different exposure parameters, differ in exposure time (the duration used for exposure), the moments at which their exposures are triggered are the same, so no time difference exists between the two frames.
Moreover, since the first exposure and the second exposure correspond to the same frame rate, within the same time period the number of first-image frames generated by the first exposure equals the number of second-image frames generated by the second exposure.
Then, following the generation order of the first images and of the second images, first and second images of the same ordinal position can be matched with each other, and fusing such a matched pair generates one frame of the target image. For example, taking the first ordinal position, the first generated frame of the first image and the first generated frame of the second image can be fused to generate the first frame of the target image.
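A minimal sketch of this pairing, assuming the two exposures deliver their frames as two equally long sequences (the variable names are hypothetical):

    def pair_frames(first_images: list, second_images: list) -> list:
        """Pair same-ordinal frames from the two exposure streams; the two
        streams have equal length because both exposures share one frame rate."""
        assert len(first_images) == len(second_images)
        return list(zip(first_images, second_images))

    # pair_frames(firsts, seconds)[0] is the pair fused into the first target image.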
Therefore, if the first and second images of the same ordinal position are output simultaneously, the pair of simultaneously output images can be fused, which solves the problem of time differences among multiple frames in the conventional art.
If the first and second images of the same ordinal position are not output simultaneously — that is, there is an output order, for example the first frame of the first image is output before the first frame of the second image — then, according to their respective generation orders, the first and second images of the same ordinal position can still be grouped and fused. Although the two frames are not output simultaneously, because they correspond to the same frame rate, the time difference between the first image and the second image is still reduced to some extent.
Hence, as long as the first exposure and the second exposure share the same frame rate, the problem of large frame-rate differences among the frames used for synthesis in the conventional art is mitigated, whether or not the first image and the second image are output simultaneously.
When the first image and the second image are controlled to be output simultaneously, this can be achieved by setting a blanking period for the exposed first image and/or second image; the blanking period may include a horizontal blanking period, i.e., horizontal blank time (horizontal blank time, HBLT), and/or a vertical blanking period, i.e., vertical blank time (vertical blank time, VBLT).
Specifically, because the first and second pixels use exposure parameters of different values — which may concretely manifest as different exposure times — a horizontal blanking period and/or vertical blanking period can optionally be set for the image with the shorter exposure time, so that it waits for the image with the longer exposure time to finish exposing the corresponding pixels of all rows, allowing the first image and the second image to be output simultaneously.
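As a simplified model of this synchronization — all quantities in row times, readout overhead ignored, so this is an assumption rather than a description of any concrete sensor — the shorter exposure can be padded with vertical blanking until the two frame lengths match:

    def extra_vblank_rows(active_rows: int, int_short: int, int_long: int) -> int:
        """Extra vertical-blanking rows for the shorter exposure so that both
        exposures finish a frame after the same number of row times."""
        frame_long = active_rows + int_long
        frame_short = active_rows + int_short
        return max(0, frame_long - frame_short)

    # A frame of 3000 active rows, INT of 100 rows versus 1600 rows:
    print(extra_vblank_rows(3000, 100, 1600))  # -> 1500 additional blank rows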
By exposing different pixels of the sensor with different exposure parameters at the same frame rate and treating the images generated by the two exposure parameters as two images — the first image and the second image — a picture with a high-dynamic luminance range can be obtained through single-frame multi-exposure and fusion. Because the two exposures share one frame rate, the frame-rate difference between the first image and the second image is eliminated, further enlarging the gain in luminance dynamic range.
For example, in the conventional art the pixels of a sensor share a single, unified exposure-control path. In this embodiment, to expose the first pixels and the second pixels of the sensor with different exposure parameters, a dedicated exposure-control path (which may be embodied as a semiconductor hardware path) can be provided for the first pixels, and another dedicated exposure-control path for the second pixels. The two paths are independent of each other, enabling independent control of the exposure parameters of the first and second pixels.
Specifically, the first exposure parameter is controlled by the first control path and the second exposure parameter by the second control path, where the first control path differs from the second control path and the image sensor is connected to both the first control path and the second control path.
For example, when the exposure parameters of the first and second pixels are configured separately, this can be realized by separation on the semiconductor hardware paths, i.e., by connecting the image sensor to different semiconductor hardware paths, through which the image sensor can communicate with a downstream controller.
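In software terms, the two independent paths can be pictured as two register interfaces that share no state; the class, register names, and values below are entirely hypothetical and do not model any real sensor's register map:

    class ExposureControlPath:
        """Illustrative, independent control path holding the exposure
        settings for one pixel group."""
        def __init__(self, name: str):
            self.name = name
            self.registers = {}

        def set_exposure(self, int_lines: int, analog_gain: float) -> None:
            self.registers["INT"] = int_lines
            self.registers["AGAIN"] = analog_gain

    # One path for the first pixels, a separate one for the second pixels:
    first_path = ExposureControlPath("first")
    second_path = ExposureControlPath("second")
    first_path.set_exposure(int_lines=1600, analog_gain=1.0)
    second_path.set_exposure(int_lines=100, analog_gain=2.0)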
Step 103: perform image fusion on the first image and the second image to generate a target picture.
Various image fusion algorithms can be used to fuse the first image and the second image into the target image. Because the first image and the second image differ in luminance, the target picture generated by fusing the two can be a high-dynamic picture with an improved luminance dynamic range.
By way of example, FIG. 4 and FIG. 5 show the first image and the second image respectively, and FIG. 6 shows the target image.
Exposing the first pixels of the sensor with a first exposure parameter having a relatively long exposure time generates the image shown in FIG. 4, here named image 1; image 1 contains many over-exposed regions.
Exposing the second pixels of the sensor with a second exposure parameter having a relatively short exposure time generates the image shown in FIG. 5, here named image 2; image 2 contains many under-exposed regions.
Fusing image 1 and image 2 generates the image shown in FIG. 6, here named image 3.
Because image 1 and image 2 use exposure parameters of different values and therefore different exposure times, their luminance differs considerably, so image 3, synthesized from image 1 and image 2, has a large luminance dynamic range, achieving the effect of improving the luminance dynamic range of the image.
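The fusion algorithm itself is left open by this application, so the sketch below stands in with a generic exposure-weighted blend of a long-exposure frame and a short-exposure frame; the weighting scheme and the [0, 1] value range are assumptions:

    import numpy as np

    def fuse_hdr(long_img: np.ndarray, short_img: np.ndarray) -> np.ndarray:
        """Blend two differently exposed frames (float arrays in [0, 1]):
        well-exposed pixels of the long frame dominate, and where the long
        frame clips toward white the short frame fills in the highlights."""
        w_long = 1.0 - np.clip((long_img - 0.8) / 0.2, 0.0, 1.0)
        fused = w_long * long_img + (1.0 - w_long) * short_img
        return np.clip(fused, 0.0, 1.0)

    # image 1 (over-exposed regions) + image 2 (under-exposed regions) -> image 3
    image1 = np.random.rand(4, 4)
    image2 = image1 * 0.25  # the short exposure is darker
    image3 = fuse_hdr(image1, image2)

In a production pipeline the short frame would typically be gain-aligned to the long frame first; that step is omitted here to keep the sketch minimal.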
In this embodiment of the present application, a first exposure is performed on first pixels in an image sensor using a first exposure parameter to generate a first image, and a second exposure is performed on second pixels in the image sensor using a second exposure parameter to generate a second image, where the frame rate used for exposing the first pixels and the second pixels is the same, so that no frame-rate difference exists between the first image and the second image; they differ only in the values of the exposure parameters used. Different images with different exposure levels can thus be generated in a single-frame, multi-exposure manner. Whether or not the first image and the second image are exposed and output simultaneously, this solves the problem in the conventional art that multi-frame image fusion achieves only a relatively small gain in luminance dynamic range owing to frame-rate differences, and it narrows the time difference between the first image and the second image used for fusion. In addition, because the first image and the second image are exposed at the same frame rate, if they are exposed and output simultaneously, their different exposure parameters give them a large luminance difference, which improves the luminance dynamic range of the target image. Furthermore, the method operates with a single image sensor, so component differences among camera modules cannot cause image-quality differences among the differently exposed images; image quality is thus ensured, and neither power consumption nor cost increases. Finally, the method fuses the first image and the second image to generate the target image, so the target image preserves the original image information and more image detail, yielding better image quality than conventional single-frame image processing.
The method can be applied to electronic devices with PD focusing, such as cameras or mobile phones. By separating the parameter control of imaging pixels and PD pixels and independently setting different exposure parameters, the luminance dynamic range of the images captured by the camera can be improved.
Optionally, in one embodiment, every pixel in the image sensor is a PD pixel; that is, each pixel of the image sensor is a PD pixel.
By way of example, FIG. 7 shows a pixel-layout schematic of such an image sensor. In FIG. 7, every pixel is a PD pixel — specifically an L pixel or an R pixel among the PD pixels — and there are no imaging pixels. FIG. 7 shows the pixel layout in a 2PD arrangement; in other examples, the layout may be that of a 4PD, 8PD, or other sensor in which every pixel is a PD pixel.
Here, the first pixels include the left pixels of the PD pixels, and the second pixels include the right pixels of the PD pixels;
thus, in FIG. 7, the first pixels include every L pixel shown in FIG. 7, and the second pixels include the R pixel of every PD pixel shown in FIG. 7.
Then, when performing step 101, a first exposure can be performed on the left pixels of the image sensor using the first exposure parameter to generate the first image;
and when performing step 102, a second exposure can be performed on the right pixels of the image sensor using the second exposure parameter to generate the second image.
By way of example, refer to FIG. 8, whose pixel layout is identical to that of FIG. 7; to keep the lines representing the exposure parameters in FIG. 8 clear, no grayscale shading is drawn in the pixel cells of FIG. 8.
As can be seen from FIG. 8, each L pixel shown in FIG. 8 can be given a first exposure with exposure parameter 31. FIG. 8 shows the reference numerals of the exposure parameters for the first row of PD pixels of the sensor; for the exposure parameters of pixels in the other rows, refer to the reference numerals of the first row. FIG. 8 also shows a second exposure with exposure parameter 32 applied to each R pixel of the PD pixels; again, the reference numerals are shown for the first row of PD pixels, and the other rows follow the first row. The arrows in FIG. 8 represent the pixels of each row that are not shown. Since FIG. 8 extends FIG. 7, refer to the explanation of FIG. 7 for the pixel layout of FIG. 8.
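Assuming, for illustration only, a 2PD mosaic in which L and R pixels alternate along each row starting with L (the actual layouts of FIG. 7 and FIG. 8 may differ), the two sub-images can be separated as follows:

    import numpy as np

    def split_2pd(raw: np.ndarray):
        """Split a 2PD raw frame into the L-pixel image (first exposure)
        and the R-pixel image (second exposure), assuming L and R pixels
        alternate along each row."""
        left = raw[:, 0::2]   # columns 0, 2, 4, ... hold L pixels
        right = raw[:, 1::2]  # columns 1, 3, 5, ... hold R pixels
        return left, right

    raw = np.arange(16, dtype=float).reshape(4, 4)
    first_image, second_image = split_2pd(raw)  # each is 4 x 2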
In this embodiment of the present application, when every pixel of the image sensor is a PD pixel, the pixel positions of the L pixels among the PD pixels can be exposed with the first exposure parameter and the pixel positions of the R pixels among the PD pixels with the second exposure parameter. The exposure durations of the two exposure parameters differ, so two frames with very different luminance can be generated, and the target image generated by fusing the two frames can have a large luminance dynamic range.
Optionally, in one embodiment, the image sensor includes both PD pixels and imaging pixels; that is, some pixels of the sensor are imaging pixels and the other pixels are PD pixels, and the present invention does not restrict the layout between the imaging pixels and the PD pixels. Here, the first exposure parameter includes a third exposure parameter and a fourth exposure parameter, the value of the third exposure parameter being the same as or different from the value of the fourth exposure parameter, and the first image includes a third image and a fourth image.
By way of example, FIG. 9 shows a pixel-layout schematic of the sensor of this embodiment, which includes L pixels and R pixels among the PD pixels; the pixel positions other than the L and R pixels shown are imaging pixels. Thus only some pixels of this sensor are PD pixels, and the remaining pixels are RGB pixels.
Then, when performing step 101, a third exposure can be performed on the left pixels of the PD pixels of the image sensor using the third exposure parameter to generate a third image, and a fourth exposure can be performed on the right pixels of the PD pixels of the image sensor using the fourth exposure parameter to generate a fourth image;
and when performing step 102, a second exposure can be performed on the imaging pixels of the image sensor using the second exposure parameter to generate a second image;
wherein the third exposure, the fourth exposure, and the second exposure correspond to the same frame rate. That is, when the L pixels of the PD pixels, the R pixels of the PD pixels, and the RGB pixels other than the PD pixels are exposed, the exposure frame rate is the same, but the exposure parameter used for the imaging pixels differs from those used for the PD pixels; image processing is performed in a single-frame multi-exposure manner to generate the third image, the fourth image, and the second image respectively.
Then, when performing step 103, image fusion can be performed on the third image, the fourth image, and the second image to generate the target picture.
Any image fusion algorithm of the conventional art may be used; details are not repeated here.
In this embodiment of the present application, when the image sensor includes PD pixels and imaging pixels, the L pixels and R pixels of the PD pixels can each be exposed separately, and the imaging pixels exposed separately as well. Moreover, the exposure parameters used for the imaging pixels and for the PD pixels differ, so the second image generated from the imaging pixels differs in luminance from the third image generated from the L pixels of the PD pixels and the fourth image generated from the R pixels of the PD pixels. The target image generated by fusing the third image, the fourth image, and the second image can then include the luminance of the exposure parameter of the imaging pixels as well as the luminance of the exposure parameters of the PD pixels, improving the luminance dynamic range of the target image.
Optionally, when the value of the third exposure parameter is the same as the value of the fourth exposure parameter — that is, the L pixels and R pixels of the PD pixels are exposed separately with the same exposure parameter, which still differs from the exposure parameter used for the imaging pixels of the sensor — then, when performing image fusion on the third image, the fourth image, and the second image to generate the target picture, any one or two of the third image, the fourth image, and a fifth image can be selected and fused with the second image to generate the target image, where the fifth image is an image generated by image fusion of the third image and the fourth image.
By way of example, continuing from FIG. 9, refer to FIG. 10, whose pixel layout is identical to that of FIG. 9; to keep the lines representing the exposure parameters in FIG. 10 clear, no grayscale shading is drawn in the pixel cells of FIG. 10.
As shown in FIG. 10, in this example each L pixel shown in FIG. 10 can be given a third exposure with exposure parameter 41, where the black vertical line in each pixel position containing an L pixel represents exposure parameter 41; each R pixel shown in FIG. 10 can be given a fourth exposure with exposure parameter 41, likewise indicated by the black vertical line in each pixel position containing an R pixel; and each imaging pixel shown in FIG. 10 is given a second exposure with exposure parameter 42. FIG. 10 shows the reference numerals of the exposure parameters for the first row of imaging pixels of the sensor; for the exposure parameters of imaging pixels in the other rows, refer to the reference numerals of the first row. The arrows in FIG. 10 represent the pixels of each row that are not shown.
Although the values of the exposure parameters used for the L pixels and R pixels of the PD pixels are the same, the two kinds of pixels are still exposed independently, generating the third image and the fourth image; in addition, the third image and the fourth image can be fused into a fifth image, and then any one or two of the third image, the fourth image, and the fifth image can be fused with the second image to generate a target image with a high luminance dynamic range.
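Purely as a sketch of this option — assuming plain averaging as the L/R fusion that produces the fifth image, reusing the fuse_hdr sketch given earlier, and treating the fifth image as the longer exposure, none of which is prescribed by this application:

    import numpy as np

    def fuse_with_fifth(third: np.ndarray, fourth: np.ndarray, second: np.ndarray) -> np.ndarray:
        """Fuse the L-pixel image (third) and the R-pixel image (fourth)
        into a fifth image, then fuse that with the second image."""
        fifth = 0.5 * (third + fourth)  # assumed L/R fusion: plain averaging
        return fuse_hdr(fifth, second)  # fuse_hdr from the earlier sketch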
In this embodiment of the present application, when the pixel layout of the image sensor is such that some pixels are PD pixels and the other pixels are imaging pixels, the L pixels and R pixels of the PD pixels can be exposed separately with the same exposure parameter, and the imaging pixels exposed separately with an exposure parameter different from that of the PD pixels. Because the exposure parameters used for the imaging pixels and the PD pixels differ, the second image generated from the imaging pixels differs in luminance from the third image generated from the L pixels of the PD pixels and the fourth image generated from the R pixels of the PD pixels. When fusing the third image, the fourth image, and the second image, the third and fourth images can first be fused into a fifth image whose luminance differs from that of the third and fourth images; then any one or two of the third, fourth, and fifth images can be selected and fused with the second image from the imaging pixels. The luminance range of the target image generated by such fusion is larger than that of the target image generated by the scheme in which all pixels of the image sensor are PD pixels, further improving the luminance dynamic range of the target image.
Optionally, when the value of the third exposure parameter differs from the value of the fourth exposure parameter — that is, the L pixels and R pixels of the PD pixels are exposed separately with different exposure parameters, both of which also differ from the exposure parameter used for the imaging pixels of the sensor — then, when performing image fusion on the third image, the fourth image, and the second image to generate the target picture, any two or three of the third image, the fourth image, a sixth image, and the second image can be selected for image fusion to generate the target image, where the sixth image is an image generated by image fusion of the third image and the fourth image.
By way of example, continuing from FIG. 9, refer to FIG. 11, whose pixel layout is identical to that of FIG. 9; to keep the lines representing the exposure parameters in FIG. 11 clear, no grayscale shading is drawn in the pixel cells of FIG. 11.
As shown in FIG. 11, in this example each L pixel shown in FIG. 11 can be exposed with exposure parameter 51, where the black vertical line in each pixel position containing an L pixel represents exposure parameter 51; each R pixel shown in FIG. 11 can be exposed with exposure parameter 52, likewise indicated by the black vertical line in each pixel position containing an R pixel; and the imaging pixels shown in FIG. 11 are exposed with exposure parameter 53, indicated by the black vertical line in each imaging-pixel cell. FIG. 11 shows the reference numerals of the exposure parameters for the first row of imaging pixels of the sensor; for the exposure parameters of imaging pixels in the other rows, refer to the reference numerals of the first row. The arrows in FIG. 11 represent the pixels of each row that are not shown.
In this embodiment, the exposure parameters used for the L pixels and R pixels of the PD pixels take different values, and the two kinds of pixels are exposed independently, generating the third image and the fourth image; in addition, any two of the second image, the third image, the fourth image, and the sixth image can be selected and fused to generate a target image with a high luminance dynamic range, where the sixth image is an image generated by image fusion of the third image and the fourth image.
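In the same spirit, the open choice of "any two frames" can be sketched as selecting the pair with the largest mean-brightness gap before fusing; this selection criterion is an assumption, since the application leaves the choice entirely open (fuse_hdr is again the earlier sketch):

    import itertools
    import numpy as np

    def pick_and_fuse(frames: dict) -> np.ndarray:
        """frames maps names such as 'second', 'third', 'fourth', 'sixth'
        to arrays; pick the two frames whose mean brightness differs the
        most and fuse them, with the brighter frame as the 'long' input."""
        (_, a), (_, b) = max(
            itertools.combinations(frames.items(), 2),
            key=lambda pair: abs(pair[0][1].mean() - pair[1][1].mean()),
        )
        long_img, short_img = (a, b) if a.mean() >= b.mean() else (b, a)
        return fuse_hdr(long_img, short_img)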
In this embodiment of the present application, when the pixel layout of the image sensor is such that some pixels are PD pixels and the other pixels are imaging pixels, the L pixels and R pixels of the PD pixels can be exposed separately with different exposure parameters, and the imaging pixels exposed separately with yet another exposure parameter different from those of the PD pixels. Because the exposure parameters used for the imaging pixels and for the L and R pixels of the PD pixels all differ, the second image generated from the imaging pixels differs in luminance from the third image generated from the L pixels and the fourth image generated from the R pixels. Furthermore, to widen the luminance range, any two of the second image, the third image, the fourth image, and the sixth image can be selected and fused, where the sixth image is an image generated by image fusion of the third image and the fourth image. Since the four candidate images all differ in luminance, the target image generated from any two of them is an image with high luminance dynamics. The luminance range of the target image generated by such fusion is larger than that of the target image generated by the scheme in which all pixels of the image sensor are PD pixels, and larger than that obtained by the scheme in which some pixels are PD pixels but the left and right PD pixels share the same exposure parameter, further improving the luminance dynamic range of the target image.
It should be noted that the image processing method provided by the embodiments of the present application may be executed by an image processing apparatus, or by a control module of the image processing apparatus for executing the image processing method. In the embodiments of the present application, the image processing apparatus executing the image processing method is taken as an example to describe the image processing apparatus provided by the embodiments of the present application.
Referring to FIG. 12, a block diagram of an image processing apparatus according to an embodiment of the present application is shown. The image processing apparatus includes:
an exposure module 201, configured to perform a first exposure on first pixels in an image sensor using a first exposure parameter to generate a first image, and to perform a second exposure on second pixels in the image sensor using a second exposure parameter to generate a second image, wherein the value of the first exposure parameter differs from the value of the second exposure parameter;
a fusion module 202, configured to perform image fusion on the first image and the second image to generate a target picture;
wherein the first pixels differ from the second pixels;
wherein the first exposure and the second exposure correspond to the same frame rate.
Optionally, when every pixel in the image sensor is a phase PD pixel used for phase-detection autofocus, the first pixels include the left pixels of the PD pixels, and the second pixels include the right pixels of the PD pixels;
the exposure module 201 includes:
a first exposure submodule, configured to perform a first exposure on the left pixels of the image sensor using the first exposure parameter to generate a first image;
a second exposure submodule, configured to perform a second exposure on the right pixels of the image sensor using the second exposure parameter to generate a second image.
Optionally, when the image sensor includes PD pixels and imaging pixels, the first exposure parameter includes a third exposure parameter and a fourth exposure parameter, the value of the third exposure parameter being the same as or different from the value of the fourth exposure parameter, and the first image includes a third image and a fourth image;
the exposure module 201 includes:
a third exposure submodule, configured to perform a third exposure on the left pixels of the PD pixels of the image sensor using the third exposure parameter to generate a third image;
a fourth exposure submodule, configured to perform a fourth exposure on the right pixels of the PD pixels of the image sensor using the fourth exposure parameter to generate a fourth image;
a fifth exposure submodule, configured to perform a second exposure on the imaging pixels of the image sensor using the second exposure parameter to generate a second image;
wherein the third exposure, the fourth exposure, and the second exposure correspond to the same frame rate;
the fusion module 202 includes:
a fusion submodule, configured to perform image fusion on the third image, the fourth image, and the second image to generate a target picture.
Optionally, when the value of the third exposure parameter is the same as the value of the fourth exposure parameter, the fusion submodule includes:
a first fusion unit, configured to select any one or two of the third image, the fourth image, and a fifth image and fuse the selection with the second image to generate a target image, where the fifth image is an image generated by image fusion of the third image and the fourth image.
Optionally, when the value of the third exposure parameter differs from the value of the fourth exposure parameter, the fusion submodule includes:
a second fusion unit, configured to select any two or three of the third image, the fourth image, a sixth image, and the second image for image fusion to generate a target image, where the sixth image is an image generated by image fusion of the third image and the fourth image.
In this embodiment of the present application, a first exposure is performed on first pixels in an image sensor using a first exposure parameter to generate a first image, and a second exposure is performed on second pixels in the image sensor using a second exposure parameter to generate a second image, where the frame rate used for exposing the first pixels and the second pixels is the same, so that no frame-rate difference exists between the first image and the second image; they differ only in the values of the exposure parameters used. Different images with different exposure levels can thus be generated in a single-frame, multi-exposure manner. Whether or not the first image and the second image are exposed and output simultaneously, this solves the problem in the conventional art that multi-frame image fusion achieves only a relatively small gain in luminance dynamic range owing to frame-rate differences, and it narrows the time difference between the first image and the second image used for fusion. In addition, because the first image and the second image are exposed at the same frame rate, if they are exposed and output simultaneously, their different exposure parameters give them a large luminance difference, which improves the luminance dynamic range of the target image. Furthermore, the method operates with a single image sensor, so component differences among camera modules cannot cause image-quality differences among the differently exposed images; image quality is thus ensured, and neither power consumption nor cost increases. Finally, the method fuses the first image and the second image to generate the target image, so the target image preserves the original image information and more image detail, yielding better image quality than conventional single-frame image processing.
The image processing apparatus in the embodiments of the present application may be an apparatus, or a component, integrated circuit, or chip in a terminal. The apparatus may be a mobile electronic device or a non-mobile electronic device. For example, the mobile electronic device may be a mobile phone, a tablet computer, a laptop computer, a palmtop computer, an in-vehicle electronic device, a wearable device, an ultra-mobile personal computer (ultra-mobile personal computer, UMPC), a netbook, or a personal digital assistant (personal digital assistant, PDA), and the non-mobile electronic device may be a personal computer (personal computer, PC), a television (television, TV), a teller machine, or a self-service machine; the embodiments of the present application are not specifically limited in this regard.
The image processing apparatus in the embodiments of the present application may be an apparatus having an operating system. The operating system may be an Android (Android) operating system, an iOS operating system, or another possible operating system; the embodiments of the present application are not specifically limited in this regard.
The image processing apparatus provided by the embodiments of the present application can implement each process implemented by the above method embodiments; to avoid repetition, details are not repeated here.
Optionally, as shown in FIG. 13, an embodiment of the present application further provides an electronic device 2000, including a processor 2002, a memory 2001, and a program or instruction stored in the memory 2001 and executable on the processor 2002. When executed by the processor 2002, the program or instruction implements each process of the above image processing method embodiment and can achieve the same technical effects; to avoid repetition, details are not repeated here.
Note that the electronic devices in the embodiments of the present application include the mobile and non-mobile electronic devices described above.
FIG. 14 is a schematic diagram of the hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 1000 includes, but is not limited to: a radio frequency unit 1001, a network module 1002, an audio output unit 1003, an input unit 1004, a sensor 1005, a display unit 1006, a user input unit 1007, an interface unit 1008, a memory 1009, a processor 1010, and other components.
Those skilled in the art will understand that the electronic device 1000 may further include a power supply (such as a battery) powering the components; the power supply may be logically connected to the processor 1010 through a power management system, which then manages charging, discharging, power consumption, and other functions. The electronic device structure shown in FIG. 14 does not constitute a limitation on the electronic device, which may include more or fewer components than shown, combine certain components, or arrange the components differently; details are not repeated here.
The sensor 1005 may be an image sensor.
The processor 1010 is configured to perform a first exposure on first pixels in the image sensor using a first exposure parameter to generate a first image; to perform a second exposure on second pixels in the image sensor using a second exposure parameter to generate a second image, wherein the value of the first exposure parameter differs from the value of the second exposure parameter; and to perform image fusion on the first image and the second image to generate a target picture;
wherein the first pixels differ from the second pixels;
wherein the first exposure and the second exposure correspond to the same frame rate.
In this embodiment of the present application, a first exposure is performed on first pixels in an image sensor using a first exposure parameter to generate a first image, and a second exposure is performed on second pixels in the image sensor using a second exposure parameter to generate a second image, where the frame rate used for exposing the first pixels and the second pixels is the same, so that no frame-rate difference exists between the first image and the second image; they differ only in the values of the exposure parameters used. Different images with different exposure levels can thus be generated in a single-frame, multi-exposure manner. Whether or not the first image and the second image are exposed and output simultaneously, this solves the problem in the conventional art that multi-frame image fusion achieves only a relatively small gain in luminance dynamic range owing to frame-rate differences, and it narrows the time difference between the first image and the second image used for fusion. In addition, because the first image and the second image are exposed at the same frame rate, if they are exposed and output simultaneously, their different exposure parameters give them a large luminance difference, which improves the luminance dynamic range of the target image. Furthermore, the method operates with a single image sensor, so component differences among camera modules cannot cause image-quality differences among the differently exposed images; image quality is thus ensured, and neither power consumption nor cost increases. Finally, the method fuses the first image and the second image to generate the target image, so the target image preserves the original image information and more image detail, yielding better image quality than conventional single-frame image processing.
Optionally, when every pixel in the image sensor is a PD pixel, the first pixels include the left pixels of the PD pixels, and the second pixels include the right pixels of the PD pixels;
the processor 1010 is configured to perform a first exposure on the left pixels of the image sensor using the first exposure parameter to generate a first image, and to perform a second exposure on the right pixels of the image sensor using the second exposure parameter to generate a second image.
In this embodiment of the present application, when every pixel of the image sensor is a PD pixel, the pixel positions of the L pixels among the PD pixels can be exposed with the first exposure parameter and the pixel positions of the R pixels among the PD pixels with the second exposure parameter. The exposure durations of the two exposure parameters differ, so two frames with very different luminance can be generated, and the target image generated by fusing the two frames can have a large luminance dynamic range.
Optionally, when the image sensor includes PD pixels and imaging pixels, the first exposure parameter includes a third exposure parameter and a fourth exposure parameter, the value of the third exposure parameter being the same as or different from the value of the fourth exposure parameter, and the first image includes a third image and a fourth image;
the processor 1010 is configured to perform a third exposure on the left pixels of the PD pixels of the image sensor using the third exposure parameter to generate a third image; to perform a fourth exposure on the right pixels of the PD pixels of the image sensor using the fourth exposure parameter to generate a fourth image; to perform a second exposure on the imaging pixels of the image sensor using the second exposure parameter to generate a second image; and to perform image fusion on the third image, the fourth image, and the second image to generate a target picture;
wherein the third exposure, the fourth exposure, and the second exposure correspond to the same frame rate.
In this embodiment of the present application, when the image sensor includes PD pixels and imaging pixels, the L pixels and R pixels of the PD pixels can each be exposed separately, and the imaging pixels exposed separately as well. Moreover, the exposure parameters used for the imaging pixels and for the PD pixels differ, so the second image generated from the imaging pixels differs in luminance from the third image generated from the L pixels of the PD pixels and the fourth image generated from the R pixels of the PD pixels. The target image generated by fusing the third image, the fourth image, and the second image can then include the luminance of the exposure parameter of the imaging pixels as well as the luminance of the exposure parameters of the PD pixels, improving the luminance dynamic range of the target image.
Optionally, the processor 1010 is configured to, when the value of the third exposure parameter is the same as the value of the fourth exposure parameter, select any one or two of the third image, the fourth image, and a fifth image and fuse the selection with the second image to generate a target image, where the fifth image is an image generated by image fusion of the third image and the fourth image.
In this embodiment of the present application, when the pixel layout of the image sensor is such that some pixels are PD pixels and the other pixels are imaging pixels, the L pixels and R pixels of the PD pixels can be exposed separately with the same exposure parameter, and the imaging pixels exposed separately with an exposure parameter different from that of the PD pixels. Because the exposure parameters used for the imaging pixels and the PD pixels differ, the second image generated from the imaging pixels differs in luminance from the third image generated from the L pixels of the PD pixels and the fourth image generated from the R pixels of the PD pixels. When fusing the third image, the fourth image, and the second image, the third and fourth images can first be fused into a fifth image whose luminance differs from that of the third and fourth images; then any one or two of the third, fourth, and fifth images can be selected and fused with the second image from the imaging pixels. The luminance range of the target image generated by such fusion is larger than that of the target image generated by the scheme in which all pixels of the image sensor are PD pixels, further improving the luminance dynamic range of the target image.
Optionally, the processor 1010 is configured to, when the value of the third exposure parameter differs from the value of the fourth exposure parameter, select any two or three of the third image, the fourth image, a sixth image, and the second image for image fusion to generate a target image, where the sixth image is an image generated by image fusion of the third image and the fourth image.
In this embodiment of the present application, when the pixel layout of the image sensor is such that some pixels are PD pixels and the other pixels are imaging pixels, the L pixels and R pixels of the PD pixels can be exposed separately with different exposure parameters, and the imaging pixels exposed separately with yet another exposure parameter different from those of the PD pixels. Because the exposure parameters used for the imaging pixels and for the L and R pixels of the PD pixels all differ, the second image generated from the imaging pixels differs in luminance from the third image generated from the L pixels and the fourth image generated from the R pixels. Furthermore, to widen the luminance range, any two of the second image, the third image, the fourth image, and the sixth image can be selected and fused, where the sixth image is an image generated by image fusion of the third image and the fourth image. Since the four candidate images all differ in luminance, the target image generated from any two of them is an image with high luminance dynamics. The luminance range of the target image generated by such fusion is larger than that of the target image generated by the scheme in which all pixels of the image sensor are PD pixels, and larger than that obtained by the scheme in which some pixels are PD pixels but the left and right PD pixels share the same exposure parameter, further improving the luminance dynamic range of the target image.
It should be understood that, in the embodiments of the present application, the input unit 1004 may include a graphics processing unit (Graphics Processing Unit, GPU) 10041 and a microphone 10042; the graphics processor 10041 processes image data of still pictures or video obtained by an image capture device (such as a camera) in video capture mode or image capture mode. The display unit 1006 may include a display panel 10061, which may be configured in the form of a liquid crystal display, an organic light-emitting diode display, or the like. The user input unit 1007 includes a touch panel 10071 and other input devices 10072. The touch panel 10071 is also called a touchscreen. The touch panel 10071 may include two parts: a touch detection device and a touch controller. The other input devices 10072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick; details are not repeated here. The memory 1009 may be used to store software programs and various data, including but not limited to application programs and an operating system. The processor 1010 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interface, application programs, and the like, and the modem processor mainly handles wireless communication. It can be understood that the modem processor may also not be integrated into the processor 1010.
An embodiment of the present application further provides a readable storage medium storing a program or instruction which, when executed by a processor, implements each process of the above image processing method embodiment and can achieve the same technical effects; to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiments. The readable storage medium includes a computer-readable storage medium, such as a computer read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk, or an optical disc.
An embodiment of the present application further provides a chip, including a processor and a communication interface, wherein the communication interface is coupled to the processor, and the processor is configured to run a program or instruction to implement each process of the above image processing method embodiment and achieve the same technical effects; to avoid repetition, details are not repeated here.
It should be understood that the chip mentioned in the embodiments of the present application may also be called a system-level chip, a system chip, a chip system, or a system-on-chip chip.
It should be noted that, as used herein, the terms "comprise", "include", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus comprising a set of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not preclude the presence of additional identical elements in the process, method, article, or apparatus comprising that element. Furthermore, it should be pointed out that the scope of the methods and apparatuses in the embodiments of the present application is not limited to performing functions in the order shown or discussed; it may also include performing functions in a substantially simultaneous manner or in the reverse order, depending on the functions involved. For example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
From the description of the above embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, or of course by hardware, although in many cases the former is the better implementation. Based on this understanding, the technical solution of the present application — in essence, or the part contributing to the prior art — can be embodied in the form of a computer software product stored in a storage medium (such as a ROM/RAM, magnetic disk, or optical disc), including several instructions that cause a terminal (which may be a mobile phone, computer, server, network device, or the like) to execute the methods described in the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the specific implementations described above, which are merely illustrative rather than restrictive. Inspired by the present application, and without departing from the spirit of the present application and the scope protected by the claims, a person of ordinary skill in the art may devise many further forms, all of which fall within the protection of the present application.

Claims (15)

  1. An image processing method, wherein the method comprises:
    performing a first exposure on first pixels in an image sensor using a first exposure parameter to generate a first image;
    performing a second exposure on second pixels in the image sensor using a second exposure parameter to generate a second image, wherein the value of the first exposure parameter differs from the value of the second exposure parameter;
    performing image fusion on the first image and the second image to generate a target picture;
    wherein the first pixels differ from the second pixels;
    wherein the first exposure and the second exposure correspond to the same frame rate.
  2. The method according to claim 1, wherein, when every pixel in the image sensor is a phase PD pixel used for phase-detection autofocus, the first pixels comprise left pixels of the PD pixels, and the second pixels comprise right pixels of the PD pixels;
    the performing a first exposure on first pixels in an image sensor using a first exposure parameter to generate a first image comprises:
    performing a first exposure on the left pixels of the image sensor using the first exposure parameter to generate a first image;
    the performing a second exposure on second pixels in the image sensor using a second exposure parameter to generate a second image comprises:
    performing a second exposure on the right pixels of the image sensor using the second exposure parameter to generate a second image.
  3. The method according to claim 1, wherein, when the image sensor comprises PD pixels and imaging pixels, the first exposure parameter comprises a third exposure parameter and a fourth exposure parameter, the value of the third exposure parameter being the same as or different from the value of the fourth exposure parameter, and the first image comprises a third image and a fourth image;
    the performing a first exposure on first pixels in the image sensor using the first exposure parameter to generate a first image comprises:
    performing a third exposure on left pixels of the PD pixels of the image sensor using the third exposure parameter to generate a third image;
    performing a fourth exposure on right pixels of the PD pixels of the image sensor using the fourth exposure parameter to generate a fourth image;
    the performing a second exposure on second pixels in the image sensor using a second exposure parameter to generate a second image comprises:
    performing a second exposure on the imaging pixels of the image sensor using the second exposure parameter to generate a second image;
    wherein the third exposure, the fourth exposure, and the second exposure correspond to the same frame rate;
    the performing image fusion on the first image and the second image to generate a target picture comprises:
    performing image fusion on the third image, the fourth image, and the second image to generate a target picture.
  4. The method according to claim 3, wherein, when the value of the third exposure parameter is the same as the value of the fourth exposure parameter, the performing image fusion on the third image, the fourth image, and the second image to generate a target picture comprises:
    selecting any one or two of the third image, the fourth image, and a fifth image and fusing the selection with the second image to generate a target image, wherein the fifth image is an image generated by image fusion of the third image and the fourth image.
  5. The method according to claim 3, wherein, when the value of the third exposure parameter differs from the value of the fourth exposure parameter, the performing image fusion on the third image, the fourth image, and the second image to generate a target picture comprises:
    selecting any two or three of the third image, the fourth image, a sixth image, and the second image for image fusion to generate a target image, wherein the sixth image is an image generated by image fusion of the third image and the fourth image.
  6. An image processing apparatus, wherein the apparatus comprises:
    an exposure module, configured to perform a first exposure on first pixels in an image sensor using a first exposure parameter to generate a first image, and to perform a second exposure on second pixels in the image sensor using a second exposure parameter to generate a second image, wherein the value of the first exposure parameter differs from the value of the second exposure parameter;
    a fusion module, configured to perform image fusion on the first image and the second image to generate a target picture;
    wherein the first pixels differ from the second pixels;
    wherein the first exposure and the second exposure correspond to the same frame rate.
  7. The apparatus according to claim 6, wherein, when every pixel in the image sensor is a phase PD pixel used for phase-detection autofocus, the first pixels comprise left pixels of the PD pixels, and the second pixels comprise right pixels of the PD pixels;
    the exposure module comprises:
    a first exposure submodule, configured to perform a first exposure on the left pixels of the image sensor using the first exposure parameter to generate a first image;
    a second exposure submodule, configured to perform a second exposure on the right pixels of the image sensor using the second exposure parameter to generate a second image.
  8. The apparatus according to claim 6, wherein, when the image sensor comprises PD pixels and imaging pixels, the first exposure parameter comprises a third exposure parameter and a fourth exposure parameter, the value of the third exposure parameter being the same as or different from the value of the fourth exposure parameter, and the first image comprises a third image and a fourth image;
    the exposure module comprises:
    a third exposure submodule, configured to perform a third exposure on left pixels of the PD pixels of the image sensor using the third exposure parameter to generate a third image;
    a fourth exposure submodule, configured to perform a fourth exposure on right pixels of the PD pixels of the image sensor using the fourth exposure parameter to generate a fourth image;
    a fifth exposure submodule, configured to perform a second exposure on the imaging pixels of the image sensor using the second exposure parameter to generate a second image;
    wherein the third exposure, the fourth exposure, and the second exposure correspond to the same frame rate;
    the fusion module comprises:
    a fusion submodule, configured to perform image fusion on the third image, the fourth image, and the second image to generate a target picture.
  9. The apparatus according to claim 8, wherein, when the value of the third exposure parameter is the same as the value of the fourth exposure parameter, the fusion submodule comprises:
    a first fusion unit, configured to select any one or two of the third image, the fourth image, and a fifth image and fuse the selection with the second image to generate a target image, wherein the fifth image is an image generated by image fusion of the third image and the fourth image.
  10. The apparatus according to claim 8, wherein, when the value of the third exposure parameter differs from the value of the fourth exposure parameter, the fusion submodule comprises:
    a second fusion unit, configured to select any two or three of the third image, the fourth image, a sixth image, and the second image for image fusion to generate a target image, wherein the sixth image is an image generated by image fusion of the third image and the fourth image.
  11. An electronic device, comprising a processor, a memory, and a program or instruction stored in the memory and executable on the processor, wherein the program or instruction, when executed by the processor, implements the steps of the image processing method according to any one of claims 1 to 5.
  12. A readable storage medium, wherein the readable storage medium stores a program or instruction which, when executed by a processor, implements the steps of the image processing method according to any one of claims 1 to 5.
  13. A chip, wherein the chip comprises a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run a program or instruction to implement the image processing method according to any one of claims 1 to 5.
  14. A computer program product, wherein the program product is stored in a non-volatile storage medium, and the program product is executed by at least one processor to implement the image processing method according to any one of claims 1 to 5.
  15. An image processing apparatus, wherein the apparatus is configured to execute the image processing method according to any one of claims 1 to 5.
PCT/CN2022/112986 2021-08-19 2022-08-17 Image processing method and apparatus, electronic device, and readable storage medium WO2023020532A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110953654.6A CN113676674B (zh) 2021-08-19 2021-08-19 Image processing method and apparatus, electronic device, and readable storage medium
CN202110953654.6 2021-08-19

Publications (1)

Publication Number Publication Date
WO2023020532A1 true WO2023020532A1 (zh) 2023-02-23

Family

ID=78543893

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/112986 WO2023020532A1 (zh) 2021-08-19 2022-08-17 图像处理方法、装置、电子设备及可读存储介质

Country Status (2)

Country Link
CN (1) CN113676674B (zh)
WO (1) WO2023020532A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113676674B (zh) 2021-08-19 2023-06-27 Vivo Mobile Communication (Hangzhou) Co., Ltd. Image processing method and apparatus, electronic device, and readable storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107409180A (zh) * 2015-03-09 2017-11-28 Samsung Electronics Co., Ltd. Electronic device having camera module, and image processing method for electronic device
JP2018019296A (ja) * 2016-07-28 2018-02-01 Canon Inc. Imaging apparatus and control method therefor
US20200280659A1 (en) * 2019-02-28 2020-09-03 Qualcomm Incorporated Quad color filter array camera sensor configurations
WO2020262193A1 (ja) * 2019-06-25 2020-12-30 Sony Semiconductor Solutions Corporation Solid-state imaging device and electronic apparatus
CN113676674A (zh) * 2021-08-19 2021-11-19 Vivo Mobile Communication (Hangzhou) Co., Ltd. Image processing method and apparatus, electronic device, and readable storage medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2014077065A1 (ja) * 2012-11-14 2014-05-22 FUJIFILM Corporation Image processing device, imaging device, image processing method, and image processing program
CN110278375B (zh) * 2019-06-28 2021-06-15 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Image processing method and device, storage medium, and electronic device

Also Published As

Publication number Publication date
CN113676674B (zh) 2023-06-27
CN113676674A (zh) 2021-11-19

Legal Events

Date Code Title Description
NENP Non-entry into the national phase
Ref country code: DE