CN115018743A - On-chip partition exposure image fusion method, imaging device and computer storage medium - Google Patents

On-chip partition exposure image fusion method, imaging device and computer storage medium

Info

Publication number
CN115018743A
CN115018743A (application number CN202110247361.6A)
Authority
CN
China
Prior art keywords: brightness, image, pixel, value, adjusted
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110247361.6A
Other languages
Chinese (zh)
Inventor
夏志伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SmartSens Technology Shanghai Co Ltd
Original Assignee
SmartSens Technology Shanghai Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SmartSens Technology Shanghai Co Ltd
Priority to CN202110247361.6A
Publication of CN115018743A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06T5/90
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G06T2207/10024 Color image
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging

Abstract

The invention belongs to the technical field of image sensors and relates to an on-chip partition exposure image fusion method in which brightness adjustment is performed on an original image to be adjusted; pixel value mapping is performed on the brightness-adjusted image; the color saturation of the original image to be adjusted is maintained; and transition processing is performed on the joining regions of adjacent exposure partitions, thereby achieving fusion of the partition exposure images. The invention completes the fusion of partition exposure images automatically, without manual intervention; it is applicable to both grayscale and color images, and to partition exposure fusion of continuous video images. The invention also provides an imaging device and a computer storage medium.

Description

On-chip partition exposure image fusion method, imaging device and computer storage medium
Technical Field
The invention relates to the technical field of image sensors, and in particular to an on-chip partition exposure image fusion method, an imaging device and a computer storage medium.
Background
Image sensors are widely used in video surveillance and related fields. In some traffic-monitoring applications, a traffic signal lamp, once lit, is markedly brighter than its surroundings. When an image sensor shoots scenes containing traffic lights, a weakly lit red light is prone to color cast on cloudy days (the color channels other than red become over-exposed), and the traffic light tends to be over-exposed in night shots. The cause of this problem is that current image sensors are usually linear in design: a linear image sensor covers only a small illumination range and cannot capture all signals from a low-illumination environment up to a strong-light environment, so its output dynamic range cannot cover the brightness of the traffic light and of the surrounding environment at the same time. To solve this problem, the dynamic range of the image output by the image sensor must be increased to meet the application requirements of different scenes.
One way to increase the output dynamic range of an image sensor is to output two frames and synthesize them. In such a design the two frames have different exposure times: the long-exposure frame captures clear image detail in low-illumination regions of the scene, while the short-exposure frame captures detail in high-illumination regions. Combining the two frames yields a clear image with detail in both the low-illumination and high-illumination parts of the scene. However, two-frame synthesis requires reading and storing the first frame, then reading the second frame before merging, so in practical applications it suffers from motion blur.
Another image sensor solution for this scene applies different exposure times to different areas within a single frame, i.e., the exposure time is controlled so that one output frame contains different exposures, thereby avoiding over-exposure in the image. However, existing image fusion methods fuse complete scene images taken at different exposures, that is, multi-frame exposure images; under partition exposure, each differently exposed region available for fusion contains only part of the scene, so existing image fusion methods cannot meet the requirements of partition exposure image fusion.
Disclosure of Invention
Aiming at the defects of the prior art, the present invention provides an on-chip partition exposure image fusion method, an imaging device and a computer storage medium to realize the fusion of partition exposure images.
The invention provides an on-chip partition exposure image fusion method, which comprises the following steps:
adjusting the brightness of an original image to be adjusted: dividing the original image to be adjusted into a plurality of exposure partitions, designating one exposure partition as the reference partition for the first adjustment, and adjusting the brightness of adjacent exposure partitions against the reference partition so that the brightness of the different exposure partitions becomes consistent, obtaining a brightness-adjusted image;
performing pixel value mapping on each partition of the brightness-adjusted image: according to the pixel values of the pixel to be mapped and of the other pixels in a set neighborhood around it, mapping the pixel value of each pixel of the brightness-adjusted image into the range [0, 2^n - 1], obtaining a pixel value mapping image of each partition; and
performing joining-region transition processing on adjacent exposure partitions: setting, in two adjacent exposure partitions, a joining transition region centered on the joining boundary, and within it adjusting the pixel value of a pixel on one side of the joining boundary to the weighted sum of its original pixel value and the pixel value of the boundary pixel adjacent to the joining boundary on the other side, completing the fusion of the partition exposure images.
The invention also provides an imaging device comprising a processor and a memory, the memory storing at least one instruction and the processor being configured to read the at least one instruction and execute the above method.
The present invention also provides a computer storage medium having at least one instruction stored therein, the at least one instruction being loaded and executed by a processor to implement the above method.
The invention discloses an on-chip partition exposure image fusion method, an imaging device and a computer storage medium: brightness adjustment is performed on an original image to be adjusted; pixel value mapping is performed on the brightness-adjusted image; the color saturation of the original image to be adjusted is maintained; and transition processing is performed on the joining regions of adjacent exposure partitions, thereby achieving fusion of the partition exposure images. The invention completes the fusion of partition exposure images automatically, without manual intervention, and is applicable to grayscale images, color images, and partition exposure fusion of continuous video images.
In order to make the aforementioned and other objects, features and advantages of the invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
FIG. 1 is a flowchart of the steps of an on-chip partition exposure image fusion method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of the images corresponding to each step of an on-chip partition exposure image fusion method according to an embodiment of the invention;
FIG. 3 is a flowchart of the steps of a brightness adjustment method according to an embodiment of the invention;
FIG. 4 is a flowchart of the steps of a pixel value mapping method according to an embodiment of the invention;
FIG. 5 is a flowchart of the steps of a color saturation maintenance method according to an embodiment of the present invention;
FIG. 6 is a flowchart of the steps of a joining-region transition processing method according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of boundary pixels in a joining-region transition processing method according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail below with reference to the accompanying drawings and embodiments. It is to be understood that the described embodiments are merely a few embodiments of the invention, and not all embodiments. All other embodiments, which can be obtained by a person skilled in the art without making any creative effort based on the embodiments in the present invention, belong to the protection scope of the present invention.
As shown in FIG. 1 and FIG. 2, the on-chip partition exposure image fusion method according to the present invention includes the following steps.
Step S100: perform brightness adjustment on the original image Src to be adjusted: divide Src into a plurality of exposure partitions, designate one exposure partition as the reference partition for the first adjustment, and adjust the brightness of adjacent exposure partitions against the reference partition so that the brightness of the different exposure partitions becomes consistent, obtaining the brightness-adjusted image Src_adj.
Step S200: perform pixel value mapping on each partition of the brightness-adjusted image Src_adj: according to the pixel values of the pixel to be mapped and of the other pixels in a set neighborhood around it, map the pixel value of each pixel of Src_adj into the range [0, 2^n - 1], obtaining the pixel value mapping image Src_mapping of each partition. Here n is the bit width of the original image Src to be adjusted. In the embodiment of the present invention n is 8, so the pixel value mapping range is [0, 255]. Those skilled in the art will appreciate that in other embodiments n may take other bit widths, such as 9, 10 or 12.
Step S300: maintain the color saturation of the original image Src to be adjusted: assign the values of highlight pixels whose brightness exceeds a set brightness threshold to the corresponding pixels in the pixel value mapping image Src_mapping, then weight the edge pixels of the assigned highlight pixels in Src_mapping with the values of the pixels in a set neighborhood to complete the transition processing, obtaining the color-saturation-maintained image Src_ret.
Step S400: perform joining-region transition processing on adjacent exposure partitions: set, in two adjacent exposure partitions, a joining transition region centered on the joining boundary, and within it adjust the pixel value of a pixel on one side of the joining boundary to the weighted sum of its original pixel value and the pixel value of the boundary pixel adjacent to the joining boundary on the other side, completing the fusion of the partition exposure images.
In one embodiment, the color saturation maintenance of step S300 is optional: the color saturation maintenance processing may be skipped for the original image Src to be adjusted, proceeding directly to the joining-region transition processing of step S400.
As shown in FIG. 3, in one embodiment the brightness adjustment of step S100 takes the middle partition of the original image Src to be adjusted as the reference partition for the first brightness adjustment and specifically includes the following steps.
Step S101: select the middle partition of the Src image as the reference partition.
Step S102: judge whether Src is a grayscale image.
Step S103: if Src is a grayscale image, assign the Src image to the grayscale original image to be adjusted, Src_gray, and go to step S105; otherwise go to step S104.
Step S104: convert the Src image into the grayscale original image to be adjusted, Src_gray, using the following formula (1):
Gray = R*0.299 + G*0.587 + B*0.114    (1)
where R, G and B are the pixel values of the R channel, G channel and B channel of the Src image, respectively.
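As a concrete illustration, formula (1) can be sketched in a few lines (a minimal sketch, assuming the image is held as an H × W × 3 numpy array in OpenCV's B, G, R channel order; this layout is an assumption, not part of the patent):

import numpy as np

def to_gray(src):
    """Grayscale conversion per formula (1); assumes B, G, R channel order."""
    b = src[..., 0].astype(np.float32)
    g = src[..., 1].astype(np.float32)
    r = src[..., 2].astype(np.float32)
    return r * 0.299 + g * 0.587 + b * 0.114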
Step S105: count the average brightness value ave_ref of the reference partition of the Src_gray image and the average brightness value ave_adj of the adjacent partition to be adjusted (in this embodiment, over-exposed pixels with pixel values greater than 200 are not considered in either statistic).
In the embodiment of the present invention, the statistics are taken over the N rows nearest the joining boundary: ave_ref over the N rows on the reference partition side of the original image to be adjusted, and ave_adj over the N rows on the adjacent to-be-adjusted partition side.
When counting the average brightness value ave_adj of the adjacent partition to be adjusted, first compute the mean ave_adj_all of all pixels in the N rows; then consider only the pixels in the N rows whose values are smaller than min{(2^n - 1)*A1, ave_adj_all*A2}, where n is the bit width of the image currently being processed; finally compute the mean of those pixels to obtain ave_adj. Here min{(2^n - 1)*A1, ave_adj_all*A2} denotes the smaller of the two values. In one embodiment the setting parameter A1 is 0.9 and the setting parameter A2 is 1.5.
When counting the average brightness value ave_ref of the reference partition, first compute the mean ave_ref_all of all pixels in the N rows; then consider only the pixels in the N rows whose values are smaller than min{(2^n - 1)*B1, ave_ref_all*B2}, where n is the bit width of the image currently being processed; finally compute the mean of those pixels to obtain ave_ref. Here min{(2^n - 1)*B1, ave_ref_all*B2} denotes the smaller of the two values. In one embodiment the setting parameter B1 is 0.9 and the setting parameter B2 is 1.5.
The value range of N is [1, min{Height_region_adj, Height_region_ref}], where Height_region_adj is the height of the adjacent partition to be adjusted and Height_region_ref is the height of the reference partition. If the partition to be adjusted has highlight points near the joining boundary, N takes a larger value within this range, for example more than 20 rows, to weaken the influence of the highlight points on the average brightness; otherwise N may take a smaller value within this range, for example within 10 rows, to reduce the amount of computation.
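A minimal sketch of this statistic, assuming the N boundary rows are available as a numpy array (the function name and signature are illustrative; the same routine serves ave_ref with B1/B2 in place of A1/A2):

import numpy as np

def band_average(rows, n_bits, p1=0.9, p2=1.5):
    """Mean brightness of the N boundary rows, ignoring bright outliers.
    rows: N x W array of gray values; p1/p2 play the role of A1/A2 (or B1/B2)."""
    ave_all = rows.mean()
    limit = min((2 ** n_bits - 1) * p1, ave_all * p2)
    kept = rows[rows < limit]
    # Fall back to the plain mean if the clamp rejects every pixel
    return kept.mean() if kept.size else ave_all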
Step S106: judge whether the difference between ave_adj and ave_ref is smaller than a set first threshold. In one embodiment the first threshold is 0.5: if |ave_adj - ave_ref| < 0.5, the brightness adjustment of the partition to be adjusted is complete and the flow proceeds to step S108; otherwise it proceeds to step S107.
Step S107: otherwise, calculate the ratio rate of the average brightness values of the reference partition and the partition to be adjusted, i.e., rate = ave_ref/ave_adj. In one embodiment the exposure gain ratio between partitions is limited to no more than 16, i.e., rate ≤ 16; therefore, if the bit width of the reference partition is 8, the pixel bit width of the partition to be adjusted is extended by 4 bits, to 12 bits. Then multiply the pixel values of the partition to be adjusted of the Src image by the ratio and return to step S102.
Step S108: brightness adjustment of the partition adjacent to the reference partition of the Src image is complete; judge whether other unadjusted partitions remain.
Step S109: if brightness adjustment of the partition adjacent to the reference partition of the Src image is complete and other unadjusted partitions remain, take an already-adjusted partition of the Src image that has an adjacent unadjusted partition as the new reference partition, take the adjacent unadjusted partition as the partition to be adjusted, and return to step S102.
If the brightness adjustment of all partitions is complete, the brightness adjustment ends, and the brightness-adjusted image Src_adj with consistent brightness across partitions is obtained.
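One pass of steps S106–S107 can be sketched as follows, under the stated 16× gain limit (a non-authoritative sketch; the convergence loop of steps S102–S109 around it is omitted, and the function name is illustrative):

import numpy as np

def adjust_partition(part, ave_ref, ave_adj, first_threshold=0.5, max_rate=16.0):
    """One brightness-adjustment pass for a partition to be adjusted.
    Works in float so that a gain of up to 16x (4 extra bits over the
    original 8-bit data) does not clip before the final mapping."""
    if abs(ave_adj - ave_ref) < first_threshold:  # step S106: already consistent
        return part
    rate = min(ave_ref / max(ave_adj, 1e-6), max_rate)  # step S107: gain <= 16
    return part.astype(np.float32) * rate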
As shown in FIG. 4, in one embodiment the pixel value mapping of step S200 calculates a brightness threshold that divides the pixels of each partition of the brightness-adjusted image Src_adj into low-brightness, medium-brightness and high-brightness points; then, according to the pixel to be mapped in each partition of Src_adj and the pixel values in a set neighborhood (for example, 9 × 9), compresses the high dynamic range with a logarithmic equation and maps the pixel values into the range [0, 2^n - 1].
Step S201: perform grayscale conversion on each partition of the brightness-adjusted image Src_adj using formula (1) to obtain the grayscale brightness-adjusted image Src_adj_gray of each partition, which represents the brightness of Src_adj, and calculate the logarithmic average lg_ave of the pixel values of Src_adj_gray by the following formula (2):
lg_ave = (1/Num) * Σ lg(α + Src_adj_gray(x, y))    (2)
where the sum runs over all pixels (x, y), Num is the total number of pixels in the image Src_adj_gray, and Src_adj_gray(x, y) is the pixel value of Src_adj_gray at row x, column y. In one embodiment the setting parameter α is 0.0001.
Step S202: calculate the brightness threshold Key:
Key = (lg_ave - lg(grayMin)) / (lg(grayMax) - lg(grayMin))    (3)
where grayMax and grayMin are the maximum and minimum pixel values of the image Src_adj_gray, respectively.
Step S203: divide the pixels of the normalized Src_adj_gray image into low-brightness, medium-brightness and high-brightness points according to the brightness threshold Key:
L_t = L_max - [C1 + (1 - C1)*Key] * (L_max - L_min)    (4)
L_h = L_min + [C2 + (1 - C2)*(1 - Key)] * (L_max - L_min)    (5)
where L_max and L_min are the maximum and minimum values after normalization of the image Src_adj_gray. Among the normalized pixels, those with values less than L_t are low-brightness points, those with values greater than L_h are high-brightness points, and the rest are medium-brightness points. The value range of C1 is set to 0.5–1, and the value of C2 is set smaller than that of C1. In one embodiment C1 is 0.9 and C2 is 0.6.
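The following sketch computes lg_ave, Key and the two thresholds in one place. Formulas (2) and (3) were reproduced above from context (the standard log-average form); that reconstruction is an assumption and is flagged as such in the comments:

import numpy as np

ALPHA = 1e-4  # the setting parameter alpha of formula (2)

def brightness_thresholds(gray, c1=0.9, c2=0.6):
    """Log-average brightness, Key, and the low/high thresholds L_t, L_h.
    Formulas (2)-(3) follow the reconstruction given in the text, which is
    an assumption about equations that appear only as images in the record."""
    lg = np.log10(ALPHA + gray.astype(np.float64))  # alpha guards log(0)
    lg_ave = lg.mean()                               # formula (2), assumed form
    key = (lg_ave - lg.min()) / max(lg.max() - lg.min(), 1e-12)  # formula (3), assumed form
    norm = (gray - gray.min()) / max(float(gray.max() - gray.min()), 1e-6)
    l_max, l_min = norm.max(), norm.min()            # 1 and 0 after normalization
    l_t = l_max - (c1 + (1 - c1) * key) * (l_max - l_min)         # formula (4)
    l_h = l_min + (c2 + (1 - c2) * (1 - key)) * (l_max - l_min)   # formula (5)
    return norm, l_t, l_h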
Step S204: determine whether the image Src_adj is a grayscale image.
Step S205: if Src_adj is a grayscale image, normalize it to obtain the normalized brightness-adjusted image Src_adj_norm.
Step S206: if Src_adj is a color image, normalize its three color channels separately to obtain the normalized brightness-adjusted image Src_adj_norm.
Step S207: for each pixel of the normalized image Src_adj_norm, consider the pixels in a set neighborhood centered on it (for example, an m × m window) and calculate the ratios rate_l, rate_m and rate_h of low-, medium- and high-brightness points in the window.
Step S208: map the pixel values of the low-, medium- and high-brightness points using the following formula (6):
L_i = lg(1 + s_i * q_i * L_n / L_nmax) / lg(1 + s_i * k_i)    (6)
where the parameters s_i, q_i, k_i are positive values greater than 1, and i ∈ {l, m, h} corresponds to the low-, medium- and high-brightness points, respectively; the value of s_i increases from the low-brightness point to the high-brightness point, while the values of q_i and k_i decrease from the low-brightness point to the high-brightness point; L_n is the pixel value of the image Src_adj_norm, L_nmax is the maximum pixel value of Src_adj_norm, and L_i is the value after mapping. In one embodiment, s_i ranges over 2–15, q_i over 20–500, and k_i over 20–500. In one embodiment the values of s_i, q_i, k_i for the low-, medium- and high-brightness points are shown in Table 1.
Low: s_l = 2, q_l = 50, k_l = 50
Medium: s_m = 5, q_m = 45, k_m = 45
High: s_h = 5, q_h = 30, k_h = 30
Table 1
Step S209: for each pixel of the normalized image Src_adj_norm, use the three different groups of s_i, q_i, k_i values to calculate the mapped values L_l, L_m, L_h of the low-, medium- and high-brightness points, and then obtain the mapped value of the pixel of the image Src_adj as:
L = (L_l*rate_l + L_m*rate_m + L_h*rate_h) * (2^n - 1)    (7)
In one embodiment, the pixel value mapped image is denoted Src_mapping.
As shown in FIG. 5, in one embodiment the color saturation maintenance of step S300 assigns, in the original image Src to be adjusted, the values of highlight pixels whose brightness exceeds the set brightness threshold to the corresponding pixels in the pixel value mapping image Src_mapping, and weights the edge pixels of the assigned highlight pixels in Src_mapping with the values of the pixels in a set neighborhood to complete the transition processing.
Step S301: create a template image Mask and initialize its pixel values to 0. Traverse the Gray image; if a pixel value is greater than (2^n - 1)*D1 (for example, greater than 200 when n is 8 and the parameter D1 is set to 0.8), set the pixel at the corresponding position in the Mask image to 255, and set the pixel at the corresponding position in the Src_mapping image to the value of the pixel at the same position in the Src image. In this embodiment the bit width of the template image Mask is greater than or equal to 8.
Step S302: perform morphological dilation on the Mask image, then apply mean filtering over a set neighborhood (e.g., 11 × 11).
Step S303: traverse the Mask image; wherever the Mask pixel value is neither 0 nor 255, adjust the pixel value at the corresponding position in the Src_mapping image according to formula (8):
Src_mapping(x, y) = [Mask(x, y) * Src(x, y) + (255 - Mask(x, y)) * Src_mapping(x, y)] / 255    (8)
where (x, y) denotes the positions of pixels in the Mask image whose value is neither 0 nor 255.
In one embodiment, the image after the color saturation preserving process is denoted Src _ ret.
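A sketch of steps S301–S303 for a single-channel image follows (a minimal, non-authoritative sketch; for a color image the blend of formula (8) would be applied per channel, and the 3 × 3 dilation kernel is an assumption, since the patent does not specify the structuring element):

import cv2
import numpy as np

def maintain_saturation(src, src_mapping, gray, n_bits=8, d1=0.8, win=11):
    """Copy highlight pixels back from Src (step S301), dilate and
    mean-filter the mask (step S302), then blend the mask edges per the
    reconstruction of formula (8) (step S303)."""
    mask = np.zeros(gray.shape, np.float32)
    mask[gray > (2 ** n_bits - 1) * d1] = 255.0          # highlight template
    out = src_mapping.astype(np.float32).copy()
    out[mask == 255] = src[mask == 255]                  # step S301 assignment
    mask = cv2.dilate(mask, np.ones((3, 3), np.uint8))   # morphological dilation
    mask = cv2.blur(mask, (win, win))                    # 11 x 11 mean filter
    edge = (mask > 0) & (mask < 255)                     # transition band only
    w = mask[edge] / 255.0
    out[edge] = w * src[edge].astype(np.float32) + (1.0 - w) * out[edge]  # formula (8)
    return out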
As shown in FIGS. 6 and 7, in one embodiment the joining-region transition processing of step S400 specifically includes the following steps.
Step S401: let the first boundary pixel (x1, y1) and the second boundary pixel (x2, y2) be boundary pixels on the two sides of the joining boundary between the adjacent first and second exposure partitions, respectively.
Step S402: on the (x1, y1) side, within the set joining transition region, take M pixels (for example, 20) along the direction radial to the joining boundary, {(x_t1_i, y_t1_i), i = 1, 2, …, M}, and process the pixel at the corresponding position in the Src_ret image according to formula (9):
Src_ret(x_t1_i, y_t1_i) = (Dis1/M) * Src_ret(x_t1_i, y_t1_i) + ((M - Dis1)/M) * Src_ret(x2, y2)    (9)
where Dis1 is the pixel distance from the currently processed pixel to (x2, y2), and Src_ret(x2, y2) is the pixel value of the second boundary pixel (x2, y2) in the color-saturation-maintained image.
Step S403: on the (x2, y2) side, within the set joining transition region, take M pixels (for example, 20) along the direction radial to the joining boundary, {(x_t2_i, y_t2_i), i = 1, 2, …, M}, and process the pixel at the corresponding position in the Src_ret image according to formula (10):
Src_ret(x_t2_i, y_t2_i) = (Dis2/M) * Src_ret(x_t2_i, y_t2_i) + ((M - Dis2)/M) * Src_ret(x1, y1)    (10)
where Dis2 is the pixel distance from the currently processed pixel to (x1, y1), and Src_ret(x1, y1) is the pixel value of the first boundary pixel (x1, y1) in the color-saturation-maintained image.
Step S404: perform Gaussian filtering on the joining transition region.
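A sketch of steps S401–S404 for the simplest geometry, a horizontal joining boundary between image row row_b - 1 (first partition) and row row_b (second partition); the weighting follows the reconstruction of formulas (9) and (10) given above, which is an assumption, as is the 5 × 5 Gaussian kernel:

import cv2
import numpy as np

def blend_joining_region(src_ret, row_b, m=20):
    """Distance-weighted cross-boundary blend (steps S402-S403), then
    Gaussian filtering of the transition band (step S404).
    Assumes at least m rows exist on each side of the boundary."""
    out = src_ret.astype(np.float32).copy()
    top_edge = out[row_b - 1].copy()  # boundary pixels of the first partition
    bot_edge = out[row_b].copy()      # boundary pixels of the second partition
    for i in range(1, m + 1):
        w = i / m                     # weight of the pixel's original value
        out[row_b - 1 - i] = w * out[row_b - 1 - i] + (1 - w) * bot_edge  # formula (9)
        out[row_b + i] = w * out[row_b + i] + (1 - w) * top_edge          # formula (10)
    band = slice(max(row_b - m - 1, 0), row_b + m + 1)
    out[band] = cv2.GaussianBlur(out[band], (5, 5), 0)  # step S404
    return out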
The invention discloses an on-chip partition exposure image fusion method in which brightness adjustment is performed on an original image to be adjusted; pixel value mapping is performed on the brightness-adjusted image; the color saturation of the original image to be adjusted is maintained; and transition processing is performed on the joining regions of adjacent exposure partitions, thereby achieving fusion of the partition exposure images. The invention completes the fusion of partition exposure images automatically, without manual intervention, and is applicable to grayscale images, color images, and partition exposure fusion of continuous video images.
In some embodiments, an imaging apparatus is also provided, comprising a processor and a memory, the memory storing a plurality of instructions and the processor being configured to read the instructions and execute the on-chip partition exposure image fusion method described above, including, for example: adjusting the brightness of an original image to be adjusted; performing pixel value mapping on the brightness-adjusted image; maintaining the color saturation of the original image to be adjusted; and performing transition processing on the joining regions of adjacent exposure partitions.
In some embodiments, a computer readable storage medium is also provided, storing a plurality of instructions readable and executable by a processor to perform the on-chip partition exposure image fusion method described above, including, for example: adjusting the brightness of an original image to be adjusted; performing pixel value mapping on the brightness-adjusted image; maintaining the color saturation of the original image to be adjusted; and performing transition processing on the joining regions of adjacent exposure partitions.
For the sake of brevity, not all possible combinations of the technical features of the above embodiments are described; nevertheless, as long as there is no contradiction between them, any combination of these technical features should be considered within the scope of the present disclosure.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article or apparatus. Without further limitation, the recitation of an element by the phrase "comprising a(n) …" does not exclude the presence of additional like elements in the process, method, article or apparatus that comprises the element. Further, where similarly named elements, features or steps appear in different embodiments of the invention, they may have the same meaning or different meanings; the particular meaning is determined by its interpretation in, or the context of, the specific embodiment.
It should be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited by these terms; the terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information and, similarly, second information may also be referred to as first information, without departing from the scope herein. The word "if" as used herein may be interpreted as "when", "upon" or "in response to a determination", depending on the context. Also, as used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising", when used in this specification, specify the presence of stated features, steps, operations, elements, components, items, species and/or groups, but do not preclude the presence or addition of one or more other features, steps, operations, elements, components, items, species and/or groups thereof. The terms "or" and "and/or" as used herein are to be construed as inclusive, meaning any one or any combination; thus, "A, B or C" or "A, B and/or C" means any of the following: A; B; C; A and B; A and C; B and C; A, B and C. An exception to this definition occurs only when a combination of elements, functions, steps or operations is inherently mutually exclusive in some way.
The present invention is not limited to the above preferred embodiments; any modifications, equivalents or improvements made within the spirit and principles of the present invention shall be included in the protection scope of the present invention.

Claims (22)

1. An on-chip partition exposure image fusion method is characterized by comprising the following steps:
adjusting the brightness of an original image to be adjusted: dividing the original image to be adjusted into a plurality of exposure partitions, designating one exposure partition as the reference partition for the first adjustment, and adjusting the brightness of adjacent exposure partitions against the reference partition so that the brightness of the different exposure partitions becomes consistent, obtaining a brightness-adjusted image;
performing pixel value mapping on each partition of the brightness-adjusted image: according to the pixel values of the pixel to be mapped and of the other pixels in a set neighborhood, mapping the pixel value of each pixel of the brightness-adjusted image into the range [0, 2^n - 1], obtaining the pixel value mapping image of each partition; and
performing joining-region transition processing on adjacent exposure partitions: setting, in two adjacent exposure partitions, a joining transition region centered on the joining boundary, and within it adjusting the pixel value of a pixel on one side of the joining boundary to the weighted sum of its original pixel value and the pixel value of the boundary pixel adjacent to the joining boundary on the other side, completing the fusion of the partition exposure images.
2. The on-chip partition exposure image fusion method according to claim 1, wherein the brightness adjustment comprises the step of:
taking the middle partition of the original image to be adjusted as the reference partition for brightness adjustment.
3. The on-chip partition exposure image fusion method according to claim 1, wherein the brightness adjustment comprises the step of:
if the original image to be adjusted is a color image, converting it into a grayscale original image to be adjusted.
4. The on-chip partition exposure image fusion method according to claim 3, wherein the conversion into the grayscale original image to be adjusted follows the formula:
Gray = R*0.299 + G*0.587 + B*0.114
where R, G and B are the pixel values of the red, green and blue pixels of the original image to be adjusted, respectively.
5. The on-chip partition exposure image fusion method according to claim 1, wherein the brightness adjustment further comprises the steps of:
counting the average brightness value of the partition to be adjusted and the average brightness value of the reference partition, and calculating the ratio of the average brightness value of the reference partition to the average brightness value of the partition to be adjusted;
judging the difference between the average brightness value of the partition to be adjusted and the average brightness value of the reference partition; and
if the absolute value of the difference between the average brightness value of the partition to be adjusted and the average brightness value of the reference partition is smaller than a set first threshold, ending the brightness adjustment of the partition to be adjusted.
6. The on-chip partition exposure image fusion method according to claim 5, wherein the brightness adjustment comprises the steps of:
if the difference between the average brightness value of the partition to be adjusted and the average brightness value of the reference partition is larger than the first threshold, calculating the ratio of the average brightness values of the reference partition and the partition to be adjusted; and
multiplying the pixel values of the partition to be adjusted by the ratio.
7. The on-chip partition exposure image fusion method according to claim 6, wherein "counting the average brightness values of the partition to be adjusted and the reference partition" means:
counting the average brightness value (ave_ref) over the N rows near the joining boundary on the reference partition side of the original image to be adjusted, and counting the average brightness value (ave_adj) over the N rows near the joining boundary on the adjacent to-be-adjusted partition side of the original image to be adjusted.
8. The on-chip partition exposure image fusion method of claim 7, wherein counting the average brightness value (ave_adj) over the N rows near the joining boundary on the adjacent to-be-adjusted partition side of the original image to be adjusted comprises:
first calculating the mean (ave_adj_all) of all pixels in the N rows; then considering the pixels in the N rows whose values are smaller than min{(2^n - 1)*A1, ave_adj_all*A2}, where n is the bit width of the image currently being processed; and finally calculating the mean of those pixels to obtain the average brightness value (ave_adj) of the adjacent partition to be adjusted;
wherein min{(2^n - 1)*A1, ave_adj_all*A2} denotes the smaller of the two values, and A1 and A2 are set parameters;
the value range of N is [1, min{Height_region_adj, Height_region_ref}], where Height_region_adj is the height of the adjacent partition to be adjusted and Height_region_ref is the height of the reference partition.
9. The on-chip partition exposure image fusion method according to claim 7, wherein counting the average brightness value (ave_ref) over the N rows near the joining boundary on the reference partition side of the original image to be adjusted comprises the steps of:
first calculating the mean (ave_ref_all) of all pixels in the N rows; then considering the pixels in the N rows whose values are smaller than min{(2^n - 1)*B1, ave_ref_all*B2}, where n is the bit width of the image currently being processed; and finally calculating the mean of those pixels to obtain the average brightness value (ave_ref) of the reference partition;
wherein min{(2^n - 1)*B1, ave_ref_all*B2} denotes the smaller of the two values, and B1 and B2 are set parameters;
the value range of N is [1, min{Height_region_adj, Height_region_ref}], where Height_region_adj is the height of the adjacent partition to be adjusted and Height_region_ref is the height of the reference partition.
10. The on-chip partition exposure image fusion method according to claim 1, wherein the brightness adjustment comprises the step of:
if the brightness adjustment of the partitions adjacent to the reference partition is complete and other unadjusted partitions remain, taking an adjusted partition of the original image to be adjusted that has an adjacent unadjusted partition as the reference partition, and taking the adjacent unadjusted partition as the partition to be adjusted, for brightness adjustment.
11. The on-chip partition exposure image fusion method according to claim 1, wherein the pixel value mapping comprises the steps of:
calculating a brightness threshold to divide the pixels of the brightness-adjusted image into low-brightness, medium-brightness and high-brightness points; and
mapping the pixel value of each pixel of the brightness-adjusted image into the range [0, 2^n - 1] according to the pixel value of the pixel to be mapped and the ratios and mapped values of the low-, medium- and high-brightness points in a set neighborhood.
12. The on-chip partition exposure image fusion method according to claim 11, wherein calculating the brightness threshold comprises the steps of:
converting the brightness-adjusted image into a grayscale brightness-adjusted image, and calculating the logarithmic average lg_ave of the pixel values of the grayscale brightness-adjusted image by the following formula:
lg_ave = (1/Num) * Σ lg(α + Src_adj_gray(x, y))
where the sum runs over all pixels (x, y), Num is the total number of pixels of the grayscale brightness-adjusted image, Src_adj_gray(x, y) is the pixel value of the grayscale brightness-adjusted image at row x, column y, and α is a setting parameter.
13. The on-chip partition exposure image fusion method according to claim 12, wherein calculating the brightness threshold comprises the step of:
calculating the brightness threshold Key according to the following formula:
Key = (lg_ave - lg(grayMin)) / (lg(grayMax) - lg(grayMin))
where grayMax and grayMin are the maximum and minimum pixel values of the grayscale brightness-adjusted image, respectively.
14. The on-chip partition exposure image fusion method of claim 11, wherein the division into low-brightness, medium-brightness and high-brightness points comprises the step of:
dividing the pixels of the normalized brightness-adjusted image into low-brightness, medium-brightness and high-brightness points according to the brightness threshold by the following formulas:
L_t = L_max - [C1 + (1 - C1)*Key] * (L_max - L_min)
L_h = L_min + [C2 + (1 - C2)*(1 - Key)] * (L_max - L_min)
where Key is the brightness threshold, the value range of C1 is 0.5–1, the value of C2 is smaller than that of C1, and L_max and L_min are the maximum and minimum values of the normalized brightness-adjusted image, respectively; among the pixels of the normalized brightness-adjusted image, those with values less than L_t are low-brightness points, those with values greater than L_h are high-brightness points, and the rest are medium-brightness points.
15. The on-chip partition exposure image fusion method according to claim 11, wherein the pixel value mapping comprises the steps of:
if the brightness-adjusted image is a grayscale image, normalizing the brightness-adjusted image, and calculating the ratios rate_l, rate_m, rate_h of low-, medium- and high-brightness points in a first set neighborhood centered on each pixel of the normalized brightness-adjusted image; and
mapping the pixel values of the low-, medium- and high-brightness points using the following formula:
L_i = lg(1 + s_i * q_i * L_n / L_nmax) / lg(1 + s_i * k_i)
where s_i, q_i, k_i are positive values greater than 1, i ∈ {l, m, h}, with l, m and h corresponding to the low-brightness, medium-brightness and high-brightness points, respectively; the value of s_i increases from the low-brightness point to the high-brightness point, and the values of q_i and k_i decrease from the low-brightness point to the high-brightness point; L_n is the pixel value of the normalized brightness-adjusted image, L_nmax is the maximum pixel value of the normalized brightness-adjusted image, and L_i is the mapped value of the low-brightness, medium-brightness or high-brightness point.
16. The on-chip partition exposure image fusion method according to claim 15, wherein the pixel value mapping comprises the step of:
if the brightness-adjusted image is a color image, normalizing the red, green and blue color channels of the brightness-adjusted image separately to obtain the normalized brightness-adjusted image, and then performing the mapping.
17. The on-chip partition exposure image fusion method according to claim 15, wherein the pixel value mapping comprises the step of:
for each pixel of the normalized brightness-adjusted image, using the three different groups of s_i, q_i, k_i values corresponding to the low-, medium- and high-brightness points to calculate the mapped values L_l, L_m, L_h of the low-, medium- and high-brightness points, and then calculating the mapped value L of the pixel of the brightness-adjusted image according to the following formula:
L = (L_l*rate_l + L_m*rate_m + L_h*rate_h) * 255
where L_l, L_m, L_h are the mapped values of the low-, medium- and high-brightness points in the first set neighborhood of the pixel to be mapped, and rate_l, rate_m, rate_h are the ratios of low-, medium- and high-brightness points in the first set neighborhood of the pixel to be mapped.
18. The on-chip partition exposure image fusion method of claim 1, further comprising maintaining the color saturation of the original image to be adjusted, comprising the step of:
in the original image to be adjusted, assigning the values of highlight pixels whose brightness exceeds the set brightness threshold to the corresponding pixels in the pixel value mapping image, and weighting the edge pixels of the assigned highlight pixels in the pixel value mapping image with the values of the pixels in a set neighborhood to complete the transition processing, obtaining the color-saturation-maintained image.
19. The on-chip partition exposure image fusion method according to claim 18, wherein the color saturation maintenance comprises the steps of:
creating a template image, initializing its pixel values to 0, and traversing the grayscale original image to be adjusted; if a pixel value is greater than (2^n - 1)*D1, where D1 is a set parameter, setting the pixel at the corresponding position in the template image to 255, and setting the pixel at the corresponding position in the pixel value mapping image to the pixel value at the same position in the original image to be adjusted;
performing morphological dilation on the template image, and applying mean filtering to the template image over a second set neighborhood; and
traversing the template image and, wherever its pixel value is neither 0 nor 255, adjusting the pixel value at the corresponding position in the pixel value mapping image according to the following formula:
Src_mapping(x, y) = [Mask(x, y) * Src(x, y) + (255 - Mask(x, y)) * Src_mapping(x, y)] / 255
where (x, y) denotes the positions of pixels in the template image whose value is neither 0 nor 255, Src_mapping(x, y) is the pixel value of the corresponding pixel in the pixel value mapping image, Src(x, y) is the pixel value of the corresponding pixel in the original image to be adjusted, and Mask(x, y) is the pixel value of the corresponding pixel in the template image.
20. The on-chip partition exposure image fusion method according to claim 18, wherein the joining-region transition processing comprises the steps of:
setting a first boundary pixel (x1, y1) and a second boundary pixel (x2, y2) as the boundary pixels on the two sides of the joining boundary between the adjacent first and second exposure partitions, respectively;
on the side of the first boundary pixel (x1, y1), taking M pixels along the direction radial to the joining boundary, {(x_t1_i, y_t1_i), i = 1, 2, …, M}, and processing the pixel values of the pixels at the corresponding positions in the color-saturation-maintained image according to the following formula:
Src_ret(x_t1_i, y_t1_i) = (Dis1/M) * Src_ret(x_t1_i, y_t1_i) + ((M - Dis1)/M) * Src_ret(x2, y2)
on the side of the second boundary pixel (x2, y2), taking M pixels along the direction radial to the joining boundary, {(x_t2_i, y_t2_i), i = 1, 2, …, M}, and processing the pixel values of the pixels at the corresponding positions in the color-saturation-maintained image according to the following formula:
Src_ret(x_t2_i, y_t2_i) = (Dis2/M) * Src_ret(x_t2_i, y_t2_i) + ((M - Dis2)/M) * Src_ret(x1, y1)
where Dis1 is the pixel distance from the currently processed pixel (x_t1_i, y_t1_i) to the second boundary pixel (x2, y2), and Src_ret(x2, y2) is the pixel value of the second boundary pixel (x2, y2) in the color-saturation-maintained image; Dis2 is the pixel distance from the currently processed pixel (x_t2_i, y_t2_i) to the first boundary pixel (x1, y1), and Src_ret(x1, y1) is the pixel value of the first boundary pixel (x1, y1) in the color-saturation-maintained image.
21. An imaging apparatus comprising a processor and a memory, the memory storing at least one instruction, the processor configured to read the at least one instruction and perform the method of any of claims 1 to 20.
22. A computer storage medium having stored therein at least one instruction which is loaded and executed by a processor to implement the method of any one of claims 1-20.
CN202110247361.6A (filed 2021-03-05, priority 2021-03-05): On-chip partition exposure image fusion method, imaging device and computer storage medium. Publication CN115018743A, status pending.

Priority Applications (1)

CN202110247361.6A (priority and filing date 2021-03-05): On-chip partition exposure image fusion method, imaging device and computer storage medium

Applications Claiming Priority (1)

CN202110247361.6A (priority and filing date 2021-03-05): On-chip partition exposure image fusion method, imaging device and computer storage medium

Publications (1)

CN115018743A, published 2022-09-06

Family

ID=83064890

Family Applications (1)

CN202110247361.6A (pending): On-chip partition exposure image fusion method, imaging device and computer storage medium

Country Status (1)

CN: CN115018743A (en)

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination