CN114257741B - A fast-response approach to in-vehicle HDR - Google Patents

A fast-response approach to in-vehicle HDR

Info

Publication number
CN114257741B
Authority
CN
China
Prior art keywords
pixel
frame
value
target
values
Prior art date
Legal status
Active
Application number
CN202111534053.8A
Other languages
Chinese (zh)
Other versions
CN114257741A (en)
Inventor
杜乐谦
叶志
Current Assignee
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date
Filing date
Publication date
Application filed by Zhejiang University (ZJU)
Priority to CN202111534053.8A
Publication of CN114257741A
Application granted
Publication of CN114257741B

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 25/00 Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N 25/40 Extracting pixel data from image sensors by controlling scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a fast-response vehicle-mounted HDR method comprising the following steps: selecting the first frame of the image sequence to be processed as the sampling frame, selecting a region of interest on the sampling frame, and calculating a target value Target from the pixel values of specific pixel rows in the region of interest; comparing the pixel values of all pixels in the sampling frame with Target to obtain a weight matrix, in which each element represents the weight value of the corresponding pixel; denoting the frame following the sampling frame as the operation frame, and adjusting the pixel value of each pixel in the operation frame according to the weight matrix to obtain the adjusted operation-frame image; and taking the operation frame before adjustment as the new sampling frame and the next frame as the new operation frame, repeating these steps until the whole image sequence has been processed. The method enhances the image-processing effect while reducing chip area and power consumption, and the generated HDR images show better detail.

Description

A fast-response vehicle-mounted HDR method

Technical Field

The invention belongs to the field of vehicle-mounted high-dynamic-range (HDR) algorithms and fast auto-exposure (AE) methods, and in particular relates to a fast-response vehicle-mounted HDR method.

Background Art

The determination of the exposure time and the gain coefficient is crucial to the generation of HDR images.

A conventional AE algorithm exposes the whole image line by line, sums the values of all pixels, and averages them to obtain a mean brightness. This mean brightness is then used to calculate the exposure time T and the gain coefficient Ratio. Some AE algorithms use a region-partitioned exposure scheme based on weighted gray-level entropy differences, but such algorithms are computationally heavy, waste a great deal of area and power, and are not suitable for implementation on the chip hardware side. To estimate the brightness of the whole image quickly, the AE algorithm must be adjusted so that calculation time is reduced as much as possible while the image quality is preserved. Vehicle-mounted sensors differ from phone or camera sensors: a phone or camera sensor can obtain a region of interest by shifting the focus position and run its AE calculation there, whereas a vehicle-mounted sensor usually has a fixed focal length and concentrates on a small block in the middle of the image, because most of the visual information is located there. The core of a fast AE algorithm is therefore to focus on the central region of the image; when the central region does not meet the requirements, pixel information from the upper and lower parts of the image is added as a supplementary calculation to make the AE result more accurate. Existing image-processing chips use an AE module to meter the light and evaluate the brightness of the current environment, using either a frame-average method or a global partitioned-exposure method; the exposure time and gain coefficient of the current frame are determined from the overall brightness of the previous frame.

First the image is exposed, then sampled and quantized, and finally the pixel values of the whole image are summed and averaged to obtain AP.

AP = (Σi Σj Pi,j) / (H × W), i.e. the average of all pixel values in a frame with H pixel rows and W pixel columns.

The exposure time and the gain coefficient are then calculated from AP. This approach is computationally heavy, its overall response time is long, and in some scenes it cannot supply the exposure time and gain coefficient that particular regions need, which prevents fast and accurate response for vehicle-mounted imaging.
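
As a point of reference, the frame-average statistic described above can be written as a short sketch. This is not code from the patent; it only illustrates the conventional baseline, assuming an 8-bit grayscale frame held in a NumPy array.

```python
# Minimal sketch of the conventional frame-average AE statistic (baseline, not the invention).
# The mapping from AP to exposure time T and gain coefficient Ratio is sensor-specific
# and is not specified in the text, so only the averaging step is shown.
import numpy as np

def frame_average(frame: np.ndarray) -> float:
    """AP: mean of all pixel values of the whole frame (8-bit grayscale assumed)."""
    return float(frame.mean())
```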

At the same time, a CMOS image sensor applies the same exposure time and gain to an entire image. Current methods for implementing HDR in chip hardware basically use multiple exposures followed by image fusion, and these computations bring large chip-area and power overheads. Because exposure and gain are identical across one image, individual pixels cannot be treated in a refined way. For example, in a tunnel scene the image before entering and while exiting the tunnel is very complex: for pixels on the same row, the values inside and outside the tunnel differ greatly. Even if a traditional HDR algorithm assigns each row its own exposure time and gain coefficient, it still cannot correct this scene-induced difference in pixel values, so the generated HDR image performs poorly in some details. If such images are used for assisted-driving or autonomous-driving decisions, misjudgments and safety problems can easily follow.

Summary of the Invention

The problem to be solved by the present invention is to propose a fast-response vehicle-mounted HDR method that generates high-quality HDR images of the region of interest. The content includes a fast AE algorithm, the use of the AE result to generate a weight matrix, and a new HDR algorithm.

The technical scheme of the present invention is as follows:

A fast-response vehicle-mounted HDR method, characterized in that the steps include:

(1) Fast AE algorithm: select the first frame of the image sequence to be processed as the sampling frame, select a region of interest on the sampling frame, and calculate a target value Target from the pixel values of specific pixel rows in the region of interest;

(2) Obtain the weight matrix: compare the pixel values of all pixels in the sampling frame with Target to obtain a weight matrix, in which each element represents the weight value of the corresponding pixel;

(3) Pixel-level HDR algorithm: denote the frame following the sampling frame as the operation frame, and adjust the pixel value of each pixel in the operation frame according to the weight matrix to obtain the adjusted operation-frame image;

(4) Take the operation frame before adjustment as the new sampling frame and the frame after the operation frame as the new operation frame, and repeat steps (1)-(3) until the whole image sequence has been processed.

Further, in the fast AE algorithm of step (1), the region of interest is selected as follows:

Denote the total number of pixel rows in the sampling frame as N, with the rows numbered 0, 1, 2, …, N-1;

The rows whose numbers fall in the range
Figure GDA0003764398030000031
form region 1;

The rows whose numbers fall in the range
Figure GDA0003764398030000032
form region 2;

The rows whose numbers fall in the range
Figure GDA0003764398030000033
form region 3;

Regions 1, 2 and 3 together form the region of interest, where n1, n2 and n3 are the numbers of rows in regions 1, 2 and 3 respectively, n1 = n3, each accounting for 8%-12% of the total row count N, and n2 accounting for 25%-35% of N.
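
For illustration, the row ranges can be sketched as below. The exact range formulas appear only as figures in the original; the fractions used here (rows 0.20N-0.30N, 0.35N-0.65N and 0.70N-0.80N) are inferred from Embodiment 1 (N = 960, rows 192-288, 336-624 and 672-768) and are therefore an assumption, not the patented formulas.

```python
# Hypothetical sketch of the region-of-interest rows, with proportions inferred from Embodiment 1.
# n1 = n3 = 10% of N and n2 = 30% of N, which satisfies the 8%-12% / 25%-35% constraints above.
def roi_rows(n_rows: int):
    region1 = range(int(0.20 * n_rows), int(0.30 * n_rows))  # upper band
    region2 = range(int(0.35 * n_rows), int(0.65 * n_rows))  # central band
    region3 = range(int(0.70 * n_rows), int(0.80 * n_rows))  # lower band
    return region1, region2, region3
```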

Further, in the fast AE algorithm of step (1), the specific pixel rows are selected as: all rows in the region of interest, or only the odd-numbered rows, or only the even-numbered rows.

Further, in the fast AE algorithm of step (1), the target value Target is calculated as follows:

First calculate the average of all pixel values in the specific pixel rows of region 1 and region 2, denoted AP1 and AP2 respectively;

If Threshold1 ≤ AP2 ≤ Threshold2, then

Target = AP2;

If AP2 > Threshold2 or AP2 < Threshold1, calculate the average of all pixel values in the specific pixel rows of region 3, denoted AP3, and

Target = (AP1 + AP2 + AP3) / 3;

where Threshold1 is the lower threshold, Threshold1 = 128 - t, Threshold2 is the upper threshold, Threshold2 = 128 + t, and 4 ≤ t ≤ 32.
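
A minimal sketch of the Target computation follows, reusing the roi_rows helper sketched above; frame is assumed to be an 8-bit grayscale NumPy array, and the function name is illustrative rather than taken from the patent.

```python
import numpy as np

def compute_target(frame: np.ndarray, t: int = 8, odd_rows_only: bool = True) -> float:
    """Fast AE: derive Target from the region-of-interest rows of the sampling frame."""
    region1, region2, region3 = roi_rows(frame.shape[0])  # see the region sketch above

    def region_mean(rows) -> float:
        rows = [r for r in rows if r % 2 == 1] if odd_rows_only else list(rows)
        return float(frame[rows, :].mean())

    ap1 = region_mean(region1)
    ap2 = region_mean(region2)
    lower, upper = 128 - t, 128 + t              # Threshold1, Threshold2
    if lower <= ap2 <= upper:
        return ap2                               # region 2 alone is representative
    ap3 = region_mean(region3)                   # otherwise add region 3 as a supplement
    return (ap1 + ap2 + ap3) / 3.0
```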

Further, in step (2), the weight matrix can be obtained in several ways, as follows:

In the first method, a number with a bit width of 1 bit represents the weight value of each pixel:

If Pi,j ≥ Target, then Ai,j = 1; if Pi,j < Target, then Ai,j = 0;

where Pi,j is the pixel value of the pixel in row i, column j, and Ai,j is the weight value of that pixel, as in the following table:

Bit width = 1 bit    Pi,j ≥ Target    Pi,j < Target
Ai,j                 1                0

The weight values of all pixels in the sampling frame form the weight matrix.
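
A one-line sketch of this 1-bit comparison, assuming the sampling frame is a NumPy array and Target comes from the fast AE step:

```python
import numpy as np

def weight_matrix_1bit(sample_frame: np.ndarray, target: float) -> np.ndarray:
    """A[i, j] = 1 where P[i, j] >= Target, else 0 (one bit per pixel)."""
    return (sample_frame >= target).astype(np.uint8)
```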

In the second method, a number with a bit width of 2 bits represents the weight value of each pixel:

If Pi,j ≥ Target + V, then Ai,j = 3;

If Target ≤ Pi,j < Target + V, then Ai,j = 2;

If Target - V ≤ Pi,j < Target, then Ai,j = 1;

If Pi,j < Target - V, then Ai,j = 0;

where Pi,j is the pixel value of the pixel in row i, column j, Ai,j is the weight value of that pixel, and V is a refinement coefficient, usually V ≤ 32, i.e. as in the following table:

Bit width = 2 bit    Pi,j ≥ Target+V    Target ≤ Pi,j < Target+V    Target-V ≤ Pi,j < Target    Pi,j < Target-V
Ai,j                 3                  2                           1                           0

The weight values of all pixels in the sampling frame form the weight matrix.

If the chip area allows, weight values with a higher bit width (e.g. 3 bit, 4 bit, and so on) can also be used; the larger the bit width, the finer the division of the image's pixel values and the better the effect. When the bit width of the weight matrix grows and more detail can be described, the refinement coefficient V can be increased correspondingly.
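
A sketch of the 2-bit variant under the same assumptions; V defaults to 16 here purely as an example value within the V ≤ 32 range stated above.

```python
import numpy as np

def weight_matrix_2bit(sample_frame: np.ndarray, target: float, v: int = 16) -> np.ndarray:
    """2-bit weights: 3/2 above Target (far/near), 1/0 below Target (near/far), split by V."""
    p = sample_frame.astype(np.int32)
    w = np.zeros_like(p, dtype=np.uint8)          # P < Target - V stays 0
    w[(p >= target - v) & (p < target)] = 1
    w[(p >= target) & (p < target + v)] = 2
    w[p >= target + v] = 3
    return w
```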

Both methods above require considerable chip resources; in practice, to limit the chip-area overhead, the amount of information stored for the weight matrix is often reduced, which motivates the third method below.

In the third method, several pixels share one weight:

Because most pixel values in an image do not change abruptly, the brightness of adjacent pixels is fairly close. Therefore n × n adjacent pixels are grouped into one block, and the average pixel value of all pixels in the block is treated as the pixel value of every pixel in it, where n can be any common divisor of the numbers of rows and columns of the sampling frame and is chosen according to the actual requirements. The smaller n is, the more information is stored and the more accurate the result, but the chip area grows several-fold; conversely, storing less information shrinks the chip area by the same factor. All pixels in the sampling frame are partitioned in this way;

If Pi,j ≥ Target, then Ai,j = 1; if Pi,j < Target, then Ai,j = 0;

where Pi,j is the pixel value of the pixel in row i, column j, Ai,j is the weight value of that pixel, and the weight values of all pixels in the sampling frame form the weight matrix. Letting several pixels share one weight makes the picture quality slightly worse, but it saves a large amount of storage area.
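
The block-shared variant can be sketched as follows, assuming n divides both frame dimensions; the reshape/averaging trick is an implementation choice, not something prescribed by the patent.

```python
import numpy as np

def weight_matrix_blocked(sample_frame: np.ndarray, target: float, n: int = 4) -> np.ndarray:
    """n x n blocks of pixels share one 1-bit weight derived from the block's mean value."""
    h, w = sample_frame.shape
    assert h % n == 0 and w % n == 0, "n must be a common divisor of the row and column counts"
    # Mean of each n x n block, compared against Target.
    block_means = sample_frame.reshape(h // n, n, w // n, n).mean(axis=(1, 3))
    block_weights = (block_means >= target).astype(np.uint8)
    # Expand so that every pixel of a block carries the block's weight.
    return np.kron(block_weights, np.ones((n, n), dtype=np.uint8))
```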

Further, in step (3), the pixel values in the operation frame are adjusted as follows:

If the weight value corresponding to a pixel in the weight matrix is 1, subtract the compensation value VP1 from its pixel value; if the weight value is 0, add VP1 to its pixel value. VP1 must be chosen so that every adjusted pixel value lies between 8 and 250. All pixels in the operation frame are processed in this way to obtain the adjusted operation-frame image.
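
A sketch of this adjustment, assuming 8-bit data; instead of pre-selecting VP1 to keep results in range, this illustration simply clips the adjusted values to 8-250, which is a simplification of the rule above.

```python
import numpy as np

def adjust_frame_1bit(op_frame: np.ndarray, weights: np.ndarray, vp1: int = 16) -> np.ndarray:
    """Darken pixels weighted 1 by VP1, brighten pixels weighted 0 by VP1, keep values in 8-250."""
    adjusted = op_frame.astype(np.int32) + np.where(weights == 1, -vp1, vp1)
    return np.clip(adjusted, 8, 250).astype(op_frame.dtype)
```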

Further, in step (3), the pixel values in the operation frame are adjusted as follows:

If the weight value corresponding to a pixel in the weight matrix is 3, subtract the compensation value VP2 from its pixel value; if the weight value is 2, subtract VP3; if the weight value is 1, add VP3; if the weight value is 0, add VP2. Here VP2 > VP3, and VP2 and VP3 must be chosen so that every adjusted pixel value lies between 8 and 250. All pixels in the operation frame are processed in this way to obtain the adjusted operation-frame image.
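
The 2-bit counterpart under the same assumptions; VP2 = 24 and VP3 = 8 are example values chosen only to satisfy VP2 > VP3, and out-of-range results are again clipped to 8-250 as a simplification.

```python
import numpy as np

def adjust_frame_2bit(op_frame: np.ndarray, weights: np.ndarray,
                      vp2: int = 24, vp3: int = 8) -> np.ndarray:
    """Weights 3/2 are darkened by VP2/VP3; weights 1/0 are brightened by VP3/VP2 (VP2 > VP3)."""
    offsets = np.select(
        [weights == 3, weights == 2, weights == 1],
        [-vp2, -vp3, vp3],
        default=vp2,                      # weight 0
    )
    adjusted = op_frame.astype(np.int32) + offsets
    return np.clip(adjusted, 8, 250).astype(op_frame.dtype)
```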

Compared with the prior art, the beneficial effects of the present invention are:

1. Regarding the power consumption per frame, once AP2 of region 2 satisfies the corresponding condition, AP3 of region 3 does not need to be calculated at all. In addition, when calculating Target, either all rows of the region of interest or only half of them (odd or even) can be used, which has little influence on the result in most image scenes. These measures greatly reduce the chip's power consumption and area as well as the calculation time.

2. When obtaining the weight matrix, precision, and with it the image-processing effect, can be increased by enlarging the bit width of the weight values, or the chip area can be reduced and the calculation accelerated by letting several pixels share one weight value, so the scheme is easy to adapt to the actual requirements and application scenario of the chip.

3. The HDR algorithm of the present invention operates on individual pixels, so pixel-value differences within the same row can be distinguished and handled well, and the generated HDR images show better detail.

Brief Description of the Drawings

Figure 1 is a schematic diagram of the region of interest selected in Embodiment 1.

Figure 2 is the pixel-value matrix of the sampling frame in Embodiment 2.

Figure 3 is the weight matrix of the sampling frame in Embodiment 2.

Figure 4 is the pixel-value matrix of the operation frame in Embodiment 2 before correction.

Figure 5 is the pixel-value matrix of the operation frame in Embodiment 2 after correction.

Figure 6 is an example in which several adjacent pixels share the same weight.

Detailed Description of the Embodiments

The present invention is further described below with reference to the accompanying drawings:

Embodiment 1

This embodiment uses a 1280 × 960 image to demonstrate the operation of the fast AE algorithm, as shown in Figure 1.

For a 1280 × 960 image, first calculate AP1, the average pixel value of the odd-numbered rows between rows 192 and 288 in region 1, and then AP2, the average pixel value of the odd-numbered rows between rows 336 and 624 in region 2. Then check whether AP2 satisfies 120 ≤ AP2 ≤ 136. If it does, Target = AP2; if not, continue by calculating AP3, the average pixel value of the odd-numbered rows between rows 672 and 768 in region 3, and take Target = (AP1 + AP2 + AP3) / 3. This greatly reduces the computation of the AE algorithm, speeds up the AE evaluation and lowers the chip-area overhead.
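
As a toy usage example, the compute_target sketch given earlier reproduces this flow for a 1280 × 960 frame; the random data below merely stands in for a real sensor frame, and t = 8 gives the 120 ≤ AP2 ≤ 136 window used in this embodiment.

```python
import numpy as np

frame = np.random.randint(0, 256, size=(960, 1280), dtype=np.uint8)  # stand-in for a real frame
target = compute_target(frame, t=8, odd_rows_only=True)              # odd ROI rows only
print("Target =", target)
```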

Embodiment 2

This embodiment takes a 7 × 7 image as an example to demonstrate the generation of the weight matrix and the operation of the pixel-level HDR algorithm.

Generation of the weight matrix:

The weight values grade the pixel values of an entire image; they are produced from the previous frame (the sampling frame) and act on the next frame (the operation frame). This works because typical video refresh rates are no lower than 60 FPS, i.e. more than 60 refreshes per second, so the change in brightness between two adjacent frames is almost negligible.

In the 7 × 7 image, each pixel is given a weight value that grades its pixel value. When the weight is a 1-bit value, the fast AE module supplies a target value Target that characterizes the current brightness, and every pixel value of the sampling frame is compared with Target: if it is greater than Target, the weight of that pixel is set to 1, otherwise to 0. This yields a 7 × 7 weight matrix in which 1 marks the brighter pixels of the image and 0 the darker ones.

Figure 2 shows the pixel-value matrix of the sampling frame of the 7 × 7 image. The pixel values in its lower part are generally low and those in its upper part generally high, which can be regarded as a simulation of a tunnel scene. The middle three rows are taken as the region of interest; their average pixel value is 92, which is taken as the Target value. Pixels whose value is below 92 receive weight 0, the others weight 1. The weight matrix generated by this comparison is shown in Figure 3; it carries only 1 bit of information per pixel and indicates whether the pixel at each position is on the dark or the bright side of the whole image.

Pixel-level HDR algorithm:

With the weight matrix obtained above, the operation-frame image shown in Figure 4 can be adjusted using this matrix to obtain a better image. The following operation is applied to the operation frame: the pixel values of pixels marked 1 in the weight matrix are reduced by VP = 16 to lower their brightness, and the pixel values of pixels marked 0 are increased by VP = 16 to raise their brightness.

As shown in Figure 5, in the adjusted pixel-value matrix the very dark pixel values inside the tunnel are raised considerably, and the adjustment is refined down to every pixel of every row. This reduces the chance of misjudgment when other recognition algorithms are run afterwards.
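
The whole Embodiment 2 flow can be strung together from the sketches above; the actual 7 × 7 matrices of Figures 2-5 are not reproduced here, so synthetic data is used as a stand-in.

```python
import numpy as np

sample = np.random.randint(20, 200, size=(7, 7), dtype=np.uint8)  # previous frame (sampling frame)
operate = sample.copy()                                           # next frame, nearly identical

target = float(sample[2:5, :].mean())               # middle three rows as the region of interest
weights = weight_matrix_1bit(sample, target)        # 1-bit weight matrix (analogue of Figure 3)
hdr = adjust_frame_1bit(operate, weights, vp1=16)   # VP = 16, as in this embodiment
```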

Figure 6 shows an example in which 16 adjacent pixels share the same weight.

The specific embodiments above are intended to explain the present invention rather than to limit it; any modification or change made to the present invention within its spirit and within the protection scope of the claims falls within the protection scope of the present invention.

Claims (6)

1. A fast-response vehicle-mounted HDR method is characterized by comprising the following steps:
(1) Fast AE algorithm: selecting a first frame of an image to be processed as a sampling frame, selecting an interested area on the sampling frame, and calculating a Target value Target according to pixel values of specific pixel rows in the interested area;
(2) Acquiring a weight matrix: comparing the pixel values of all pixels in the sampling frame with Target to obtain a weight matrix, wherein elements in the weight matrix represent weight values of corresponding pixels;
(3) Pixel level HDR algorithm: recording the next frame of the sampling frame as an operation frame, and adjusting the pixel value of each pixel in the operation frame according to the weight matrix to obtain an adjusted operation frame image;
(4) Taking the operation frame before adjustment as a new sampling frame, taking the next frame of the operation frame as a new operation frame, and repeating the steps (1) - (3) until the image to be processed is processed;
in the step (1), the method for selecting the region of interest comprises the following steps:
recording the total number of pixel rows of a sampling frame as N, wherein the rows are numbered 0, 1, 2, …, N-1;
selecting the rows whose numbers fall in the range
Figure FDA0003764398020000011
as region 1;
selecting the rows whose numbers fall in the range
Figure FDA0003764398020000012
as region 2;
selecting the rows whose numbers fall in the range
Figure FDA0003764398020000013
as region 3;
regions 1, 2 and 3 together constitute the region of interest; wherein n1, n2 and n3 are the numbers of rows of regions 1, 2 and 3 respectively, n1 = n3, each accounting for 8%-12% of the total number of rows N, and n2 accounting for 25%-35% of the total number of rows N;
the specific pixel rows are selected as: all rows in the region of interest, or the odd-numbered rows, or the even-numbered rows;
the calculation method of the Target value Target comprises the following steps:
firstly, calculating the average values of all pixel values in specific pixel rows of the area 1 and the area 2, and respectively recording the average values as AP1 and AP2;
if Threshold1 ≤ AP2 ≤ Threshold2, then
Target = AP2;
if AP2 > Threshold2 or AP2 < Threshold1, the average of all pixel values in the specific pixel rows of region 3 is calculated, denoted AP3, and
Target = (AP1 + AP2 + AP3) / 3;
wherein Threshold1 is the lower threshold, Threshold1 = 128 - t; Threshold2 is the upper threshold, Threshold2 = 128 + t, where 4 ≤ t ≤ 32.
2. The fast-response vehicle-mounted HDR method as claimed in claim 1, wherein in the step (2), the weight matrix is obtained by:
if Pi,j ≥ Target, then Ai,j = 1; if Pi,j < Target, then Ai,j = 0;
wherein Pi,j is the pixel value of the pixel in row i, column j, Ai,j is the weight value of that pixel, and the weight values of all pixels in the sampling frame form the weight matrix.
3. The fast-response vehicle-mounted HDR method as claimed in claim 1, wherein in the step (2), the weight matrix is obtained by:
if Pi,j ≥ Target + V, then Ai,j = 3;
if Target ≤ Pi,j < Target + V, then Ai,j = 2;
if Target - V ≤ Pi,j < Target, then Ai,j = 1;
if Pi,j < Target - V, then Ai,j = 0;
wherein Pi,j is the pixel value of the pixel in row i, column j, Ai,j is the weight value of that pixel, V is a refinement coefficient with V ≤ 32, and the weight values of all pixels in the sampling frame form the weight matrix.
4. The fast-response vehicle-mounted HDR method as claimed in claim 1, wherein in the step (2), the weight matrix is obtained by:
dividing n × n adjacent pixels into one block and taking the average pixel value of all pixels in the block as the pixel value of every pixel in the block, wherein n is a common divisor of the numbers of rows and columns of the sampling frame; all pixels in the sampling frame are partitioned accordingly;
if Pi,j ≥ Target, then Ai,j = 1; if Pi,j < Target, then Ai,j = 0;
wherein Pi,j is the pixel value of the pixel in row i, column j, Ai,j is the weight value of that pixel, and the weight values of all pixels in the sampling frame form the weight matrix.
5. The fast-response vehicle-mounted HDR method as claimed in claim 2 or 4, wherein in step (3), the method for adjusting the pixel values in the operation frame is:
if the weight value of a pixel in the weight matrix is 1, the compensation value VP1 is subtracted from its pixel value; if the weight value of a pixel is 0, the compensation value VP1 is added to its pixel value; the selected VP1 ensures that all adjusted pixel values lie between 8 and 250; and all pixels in the operation frame are processed to obtain the adjusted operation-frame image.
6. The fast-response vehicle-mounted HDR method as claimed in claim 3, wherein in step (3), the method for adjusting the pixel values in the operation frame is:
if the weight value of a pixel in the weight matrix is 3, the compensation value VP2 is subtracted from its pixel value; if the weight value is 2, the compensation value VP3 is subtracted; if the weight value is 1, the compensation value VP3 is added; if the weight value is 0, the compensation value VP2 is added; wherein VP2 > VP3, and the selected VP2 and VP3 ensure that all adjusted pixel values lie between 8 and 250; and all pixels in the operation frame are processed to obtain the adjusted operation-frame image.
CN202111534053.8A 2021-12-15 2021-12-15 A fast-response approach to in-vehicle HDR Active CN114257741B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111534053.8A CN114257741B (en) 2021-12-15 2021-12-15 A fast-response approach to in-vehicle HDR

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111534053.8A CN114257741B (en) 2021-12-15 2021-12-15 A fast-response approach to in-vehicle HDR

Publications (2)

Publication Number Publication Date
CN114257741A CN114257741A (en) 2022-03-29
CN114257741B true CN114257741B (en) 2022-12-06

Family

ID=80792393

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111534053.8A Active CN114257741B (en) 2021-12-15 2021-12-15 A fast-response approach to in-vehicle HDR

Country Status (1)

Country Link
CN (1) CN114257741B (en)

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI601122B (en) * 2016-11-15 2017-10-01 晨星半導體股份有限公司 Image compensation method applied to display and associated control circuit
CN108898566B (en) * 2018-07-20 2022-05-17 南京邮电大学 A Low-Illumination Color Video Enhancement Method Using Spatio-temporal Illuminance Map
CN109345485B (en) * 2018-10-22 2021-04-16 北京达佳互联信息技术有限公司 Image enhancement method and device, electronic equipment and storage medium
CN112419181B (en) * 2020-11-19 2023-12-08 中国科学院西安光学精密机械研究所 Method for enhancing detail of wide dynamic infrared image

Also Published As

Publication number Publication date
CN114257741A (en) 2022-03-29

Similar Documents

Publication Publication Date Title
CN106339196B (en) Data compression, decompression method and the Mura compensation method of DeMura table
CN109686342B (en) Image processing method and device
CN102413283B (en) Infrared chart digital signal processing system and method
KR102022812B1 (en) Image Contrast Reinforcement
CN101527038B (en) Improved method for enhancing picture contrast based on histogram
US8982251B2 (en) Image processing apparatus, image processing method, photographic imaging apparatus, and recording device recording image processing program
CN101783963A (en) Nighttime image enhancing method with highlight inhibition
JP2008092462A (en) Outline correction method, image processing apparatus, and display apparatus
EP4095793B1 (en) Method and apparatus for generating low bit width hdr image, storage medium, and terminal
CN103295182B (en) Realize Circuits System and the method thereof of infrared image being carried out to contrast stretching process
CN106531088A (en) Control method for optimizing dynamic backlight of local area of liquid crystal display equipment
CN103237168A (en) Method for processing high-dynamic-range image videos on basis of comprehensive gains
US8675963B2 (en) Method and apparatus for automatic brightness adjustment of image signal processor
CN114257741B (en) A fast-response approach to in-vehicle HDR
JP2014010776A (en) Image processing apparatus, image processing method, and program
CN106686320A (en) A Tone Mapping Method Based on Number Density Equalization
US10909669B2 (en) Contrast adjustment system and contrast adjustment method
CN111064897B (en) Statistical method and imaging device for exposure evaluation value
CN116863877A (en) Backlight brightness calculating method, display device and computer readable storage medium
CN113240590B (en) Image processing method and device
US8300970B2 (en) Method for video enhancement and computer device using the method
US9013626B2 (en) Signal processing circuit of solid-state imaging element, signal processing method of solid-state imaging element, and electronic apparatus
CN103841384A (en) Image-quality optimization method and device
Chang et al. Perceptual contrast enhancement of dark images based on textural coefficients
CN115767281B (en) Automatic exposure control method for realizing image entropy value based on FPGA

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant