CN113781317A - Brightness equalization method of panoramic all-around system - Google Patents
Brightness equalization method of panoramic all-around system
- Publication number: CN113781317A (application number CN202110880913.7A)
- Authority: CN (China)
- Prior art keywords: optimal, camera, cameras, value, view
- Prior art date: 2021-08-02
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T5/92: Dynamic range modification of images or parts thereof based on global image properties (G06T: image data processing or generation, in general; G06T5/00: image enhancement or restoration; G06T5/90: dynamic range modification of images or parts thereof)
- G06T2207/30252: Vehicle exterior; vicinity of vehicle (G06T2207/00: indexing scheme for image analysis or image enhancement; G06T2207/30: subject of image, context of image processing; G06T2207/30248: vehicle exterior or interior)
Abstract
Description
Technical Field
The present invention relates to the field of digital image processing, and in particular to a brightness equalization method for a panoramic surround view system.
Background Art
In recent years, with the growing number of cars, parking spaces have become scarce and parking bays narrow. When reversing into a parking space or parallel parking, the blind zones of the left and right rearview mirrors create obstacles for the driver. A panoramic surround view system displays a surround-view image of the environment around the vehicle, provides the driver with road information around the vehicle body, assists the driver's driving judgment, and improves safety when parking and in complex environments.
Because the lighting conditions of the different cameras in a panoramic surround view system differ, and each camera's auto exposure (AE) and auto white balance (AWB) parameters also differ, the synthesized surround view exhibits obvious seams between adjacent views. This degrades the visual quality and can impair the driver's judgment of the road conditions.
Summary of the Invention
The purpose of the present invention is to overcome the above defects in the prior art and to provide a brightness equalization method for a panoramic surround view system that reduces the computational complexity of the brightness equalization algorithm and facilitates real-time implementation on embedded systems.
The purpose of the present invention can be achieved by the following technical solution:
A brightness equalization method for a panoramic surround view system, comprising the following steps:
S1. Image format conversion: check the format of the surround-view bird's-eye views from the P cameras of the panoramic surround view system. If the input bird's-eye views are not in YUV format, convert the P bird's-eye views to YUV by color-space conversion, obtaining for each camera m (m = 0, 1, ..., P-1) an input view Im with components Ym, Um, Vm. If the input bird's-eye views are already in YUV format, the input views Im and their Y, U, V components are used directly.
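As an illustration of step S1, the following minimal sketch converts BGR bird's-eye views to YUV and splits out the per-camera planes; it assumes OpenCV-style 8-bit BGR input and is not code from the patent.

```python
import cv2
import numpy as np

def to_yuv_views(bev_images):
    """Convert P bird's-eye views to YUV and return the (Y, U, V) planes per camera."""
    views = []
    for img in bev_images:
        # Assumes 8-bit BGR input; if the source is already YUV, this conversion is skipped.
        yuv = cv2.cvtColor(img, cv2.COLOR_BGR2YUV)
        y, u, v = cv2.split(yuv)
        views.append((y.astype(np.float32), u, v))
    return views
```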
S2. Block-average downsampling: only the luma (Y) component is considered. The image of the region where the input view of camera m overlaps the input view of the adjacent camera n is denoted Ωmn, where n is the next camera index in the clockwise direction, n ≡ (m+1) mod P ("mod" denotes the modulo operation and "≡" denotes identity under that modulo). Likewise, the image of the region where the input view of camera n overlaps the input view of camera m is denoted Ωnm. For each pair of adjacent cameras, the overlap images Ωmn and Ωnm are partitioned into sub-blocks of N×N pixels and the mean of each sub-block is taken as the output for that sub-block, yielding downsampled images Φmn and Φnm composed of the sub-block means.
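A minimal sketch of the block-average downsampling of step S2, assuming the overlap image is cropped to a multiple of N (the patent does not say how ragged borders are handled; N = 4 is the value used in the embodiment):

```python
import numpy as np

def block_average_downsample(overlap_y, n=4):
    """Average each NxN sub-block of a luma overlap image into one output pixel (yields Phi_mn)."""
    h, w = overlap_y.shape
    h, w = h - h % n, w - w % n                      # crop to a multiple of N (assumed border handling)
    blocks = overlap_y[:h, :w].astype(np.float32).reshape(h // n, n, w // n, n)
    return blocks.mean(axis=(1, 3))                  # downsampled image of sub-block means
```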
S3. Sample selection: select the inlier points with small differences and obtain a sample index set. Based on the differences between the corresponding pixel gray values of the downsampled images Φmn and Φnm of each pair of adjacent cameras, the coordinates of the inliers with small differences form the sample index set Smn:

Smn = {[i, j] | (Φmn[i, j] - Φnm[i, j])² < Th, [i, j] ∈ [1, Nmn] × [1, Mmn]}

where i and j are the horizontal and vertical image coordinates, [i, j] denotes an image coordinate, Φmn[i, j] and Φnm[i, j] are the pixel gray values of the downsampled images Φmn and Φnm at [i, j], Th is the inlier threshold, and Nmn and Mmn are the numbers of columns and rows of Φmn and Φnm.
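A sketch of the inlier selection of step S3; it returns a boolean mask over the downsampled grid, which plays the role of the index set Smn in the averaging steps that follow (Th = 500 is the embodiment's value):

```python
import numpy as np

def select_inliers(phi_mn, phi_nm, th=500.0):
    """Mark positions whose squared gray-value difference between the two overlap images is below Th."""
    diff = phi_mn.astype(np.float32) - phi_nm.astype(np.float32)
    return (diff * diff) < th                        # boolean mask standing in for S_mn
```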
S4. Compute the optimal additive gains ĝm of the P cameras such that the mean square error between the corresponding inlier pixel gray values of the adjusted downsampled images Φmn and Φnm of the adjacent-camera overlap regions is minimized:

(ĝ0, ..., ĝP-1) = argmin over (g0, ..., gP-1) of Σ_m Σ_{[i,j]∈Smn} ((Φmn[i, j] + gm) - (Φnm[i, j] + gn))², with n ≡ (m+1) mod P

where gm is the additive gain of the m-th camera and ĝm is the optimal additive gain of the m-th camera.
S5. Additive-gain brightness equalization preprocessing: the Y component of each camera's input view is offset by its optimal additive gain:

Ŷm[i, j] = Ym[i, j] + ĝm

where Ŷm is the output view obtained by preprocessing the Y component of the m-th input view with the optimal additive gain.
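A sketch of the additive-gain preprocessing of step S5; clamping to [0, 255] is an assumption, since the text does not state how out-of-range values are handled:

```python
import numpy as np

def apply_additive_gain(y_plane, gain):
    """Offset the luma plane of one camera by its optimal additive gain (step S5)."""
    shifted = y_plane.astype(np.float32) + gain
    return np.clip(shifted, 0.0, 255.0)              # clamping assumed, not stated in the source
```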
S6. Correct the downsampled images: each downsampled image Φmn and Φnm is offset by the corresponding optimal additive gain:

Φ̂mn[i, j] = Φmn[i, j] + ĝm,   Φ̂nm[i, j] = Φnm[i, j] + ĝn

where Φ̂mn and Φ̂nm are the downsampled images Φmn and Φnm after correction by the optimal additive gains.
S7. Update the sample index set: using the method of step S3, re-obtain the sample index set Ŝmn of the inliers of the corrected downsampled images Φ̂mn and Φ̂nm of the adjacent-camera overlap regions:

Ŝmn = {[i, j] | (Φ̂mn[i, j] - Φ̂nm[i, j])² < Th, [i, j] ∈ [1, Nmn] × [1, Mmn]}

where Φ̂mn[i, j] and Φ̂nm[i, j] are the pixel gray values of the corrected downsampled images Φ̂mn and Φ̂nm at coordinates [i, j].
S8. Compute the optimal mapping curve functions: the mapping curve function y = Tm(x) of the m-th camera maps an input pixel value x in [0, 255] to an output value y in [0, 255]. The optimal mapping curve functions Tm*() of the P cameras are obtained by minimizing an objective over the inliers of the corrected downsampled overlap images, where Tm() is the mapping curve function of the m-th camera, Tm*() is the corresponding optimal mapping curve function, and β is a penalty factor that constrains the degree of deviation between the input and output values of the mapping curve function.
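The exact objective of step S8 is not reproduced in this text. A plausible form consistent with the description (a seam term between mapped inlier values plus a β-weighted penalty on each curve's deviation from the identity) is the following; it is an assumption, not the patent's verified formula:

```latex
\{T_m^{\ast}\}_{m=0}^{P-1}=\arg\min_{\{T_m\}}\sum_{m=0}^{P-1}\sum_{[i,j]\in\hat{S}_{mn}}
\Big[\big(T_m(\hat{\Phi}_{mn}[i,j])-T_n(\hat{\Phi}_{nm}[i,j])\big)^{2}
+\beta\big(T_m(\hat{\Phi}_{mn}[i,j])-\hat{\Phi}_{mn}[i,j]\big)^{2}
+\beta\big(T_n(\hat{\Phi}_{nm}[i,j])-\hat{\Phi}_{nm}[i,j]\big)^{2}\Big],
\quad n\equiv(m+1)\bmod P
```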
S9. Brightness equalization post-processing with the optimal mapping curve functions: the optimal-additive-gain preprocessed views Ŷm of the P cameras are post-processed through the optimal mapping curve functions to obtain the Y component of the output surround-view bird's-eye view:

Ym^out[i, j] = Tm*(Ŷm[i, j])

Finally, the Y, U and V components of the output surround-view bird's-eye views of the P cameras after brightness equalization are Ym^out, Um and Vm respectively; only the luma component is modified.
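One convenient way to realize the post-processing of step S9 is to tabulate each camera's optimal mapping curve as a 256-entry lookup table and apply it to the preprocessed luma plane; this is an implementation sketch, not a requirement of the method:

```python
import numpy as np

def apply_mapping_lut(y_pre, t_m):
    """Apply camera m's optimal mapping curve T_m to its preprocessed luma plane via a lookup table."""
    lut = np.clip([t_m(x) for x in range(256)], 0, 255).astype(np.uint8)
    return lut[np.clip(y_pre, 0, 255).astype(np.uint8)]
```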
Further, after the Y component of the output surround-view bird's-eye view has been obtained by the optimal-mapping-curve brightness equalization post-processing of step S9, the brightness equalization method further comprises the following step:
S10. Image format restoration: if the input bird's-eye views were not in YUV format, the brightness-equalized output bird's-eye views are converted back to the original input format by color-space conversion.
Further, the optimal mapping curve functions in step S8 are computed as follows:
S801. The mapping curve function of the m-th camera is represented by a piecewise-linear mapping function defined by a set of d+1 anchor points, the k-th anchor point having coordinates (x_k^m, y_k^m), where k is the anchor index, k = 0, 1, ..., d. The mapping curve function Tm() of the m-th camera can be expressed as linear interpolation between consecutive anchor points:

Tm(x) = y_k^m + (y_{k+1}^m - y_k^m)·(x - x_k^m)/(x_{k+1}^m - x_k^m),   for x_k^m ≤ x ≤ x_{k+1}^m, k = 0, 1, ..., d-1

where x is the input pixel value and Tm(x) is the output mapped pixel value.
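A sketch of evaluating the piecewise-linear mapping function of step S801 from its anchor points; np.interp performs exactly the linear interpolation between consecutive anchors:

```python
import numpy as np

def piecewise_linear_map(x, anchor_x, anchor_y):
    """Evaluate T_m(x) defined by the d+1 anchor points (x_k, y_k), k = 0..d."""
    return np.interp(x, anchor_x, anchor_y)

# Example: with anchors on the diagonal the mapping is the identity.
# piecewise_linear_map(128, [0, 64, 128, 192, 224, 255], [0, 64, 128, 192, 224, 255]) -> 128.0
```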
S802. The 0-th anchor point of the optimal piecewise-linear mapping function of the m-th camera is fixed, as is the d-th anchor point. From the distribution range of the inlier pixel gray values of the corrected downsampled images Φ̂mn and Φ̂ml, the minimum value x_min^m and maximum value x_max^m of the image pixel gray values are determined:

x_min^m = min( min over [i,j] ∈ Ŝmn of Φ̂mn[i, j], min over [i,j] ∈ Ŝlm of Φ̂ml[i, j] )
x_max^m = max( max over [i,j] ∈ Ŝmn of Φ̂mn[i, j], max over [i,j] ∈ Ŝlm of Φ̂ml[i, j] )

where l is the previous camera index in the clockwise direction, l ≡ (m-1) mod P, Φ̂mn and Φ̂ml are the corrected downsampled images of the two overlap regions of the m-th camera taken in the clockwise direction, Ŝmn and Ŝlm are the sample index sets of the inliers of Φ̂mn and Φ̂ml, max() denotes the maximum and min() denotes the minimum.

The remaining anchor points x_k^m (k = 1, ..., d-1) are distributed evenly within the abscissa range [x_min^m, x_max^m], where x_k^m is the abscissa of the k-th anchor point of the m-th camera.
S803. Compute the sample index sets Ŝmn^k and Ŝml^k: the optimization of the ordinate y_k^m of the k-th anchor point of the m-th camera involves only those inliers of the corrected downsampled images Φ̂mn and Φ̂ml whose pixel gray values lie in the interval [x_{k-1}^m, x_{k+1}^m]. The coordinates of the inliers of Φ̂mn and Φ̂ml whose pixel gray values lie in this interval are selected to form the sample index sets Ŝmn^k and Ŝml^k, where Φ̂mn[i, j] and Φ̂ml[i, j] denote the pixel gray values of the corrected downsampled images at [i, j], x_{k-1}^m is the abscissa of the (k-1)-th anchor point of the m-th camera, and x_{k+1}^m is the abscissa of the (k+1)-th anchor point.
S804. Iteratively solve, in order, for the d-1 anchor ordinates y_1^m, ..., y_{d-1}^m of the optimal piecewise-linear mapping function of the m-th camera, obtaining the optimal piecewise-linear mapping function of the m-th camera.

S805. Repeat step S804 until the absolute change of every anchor ordinate of every camera between two successive iterations is smaller than a set threshold Tanchor; then stop iterating and output the piecewise-linear mapping functions determined by the anchor points as the optimal mapping curve functions.
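The closed-form anchor update of step S80401 is not reproduced here, so the sketch below only shows the alternating odd/even update schedule and the convergence test of step S805; update_anchor is a hypothetical stand-in for the patent's update formula, and the max_iter guard is an added safety cap:

```python
def optimize_anchor_ordinates(anchors, update_anchor, t_anchor=1.0e-6, max_iter=100):
    """Alternate odd/even anchor-ordinate updates per camera until the largest change is below T_anchor."""
    for _ in range(max_iter):
        max_change = 0.0
        for m, (_xs, ys) in enumerate(anchors):      # anchors: per-camera (x_k, y_k) lists
            for parity in (1, 0):                    # odd-indexed anchors first, then even-indexed
                for k in range(1, len(ys) - 1):
                    if k % 2 == parity:
                        new_y = update_anchor(m, k)  # hypothetical stand-in for the S80401 formula
                        max_change = max(max_change, abs(new_y - ys[k]))
                        ys[k] = new_y
        if max_change < t_anchor:
            break
    return anchors
```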
Further, the process in step S804 of iteratively solving, in order, for the d-1 anchor ordinates of the optimal piecewise-linear mapping function of the m-th camera is as follows:
S80401. Fix the ordinates of the other anchor points and update the ordinates of the odd-indexed anchor points of the optimal piecewise-linear mapping function of the m-th camera.

The ordinate y_k^m of the k-th anchor point is updated by a closed-form expression in terms of intermediate variables A, B and C, where Φ̂mn and Φ̂nm are the corrected downsampled images of the overlap region of cameras m and n, Φ̂lm and Φ̂ml are the corrected downsampled images of the overlap region of cameras l and m, Tn() and Tl() are the mapping curve functions of the n-th and l-th cameras, and Fmk() and Y'mk() are piecewise functions.
S80402. Fix the ordinates of the other anchor points and update the ordinates of the even-indexed anchor points of the optimal piecewise-linear mapping function of the m-th camera.
Compared with the prior art, the present invention has the following advantages and effects:
1. The prior art needs to process all color component channels of the surround-view bird's-eye views of the panoramic surround view system, whereas the present invention performs brightness equalization only on the luma (Y) component; the amount of computation is 1/3 of that of the prior art, which reduces the computational complexity of the algorithm.
2. The prior art uses an additive gain model, which has an explicit mathematical solution. The present invention combines the additive gain model with optimal piecewise-linear mapping functions, using the additive gain model for preprocessing and the optimal piecewise-linear mapping functions for post-processing, which further improves the brightness equalization effect and reduces the brightness differences in the overlap regions between the views of different cameras of the panoramic surround view system.
Brief Description of the Drawings
Fig. 1 is a flow chart of the brightness equalization method for a panoramic surround view system disclosed by the present invention;
Fig. 2 is a schematic diagram of the block-average downsampling disclosed by the present invention;
Fig. 3 is a schematic diagram of the piecewise-linear mapping function disclosed by the present invention.
Detailed Description of the Embodiments
To make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by a person of ordinary skill in the art without creative effort fall within the protection scope of the present invention.
Embodiment
This embodiment discloses a brightness equalization method for a panoramic surround view system which, as shown in Fig. 1, comprises the following steps:
S1. Image format conversion: check the format of the surround-view bird's-eye views from the P cameras of the panoramic surround view system. If the input bird's-eye views are not in YUV format, convert the P bird's-eye views to YUV by color-space conversion, obtaining for each camera m (m = 0, 1, ..., P-1) an input view Im with components Ym, Um, Vm; if the input bird's-eye views are already in YUV format, the input views Im and their Y, U, V components are used directly. In this embodiment, P is 4.
S2. Block-average downsampling: as shown in Fig. 2, only the luma (Y) component is considered. The image of the region where the input view of camera m overlaps the input view of the adjacent camera n is denoted Ωmn, where m is the camera index and n is the next camera index in the clockwise direction, n ≡ (m+1) mod P ("mod" denotes the modulo operation and "≡" denotes identity under that modulo). Likewise, the image of the region where the input view of camera n overlaps the input view of camera m is denoted Ωnm. For each pair of adjacent cameras, the overlap images Ωmn and Ωnm are partitioned into sub-blocks of N×N pixels and the mean of each sub-block is taken as the output for that sub-block, yielding downsampled images Φmn and Φnm composed of the sub-block means. In this embodiment, N is 4.
S3. Sample selection: select the inlier points with small differences and obtain a sample index set. Based on the differences between the corresponding pixel gray values of the downsampled images Φmn and Φnm of each pair of adjacent cameras, the coordinates of the inliers with small differences form the sample index set Smn:

Smn = {[i, j] | (Φmn[i, j] - Φnm[i, j])² < Th, [i, j] ∈ [1, Nmn] × [1, Mmn]}

where i and j are the horizontal and vertical image coordinates, [i, j] denotes an image coordinate, Φmn[i, j] and Φnm[i, j] are the pixel gray values of the downsampled images Φmn and Φnm at [i, j], Th is the inlier threshold, and Nmn and Mmn are the numbers of columns and rows of Φmn and Φnm. In this embodiment, Th is 500.
S4. Compute the optimal additive gains ĝm of the P cameras such that the mean square error between the corresponding inlier pixel gray values of the adjusted downsampled images Φmn and Φnm of the adjacent-camera overlap regions is minimized:

(ĝ0, ..., ĝP-1) = argmin over (g0, ..., gP-1) of Σ_m Σ_{[i,j]∈Smn} ((Φmn[i, j] + gm) - (Φnm[i, j] + gn))², with n ≡ (m+1) mod P

where gm is the additive gain of the m-th camera, ĝm is the corresponding optimal additive gain, and Smn is the sample index set of the inliers of the downsampled images Φmn and Φnm of the adjacent-camera overlap region.

The optimal additive gains ĝm of the P cameras are computed as follows:
S401. Select the camera input view with medium brightness as the reference view; the index ref of the reference view is computed as follows:

S40101. Compute the sample means Amn and Anm of the inliers of the downsampled images Φmn and Φnm:

Amn = (1/ρmn) Σ_{[i,j]∈Smn} Φmn[i, j],   Anm = (1/ρmn) Σ_{[i,j]∈Smn} Φnm[i, j]

where Φmn[i, j] and Φnm[i, j] are the pixel gray values of the downsampled images Φmn and Φnm at [i, j] and ρmn is the number of indices in the sample index set Smn.

S40102. Compute the mean overlap-region brightness Am of the Y component of the input view of the m-th camera:

Am = Amn + Aml

where n ≡ (m+1) mod P, l is the previous camera index in the clockwise direction, l ≡ (m-1) mod P, and Amn and Aml are the inlier sample means of the downsampled images Φmn and Φml of the two overlap regions of the m-th camera taken in the clockwise direction.

S40103. Compute the mean brightness A of the P cameras:

A = (1/P) Σ_{m=0}^{P-1} Am

S40104. Compute the camera index ref of the reference view from the per-camera brightness means Am and the overall mean A, and take the ref-th view as the reference view.
S402. Compute the sample mean difference dm of the inliers of the downsampled images Φmn and Φnm: dm = Amn - Anm.

S403. Compute the average d of the sample mean differences of all cameras:

d = (1/P) Σ_{m=0}^{P-1} dm

S404. Compute the optimal additive gains ĝm from the reference index ref and the sample mean differences dm, where ref is the camera index of the reference view and dm is the m-th sample mean difference.
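The embodiment does not reproduce the formulas for ref (S40104) and for the final gains (S404). The sketch below therefore makes two explicit assumptions: the reference camera is the one whose overlap brightness Am is closest to the overall mean A, and the gains are propagated around the camera ring from the reference using the centered seam differences (dm - d). Both are plausible readings of the text, not verified against the granted claims:

```python
import numpy as np

def optimal_additive_gains(a_mn, a_nm):
    """a_mn[m], a_nm[m]: inlier sample means of Phi_mn and Phi_nm for the seam between camera m and (m+1) mod P."""
    p = len(a_mn)
    a_m = [a_mn[m] + a_nm[(m - 1) % p] for m in range(p)]     # overlap brightness A_m of camera m
    a_bar = sum(a_m) / p                                      # overall mean brightness A
    ref = int(np.argmin([abs(x - a_bar) for x in a_m]))       # assumption: "medium brightness" = closest to A
    d = [a_mn[m] - a_nm[m] for m in range(p)]                 # seam sample-mean differences d_m
    d_bar = sum(d) / p
    gains = [0.0] * p                                         # assumption: the reference camera keeps gain 0
    m = ref
    for _ in range(p - 1):                                    # assumed ring propagation of the centered differences
        n = (m + 1) % p
        gains[n] = gains[m] + (d[m] - d_bar)
        m = n
    return gains, ref
```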
S5. Additive-gain brightness equalization preprocessing: the Y component of each camera's input view is offset by its optimal additive gain:

Ŷm[i, j] = Ym[i, j] + ĝm

where Ŷm is the output view obtained by preprocessing the Y component of the m-th input view with the optimal additive gain and ĝm is the optimal additive gain of the m-th camera.
S6. Correct the downsampled images: each downsampled image Φmn and Φnm is offset by the corresponding optimal additive gain:

Φ̂mn[i, j] = Φmn[i, j] + ĝm,   Φ̂nm[i, j] = Φnm[i, j] + ĝn

where Φ̂mn and Φ̂nm are the downsampled images Φmn and Φnm after correction by the optimal additive gains.
S7. Update the sample index set: using the method described in step S3, re-obtain the sample index set Ŝmn of the inliers of the corrected downsampled images Φ̂mn and Φ̂nm:

Ŝmn = {[i, j] | (Φ̂mn[i, j] - Φ̂nm[i, j])² < Th, [i, j] ∈ [1, Nmn] × [1, Mmn]}

where Φ̂mn[i, j] and Φ̂nm[i, j] are the pixel gray values of the corrected downsampled images Φ̂mn and Φ̂nm at coordinates [i, j].
S8. Compute the optimal mapping curve functions: the mapping curve function y = Tm(x) of the m-th camera maps an input pixel value x in [0, 255] to an output value y in [0, 255]. The optimal mapping curve functions Tm*() of the P cameras are obtained by minimizing an objective over the inliers of the corrected downsampled overlap images, where Tm() is the mapping curve function of the m-th camera, Tm*() is the corresponding optimal mapping curve function, n is the next camera index in the clockwise direction, β is a penalty factor that constrains the degree of deviation between the input and output values of the mapping curve function, and Φ̂mn and Φ̂nm are the corrected downsampled images of the adjacent-camera overlap region.

The optimal mapping curve functions y = Tm(x) of the P cameras are computed as follows:
S801. The mapping curve function of the m-th camera is represented by a piecewise-linear mapping function. As shown in Fig. 3, the piecewise-linear mapping function is defined by a set of d+1 anchor points; in this embodiment, d is 5. The k-th anchor point has coordinates (x_k^m, y_k^m), where k is the anchor index, k = 0, 1, ..., d, and the mapping curve function Tm() of the m-th camera can be expressed as linear interpolation between consecutive anchor points:

Tm(x) = y_k^m + (y_{k+1}^m - y_k^m)·(x - x_k^m)/(x_{k+1}^m - x_k^m),   when x_k^m ≤ x ≤ x_{k+1}^m, k = 0, 1, ..., d-1

where m is the camera index, x is the input pixel value and Tm(x) is the output mapped pixel value.
S802. The 0-th anchor point of the optimal piecewise-linear mapping function of the m-th camera is fixed, as is the d-th anchor point. From the distribution range of the inlier pixel gray values of the corrected downsampled images Φ̂mn and Φ̂ml, the minimum value x_min^m and maximum value x_max^m of the image pixel gray values are determined:

x_min^m = min( min over [i,j] ∈ Ŝmn of Φ̂mn[i, j], min over [i,j] ∈ Ŝlm of Φ̂ml[i, j] )
x_max^m = max( max over [i,j] ∈ Ŝmn of Φ̂mn[i, j], max over [i,j] ∈ Ŝlm of Φ̂ml[i, j] )

where m is the camera index, n is the next camera index in the clockwise direction, n ≡ (m+1) mod P, l is the previous camera index in the clockwise direction, l ≡ (m-1) mod P, Φ̂mn and Φ̂ml are the corrected downsampled images of the two overlap regions of the m-th camera taken in the clockwise direction, Ŝmn and Ŝlm are the sample index sets of the inliers of Φ̂mn and Φ̂ml, max() denotes the maximum and min() denotes the minimum.

The remaining anchor points x_k^m (k = 1, ..., d-1) are distributed evenly within the abscissa range [x_min^m, x_max^m], where x_k^m is the abscissa of the k-th anchor point of the m-th camera.
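A sketch of the anchor placement of step S802; spreading the interior anchors with np.linspace over [x_min, x_max] is an assumed reading of "distributed evenly", since the exact spacing formula is not reproduced in this text:

```python
import numpy as np

def place_interior_anchors(x_min, x_max, d=5):
    """Evenly spread the d-1 interior anchor abscissas x_1..x_{d-1} over [x_min, x_max] (assumed spacing)."""
    return np.linspace(x_min, x_max, d - 1)
```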
S803. Compute the sample index sets Ŝmn^k and Ŝml^k: the optimization of the ordinate y_k^m of the k-th anchor point of the m-th camera involves only those inliers of the corrected downsampled images Φ̂mn and Φ̂ml whose pixel gray values lie in the interval [x_{k-1}^m, x_{k+1}^m]. The coordinates of the inliers of Φ̂mn and Φ̂ml whose pixel gray values lie in this interval are selected to form the sample index sets Ŝmn^k and Ŝml^k, where k is the anchor index, k = 1, 2, ..., d-1, Φ̂mn[i, j] and Φ̂ml[i, j] are the pixel gray values of the corrected downsampled images at [i, j], x_{k-1}^m is the abscissa of the (k-1)-th anchor point of the m-th camera, and x_{k+1}^m is the abscissa of the (k+1)-th anchor point.
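A sketch of restricting the inlier set to the gray-value interval used when optimizing the k-th anchor (step S803); the closed interval [x_{k-1}, x_{k+1}] is an assumption consistent with the description:

```python
import numpy as np

def anchor_interval_mask(phi_hat, inlier_mask, x_prev, x_next):
    """Keep only inliers whose gray value lies in [x_{k-1}, x_{k+1}] for the k-th anchor update."""
    in_range = (phi_hat >= x_prev) & (phi_hat <= x_next)
    return inlier_mask & in_range
```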
S804. Iteratively solve, in order, for the optimal piecewise-linear mapping function of the m-th camera, m = 0, 1, ..., P-1.

The d-1 anchor ordinates y_1^m, ..., y_{d-1}^m of the optimal piecewise-linear mapping function of the m-th camera are solved iteratively as follows:
S80401. Fix the ordinates of the other anchor points and update the ordinates of the odd-indexed anchor points of the optimal piecewise-linear mapping function of the m-th camera.

The ordinate y_k^m of the k-th anchor point is updated by a closed-form expression in terms of intermediate variables A, B and C, where Φ̂mn and Φ̂nm are the corrected downsampled images of the overlap region of cameras m and n, Φ̂lm and Φ̂ml are the corrected downsampled images of the overlap region of cameras l and m, Tn() and Tl() are the mapping curve functions of the n-th and l-th cameras, and Fmk() and Y'mk() are piecewise functions.
S80402. Fix the ordinates of the other anchor points and update the ordinates of the even-indexed anchor points of the optimal piecewise-linear mapping function of the m-th camera.
S805. Repeat step S804 until the absolute change of every anchor ordinate of every camera between two successive iterations is smaller than the set threshold Tanchor; then stop iterating and output the piecewise-linear mapping functions determined by the anchor points as the optimal mapping curve functions. In this embodiment, Tanchor is 1.0e-6.
S9. Brightness equalization post-processing with the optimal mapping curve functions: the optimal-additive-gain preprocessed views Ŷm of the P cameras are post-processed through the optimal mapping curve functions to obtain the Y component of the output surround-view bird's-eye view:

Ym^out[i, j] = Tm*(Ŷm[i, j])
S10. Image format restoration: the Y, U and V components of the output surround-view bird's-eye views of the P cameras after brightness equalization are Ym^out, Um and Vm respectively. If the input bird's-eye views were not in YUV format, the brightness-equalized output bird's-eye views are converted back to the original input format by color-space conversion.
In summary, because the lighting conditions of the different cameras in a panoramic surround view system differ and each camera's auto exposure (AE) and auto white balance (AWB) parameters also differ, the synthesized surround view exhibits obvious seams between adjacent views, which degrades the visual quality. To address this problem of the prior art, this embodiment discloses a brightness equalization method for a panoramic surround view system. The method processes only the luma (Y) component of the input views: the overlap regions of adjacent camera views are block-average downsampled and inliers are selected, and the resulting inlier sample index sets are used to compute the optimal additive gains; the optimal additive gains are then used for brightness equalization preprocessing; finally, the preprocessed inlier samples are reselected and the optimal piecewise-linear mapping functions are computed for brightness equalization post-processing. The prior art applies an additive gain model to all color component channels and has an explicit mathematical solution; the present invention combines the additive gain model with optimal piecewise-linear mapping functions, using the additive gain model for preprocessing and the optimal piecewise-linear mapping functions for post-processing, which further improves the brightness equalization effect and the image stitching and fusion quality. Because only the luma component of the input views is processed, the amount of computation is 1/3 of that of the prior art, reducing the computational complexity.
The above embodiment is a preferred embodiment of the present invention, but the embodiments of the present invention are not limited thereto; any other changes, modifications, substitutions, combinations and simplifications made without departing from the spirit and principles of the present invention are equivalent replacements and are included within the protection scope of the present invention.
Claims (4)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110880913.7A CN113781317B (en) | 2021-08-02 | 2021-08-02 | A Brightness Equalization Method for Panoramic Surround View System |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110880913.7A CN113781317B (en) | 2021-08-02 | 2021-08-02 | A Brightness Equalization Method for Panoramic Surround View System |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113781317A true CN113781317A (en) | 2021-12-10 |
CN113781317B CN113781317B (en) | 2023-08-18 |
Family
ID=78836472
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110880913.7A Active CN113781317B (en) | 2021-08-02 | 2021-08-02 | A Brightness Equalization Method for Panoramic Surround View System |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113781317B (en) |
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20060177150A1 (en) * | 2005-02-01 | 2006-08-10 | Microsoft Corporation | Method and system for combining multiple exposure images having scene and camera motion |
US20090231447A1 (en) * | 2008-03-12 | 2009-09-17 | Chung-Ang University Industry-Academic Cooperation Foundation | Apparatus and method for generating panorama images and apparatus and method for object-tracking using the same |
CN109166076A (en) * | 2018-08-10 | 2019-01-08 | 深圳岚锋创视网络科技有限公司 | Luminance regulating method, device and the portable terminal of polyphaser splicing |
Non-Patent Citations (1)
Title |
---|
Fan Xiang; Xia Shunren: "Fully automatic feature-based stitching of microscopic images", Journal of Zhejiang University (Engineering Science) *
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117237237A (en) * | 2023-11-13 | 2023-12-15 | 深圳元戎启行科技有限公司 | Luminosity balancing method and device for vehicle-mounted 360-degree panoramic image |
CN118918198A (en) * | 2024-10-11 | 2024-11-08 | 深圳市云希谷科技有限公司 | Photo preview method, electronic device and storage medium |
CN118918198B (en) * | 2024-10-11 | 2024-12-06 | 深圳市云希谷科技有限公司 | Photo preview method, electronic device and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN113781317B (en) | 2023-08-18 |
Legal Events
Code | Title
---|---
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant