CN113781317A - Brightness equalization method of panoramic all-around system - Google Patents

Brightness equalization method of panoramic all-around system

Info

Publication number
CN113781317A
CN113781317A
Authority
CN
China
Prior art keywords
optimal
camera
cameras
value
view
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110880913.7A
Other languages
Chinese (zh)
Other versions
CN113781317B (en)
Inventor
林耀荣
曾赞云
郑晓雯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South China University of Technology SCUT
Original Assignee
South China University of Technology SCUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South China University of Technology SCUT
Priority to CN202110880913.7A
Publication of CN113781317A
Application granted
Publication of CN113781317B
Legal status: Active
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/90 Dynamic range modification of images or parts thereof
    • G06T5/92 Dynamic range modification of images or parts thereof based on global image properties
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30248 Vehicle exterior or interior
    • G06T2207/30252 Vehicle exterior; Vicinity of vehicle

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a brightness equalization method for a panoramic surround-view system, which comprises the following steps: first, only the luminance Y component of the input views is processed; block average down-sampling is applied to the overlapping areas of adjacent camera views, interior points are selected, and the resulting interior-point sample index sets are used to calculate the optimal additive gains; next, brightness equalization pre-processing is performed with the optimal additive gains; finally, the interior-point samples are re-selected after pre-processing, the optimal linear piecewise mapping functions are calculated, and brightness equalization post-processing is performed. The method combines an additive gain model with an optimal linear piecewise mapping function and processes only the luminance component of the input views, which reduces the computational complexity and improves the image stitching and fusion effect.

Description

Brightness equalization method of panoramic all-around system
Technical Field
The invention relates to the technical field of digital image processing, and in particular to a brightness equalization method for a panoramic surround-view system.
Background
In recent years, with the increase in the number of automobiles, parking spaces have become scarce and narrow, and when a driver reverses into a space or parks parallel to the curb, the blind areas of the left and right rear-view mirrors make parking difficult. A panoramic surround-view system can display a bird's-eye view of the environment around the vehicle body, provide the driver with road-surface information around the vehicle, assist the driver's driving judgment, and improve the safety of the vehicle during parking and in complex environments.
Because the illumination conditions of the different cameras in a panoramic surround-view system differ, and the Automatic Exposure (AE) and Automatic White Balance (AWB) parameters of each camera also differ, the synthesized surround view shows obvious boundaries between adjacent views, which degrades the visual effect and in turn affects the driver's judgment of the road conditions.
Disclosure of Invention
The invention aims to overcome the defects of the prior art by providing a brightness equalization method for a panoramic surround-view system that reduces the computational complexity of the brightness equalization algorithm and facilitates real-time implementation on an embedded system.
The purpose of the invention is achieved by the following technical solution:
A brightness equalization method for a panoramic surround view system, said brightness equalization method comprising the steps of:
S1, image format conversion: checking the format of the input surround-view images of the P cameras of the panoramic surround-view system; if they are not in YUV format, performing color-space conversion on the P input views to convert them into YUV format, obtaining for the input view I_m of the m-th camera the corresponding Y, U, V components Y_m, U_m, V_m, where m is the camera index, m = 0, 1, ..., P-1; if the input surround-view images are already in YUV format, directly obtaining the input view I_m and the corresponding Y, U, V components Y_m, U_m, V_m;
S2, block average down-sampling: considering only the luminance Y component, the overlapping area of the input view Y_m of camera m with the input view Y_n of the neighboring camera n is recorded as Ω_mn, where n is the next camera index in the clockwise direction, n ≡ (m+1) mod P, the symbol "mod" represents the modulo operation and the symbol "≡" represents congruence under the modulo operation; the overlapping area of the input view Y_n of camera n with the input view Y_m of the neighboring camera m is recorded as Ω_nm; each pair of overlapping-area images Ω_mn and Ω_nm of adjacent cameras is partitioned into sub-blocks of N × N pixels, the average value of each sub-block is taken as the output of the corresponding sub-block, and the down-sampled images Φ_mn and Φ_nm composed of the sub-block averages are obtained;
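As a concrete illustration of the block average down-sampling in S2, the following is a minimal sketch in Python/NumPy, assuming the overlap region has already been warped and extracted as a 2-D luminance array; the function name and the handling of border blocks are illustrative choices, not taken from the patent (the embodiment below uses N = 4).

```python
import numpy as np

def block_average_downsample(region: np.ndarray, n: int = 4) -> np.ndarray:
    """Replace every n x n sub-block of a 2-D luminance region by its mean (step S2)."""
    h, w = region.shape
    h, w = h - h % n, w - w % n                    # drop incomplete border blocks (assumption)
    blocks = region[:h, :w].astype(np.float64).reshape(h // n, n, w // n, n)
    return blocks.mean(axis=(1, 3))                # one average value per sub-block
```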
S3, sample selection: selecting the interior points with small difference and obtaining a sample index set; from the down-sampled images Φ_mn and Φ_nm of each overlapping area of adjacent cameras, the coordinates of the interior points with small difference are selected to form the sample index set S_mn:
S_mn = { [i, j] | (Φ_mn[i, j] - Φ_nm[i, j])² < Th, [i, j] ∈ [1, N_mn] × [1, M_mn] }
wherein i, j represent the abscissa and ordinate of the image respectively, [i, j] represents a coordinate of the image, Φ_mn[i, j] represents the pixel gray value of the down-sampled image Φ_mn at coordinate [i, j], Φ_nm[i, j] represents the pixel gray value of the down-sampled image Φ_nm at coordinate [i, j], Th represents the interior-point threshold, and N_mn and M_mn represent the numbers of columns and rows of the down-sampled images Φ_mn and Φ_nm, respectively;
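A short sketch of the interior-point selection in S3 under the same NumPy assumptions; the boolean mask returned here plays the role of the sample index set S_mn, and the default threshold follows the embodiment value Th = 500.

```python
import numpy as np

def select_inliers(phi_mn: np.ndarray, phi_nm: np.ndarray, th: float = 500.0) -> np.ndarray:
    """Boolean mask of coordinates whose squared difference is below Th (step S3)."""
    diff_sq = (phi_mn.astype(np.float64) - phi_nm.astype(np.float64)) ** 2
    return diff_sq < th                            # True where [i, j] belongs to S_mn
```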
s4, calculating the optimal additive gains of the P cameras
Figure BDA0003191976740000028
Down-sampled image phi of adjusted adjacent camera overlap regionmn and ΦnmThe mean square error of the corresponding interior point pixel gray value is the minimum, and the following formula is satisfied:
Figure BDA0003191976740000031
wherein ,gmFor the additive gain of the m-th camera,
Figure BDA0003191976740000032
represents the optimal additive gain for the mth camera;
s5, performing brightness equalization preprocessing by an additive gain model, and performing brightness equalization preprocessing by adding the optimal additive gain to the Y component of the input view of each camera, wherein the formula is as follows:
Figure BDA0003191976740000033
wherein ,
Figure BDA0003191976740000034
an output view which represents the Y component of the mth input view after the optimal additive gain preprocessing;
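The pre-processing in S5 is a single addition per pixel; a sketch with NumPy follows, where the clipping to the 8-bit range is an assumption rather than something stated in the patent.

```python
import numpy as np

def preprocess_luminance(y_m: np.ndarray, optimal_gain: float) -> np.ndarray:
    """Add the camera's optimal additive gain to its Y plane (step S5)."""
    return np.clip(y_m.astype(np.float64) + optimal_gain, 0.0, 255.0)  # clipping assumed
```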
s6, modifying the downsampled image, each downsampled image phimn and ΦnmRespectively adding the corresponding optimal additive gains, wherein the formula is as follows:
Figure BDA0003191976740000035
Figure BDA0003191976740000036
wherein ,
Figure BDA0003191976740000037
and
Figure BDA0003191976740000038
respectively representing down-sampled images phimn and ΦnmThe down-sampled image is corrected by the optimal additive gain;
s7, updating the sample index set, and obtaining the modified down-sampled image of the overlapping area of the adjacent cameras again by using the method in the step S3
Figure BDA0003191976740000039
And
Figure BDA00031919767400000310
sample index set of inliers of
Figure BDA00031919767400000311
The following formula is satisfied:
Figure BDA00031919767400000312
wherein ,
Figure BDA00031919767400000313
representing modified downsampled images
Figure BDA00031919767400000314
Corresponding coordinate [ i, j]The gray value of the pixel of (a),
Figure BDA00031919767400000315
representing modified downsampled images
Figure BDA00031919767400000316
Corresponding coordinate [ i, j]Pixel gray scale value of (a);
s8, calculating the optimal mapping curve function, wherein the mapping curve function y of the mth camera is Tm(x) Will [0,255]The input pixel value x in between maps to 0,255]Output value y between, optimal mapping curve function T of P camerasm() Obtained by optimizing the following formula:
Figure BDA0003191976740000041
wherein ,Tm() A mapping curve function representing the mth camera,
Figure BDA0003191976740000042
representing a corresponding optimal mapping curve function, wherein beta is a penalty factor and restricts the deviation degree of an input value and an output value of the mapping curve function;
s9, brightness equalization post-processing of the optimal mapping curve function, and view preprocessing of the optimal additive gain of the P cameras
Figure BDA0003191976740000043
Performing brightness equalization post-processing through the optimal mapping curve function to obtain a Y component of the output all-round aerial view
Figure BDA0003191976740000044
The formula is as follows:
Figure BDA0003191976740000045
finally, Y, U, V components of the output circular bird's-eye view image of the P cameras after brightness equalization processing are respectively obtained
Figure BDA0003191976740000046
Further, after the Y component Y^O_m of the output surround-view bird's-eye view is obtained through the brightness equalization post-processing by the optimal mapping curve function in step S9, the brightness equalization method further comprises the following step:
S10, restoring the image format: if the input surround-view images are not in YUV format, performing color-space conversion on the brightness-equalized output surround-view images to convert them back into the original input format.
Further, the calculation process of the optimal mapping curve function in step S8 is as follows:
S801, the mapping curve function of the m-th camera is expressed by a linear piecewise mapping function defined by a group of d + 1 anchor points, the k-th anchor point having the coordinates (x_{m,k}, y_{m,k}), where k is the anchor index, k = 0, 1, ..., d; the mapping curve function T_m() of the m-th camera can be expressed as:
T_m(x) = y_{m,k} + (y_{m,k+1} - y_{m,k}) · (x - x_{m,k}) / (x_{m,k+1} - x_{m,k}),  for x_{m,k} ≤ x ≤ x_{m,k+1}
where x represents the input pixel value and T_m(x) represents the mapped output pixel value of the mapping curve function;
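Evaluating such a linear piecewise mapping function is plain anchor-to-anchor interpolation; a sketch with NumPy is shown below, where np.interp implements exactly this interpolation provided the anchor abscissas are sorted (the clipping is an added safeguard, not part of the patent text).

```python
import numpy as np

def apply_mapping_curve(y_values: np.ndarray, anchor_x: np.ndarray, anchor_y: np.ndarray) -> np.ndarray:
    """Evaluate the piecewise-linear mapping T_m() defined by its d+1 anchor points (step S801)."""
    mapped = np.interp(y_values.astype(np.float64), anchor_x, anchor_y)
    return np.clip(mapped, 0.0, 255.0)             # keep the output in the 8-bit range
```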
s802, fixing the 0 th anchor point of the optimal linear piecewise mapping function of the mth camera
Figure BDA0003191976740000052
The d anchor point is fixed as
Figure BDA0003191976740000053
Down-sampling images based on correction
Figure BDA0003191976740000054
And
Figure BDA0003191976740000055
determining the minimum value and the maximum value of the image pixel gray value to obtain the abscissa range of other anchor points as
Figure BDA0003191976740000056
wherein
Figure BDA0003191976740000057
And
Figure BDA0003191976740000058
is determined by the following formula:
Figure BDA0003191976740000059
where l is the previous camera index in the clockwise direction, l ≡ (m-1) mod P,
Figure BDA00031919767400000510
and
Figure BDA00031919767400000511
modified down-sampled images corresponding to two overlapping regions of the mth camera in the clockwise direction respectively,
Figure BDA00031919767400000512
and
Figure BDA00031919767400000513
are respectively as
Figure BDA00031919767400000514
And
Figure BDA00031919767400000515
max () represents the maximum value taking operation, min () represents the minimum value taking operation;
other anchor points
Figure BDA00031919767400000516
Uniformly distributed in the range of abscissa
Figure BDA00031919767400000517
And (d) is determined by the following formula:
Figure BDA00031919767400000518
wherein ,
Figure BDA00031919767400000519
is the abscissa of the kth anchor point of the mth camera;
s803, calculating a sample index set
Figure BDA00031919767400000520
And
Figure BDA00031919767400000521
kth anchor point ordinate of mth camera
Figure BDA00031919767400000522
Only with respect to the grey value of the pixel
Figure BDA00031919767400000523
Modified downsampled image of interval
Figure BDA00031919767400000524
And
Figure BDA00031919767400000525
the inner point of (a); selecting the gray value of the pixel at
Figure BDA00031919767400000526
Modified downsampled image of interval
Figure BDA00031919767400000527
And
Figure BDA00031919767400000528
the coordinates of the inliers of (a) constitute a sample index set
Figure BDA00031919767400000529
And
Figure BDA00031919767400000530
Figure BDA00031919767400000531
Figure BDA00031919767400000532
wherein ,
Figure BDA0003191976740000061
representing modified downsampled images
Figure BDA0003191976740000062
Corresponding coordinate [ i, j]The gray value of the pixel of (a),
Figure BDA0003191976740000063
and
Figure BDA0003191976740000064
are respectively as
Figure BDA0003191976740000065
And
Figure BDA0003191976740000066
is determined by the sample index set of the inliers,
Figure BDA0003191976740000067
the abscissa representing the anchor point of the (k-1) th camera,
Figure BDA0003191976740000068
represents the abscissa of the (k +1) th anchor point;
s804, sequentially and iteratively solving d-1 anchor point vertical coordinates of the optimal linear piecewise mapping function of the mth camera in sequence
Figure BDA0003191976740000069
Obtaining an optimal linear piecewise mapping function of the mth camera;
s805, the step S804 is repeatedly executed until the absolute value of the iteration variation of the front and back two times of all the anchor point vertical coordinates of all the cameras is smaller than the set threshold value TanchorAnd stopping iteration, and outputting the optimal linear piecewise mapping function determined by the anchor point as an optimal mapping curve function.
Further, in step S804 the d-1 anchor-point ordinates y_{m,1}, ..., y_{m,d-1} of the optimal linear piecewise mapping function of the m-th camera are iteratively solved in sequence to obtain the optimal linear piecewise mapping function of the m-th camera, as follows:
S80401, with the ordinates of the other anchor points fixed, the ordinates of the odd-indexed anchor points y_{m,k}, k = 1, 3, 5, ..., of the optimal linear piecewise mapping function of the m-th camera are updated; the updated ordinate y_{m,k} of the k-th anchor point is given by a closed-form expression in terms of intermediate calculation variables A, B and C, which are sums over the interior-point samples in Ŝ_{mn,k} and Ŝ_{ml,k} involving the corrected down-sampled images Φ̂_mn and Φ̂_nm of the overlapping area of cameras m and n, the corrected down-sampled images Φ̂_lm and Φ̂_ml of the overlapping area of cameras l and m, the mapping curve functions T_n() and T_l() of the cameras n and l, and the piecewise functions F_mk() and Y'_mk() (the closed-form expressions are not reproduced here);
S80402, with the ordinates of the other anchor points fixed, the ordinates of the even-indexed anchor points y_{m,k}, k = 2, 4, 6, ..., of the optimal linear piecewise mapping function of the m-th camera are updated in the same way.
Compared with the prior art, the invention has the following advantages and effects:
1. The prior art needs to process all color-component channels of the surround-view bird's-eye view of the panoramic surround-view system; the invention performs brightness equalization only on the luminance Y component, so the amount of computation is 1/3 of that of the prior art and the computational complexity of the algorithm is reduced.
2. The method combines the additive gain model with the optimal linear piecewise mapping function: pre-processing is performed with the additive gain model and post-processing with the optimal linear piecewise mapping function, which further improves the brightness equalization effect and reduces the brightness differences in the overlapping areas of the views of different cameras of the panoramic surround-view system.
Drawings
FIG. 1 is a flow chart illustrating the steps of a brightness equalization method for a panoramic surround view system;
FIG. 2 is a block average downsampling schematic diagram disclosed in the present invention;
FIG. 3 is a schematic diagram of a linear piecewise mapping function disclosed in the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Examples
The embodiment discloses a brightness equalization method of a panoramic surround-view system, as shown in FIG. 1, comprising the following steps:
S1, image format conversion: checking the format of the input surround-view images of the P cameras of the panoramic surround-view system; if they are not in YUV format, performing color-space conversion on the P input views to convert them into YUV format, obtaining for the input view I_m of the m-th camera the corresponding Y, U, V components Y_m, U_m, V_m, where m is the camera index, m = 0, 1, ..., P-1; if the input surround-view images are already in YUV format, directly obtaining the input view I_m and the corresponding Y, U, V components Y_m, U_m, V_m. In this embodiment, P = 4.
S2, block average down-sampling: as shown in FIG. 2, considering only the luminance Y component, the overlapping area of the input view Y_m of camera m with the input view Y_n of the neighboring camera n is recorded as Ω_mn, where m is the camera index, n is the next camera index in the clockwise direction, n ≡ (m+1) mod P, the symbol "mod" represents the modulo operation and the symbol "≡" represents congruence under the modulo operation; the overlapping area of the input view Y_n of camera n with the input view Y_m of the neighboring camera m is recorded as Ω_nm; each pair of overlapping-area images Ω_mn and Ω_nm of adjacent cameras is partitioned into sub-blocks of N × N pixels, the average value of each sub-block is taken as the output of the corresponding sub-block, and the down-sampled images Φ_mn and Φ_nm composed of the sub-block averages are obtained. In this embodiment, N = 4.
S3, sample selection: selecting the interior points with small difference and obtaining a sample index set; from the down-sampled images Φ_mn and Φ_nm of each overlapping area of adjacent cameras, the coordinates of the interior points with small difference are selected to form the sample index set S_mn:
S_mn = { [i, j] | (Φ_mn[i, j] - Φ_nm[i, j])² < Th, [i, j] ∈ [1, N_mn] × [1, M_mn] }
wherein i, j represent the abscissa and ordinate of the image respectively, [i, j] represents a coordinate of the image, Φ_mn[i, j] and Φ_nm[i, j] represent the pixel gray values of the down-sampled images Φ_mn and Φ_nm at coordinate [i, j], Th represents the interior-point threshold, and N_mn and M_mn represent the numbers of columns and rows of the down-sampled images Φ_mn and Φ_nm, respectively. In this embodiment, Th = 500.
S4, calculating the optimal additive gains g*_0, ..., g*_{P-1} of the P cameras, so that the mean square error of the interior-point pixel gray values of the gain-adjusted down-sampled images Φ_mn and Φ_nm of the adjacent-camera overlapping areas is minimized, i.e. the following formula is satisfied:
{g*_0, ..., g*_{P-1}} = argmin_{g_0, ..., g_{P-1}} Σ_{m=0}^{P-1} Σ_{[i,j]∈S_mn} ( (Φ_mn[i, j] + g_m) - (Φ_nm[i, j] + g_n) )²
wherein g_m is the additive gain of the m-th camera, g*_m represents the corresponding optimal additive gain, and S_mn represents the sample index set of the interior points of the down-sampled images Φ_mn and Φ_nm of the adjacent-camera overlapping area;
the calculation flow of the optimal additive gains g*_0, ..., g*_{P-1} of the P cameras is as follows:
S401, selecting a camera input view with medium brightness as the reference view; the index ref of the reference view is calculated as follows:
S40101, calculating the interior-point sample means A_mn and A_nm of the down-sampled images Φ_mn and Φ_nm:
A_mn = (1 / ρ_mn) Σ_{[i,j]∈S_mn} Φ_mn[i, j]
A_nm = (1 / ρ_mn) Σ_{[i,j]∈S_mn} Φ_nm[i, j]
wherein Φ_mn[i, j] and Φ_nm[i, j] represent the pixel gray values of the down-sampled images Φ_mn and Φ_nm at coordinate [i, j], and ρ_mn represents the number of indexes in the sample index set S_mn;
S40102, calculating the overlapping-area brightness mean A_m of the input-view Y component Y_m of the m-th camera:
A_m = A_mn + A_ml
where n ≡ (m+1) mod P, l is the previous camera index in the clockwise direction, l ≡ (m-1) mod P, and A_mn and A_ml are the interior-point sample means of the down-sampled images Φ_mn and Φ_ml corresponding to the two overlapping areas of the m-th camera in the clockwise direction;
S40103, calculating the brightness mean A of the P cameras:
A = (1/P) Σ_{m=0}^{P-1} A_m
S40104, calculating the camera index ref of the reference view as the index of the camera whose overlapping-area brightness mean is closest to the overall mean:
ref = argmin_m | A_m - A |
and taking the ref-th view as the reference view;
S402, calculating the interior-point sample mean difference d_m of the down-sampled images Φ_mn and Φ_nm: d_m = A_mn - A_nm;
S403, calculating the average d of the sample mean differences of all the cameras:
d = (1/P) Σ_{m=0}^{P-1} d_m
S404, calculating the optimal additive gain g*_m from the reference index ref, the mean differences d_m and their average d (the closed-form expression is not reproduced here), where ref is the camera index of the reference view and d_m is the m-th sample mean difference;
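The patent derives the optimal additive gains through the closed-form flow S401-S404 above, whose final expression is not reproduced here. As a hedged alternative that targets the same least-squares objective of S4, the sketch below assembles and solves the normal equations directly with NumPy, pinning the reference camera's gain to zero to remove the common-offset ambiguity; the dictionary layout of the inputs and the pinning strategy are assumptions, not the patent's own formulation.

```python
import numpy as np

def solve_additive_gains(phi, inliers, num_cameras, ref=0):
    """Least-squares additive gains for a ring of cameras (objective of step S4).

    phi         : dict {(m, n): down-sampled overlap image of camera m with neighbour n}
    inliers     : dict {m: boolean inlier mask for the seam between camera m and (m+1) % P}
    num_cameras : number of cameras P
    ref         : reference camera whose gain is fixed to 0 (assumption)
    """
    H = np.zeros((num_cameras, num_cameras))
    b = np.zeros(num_cameras)
    for m in range(num_cameras):
        n = (m + 1) % num_cameras
        mask = inliers[m]
        rho = int(mask.sum())                       # number of inlier samples on this seam
        if rho == 0:
            continue
        diff = float((phi[(m, n)][mask] - phi[(n, m)][mask]).sum())
        # seam term ((phi_mn + g_m) - (phi_nm + g_n))^2 contributes to rows m and n
        H[m, m] += rho; H[m, n] -= rho; b[m] -= diff
        H[n, n] += rho; H[n, m] -= rho; b[n] += diff
    H[ref, :] = 0.0; H[ref, ref] = 1.0; b[ref] = 0.0  # pin g_ref = 0
    return np.linalg.solve(H, b)
```

With the gains in hand, S5 reduces to adding g*_m to each camera's Y plane.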
s5, performing brightness equalization preprocessing by an additive gain model, and performing brightness equalization preprocessing by adding the optimal additive gain to the Y component of the input view of each camera, wherein the formula is as follows:
Figure BDA0003191976740000108
wherein ,
Figure BDA0003191976740000111
an output view representing the m-th input view after the Y component is subjected to optimal additive gain preprocessing,
Figure BDA0003191976740000112
represents the optimal additive gain for the mth camera;
s6, modifying the downsampled image, each downsampled image phimn and ΦnmRespectively adding the corresponding optimal additive gains, wherein the formula is as follows:
Figure BDA0003191976740000113
Figure BDA0003191976740000114
wherein ,
Figure BDA0003191976740000115
and
Figure BDA0003191976740000116
respectively representing down-sampled images phimn and ΦnmThe down-sampled image is corrected by the optimal additive gain;
s7, updating the sample index set; in step S3Method described for retrieving a modified downsampled image
Figure BDA0003191976740000117
And
Figure BDA0003191976740000118
sample index set of inliers of
Figure BDA0003191976740000119
The following formula is satisfied:
Figure BDA00031919767400001110
wherein ,
Figure BDA00031919767400001111
representing modified downsampled images
Figure BDA00031919767400001112
Corresponding coordinate [ i, j]The gray value of the pixel of (a),
Figure BDA00031919767400001113
representing modified downsampled images
Figure BDA00031919767400001114
Corresponding coordinate [ i, j]Pixel gray scale value of (a);
s8, calculating the optimal mapping curve function, wherein the mapping curve function y of the mth camera is Tm(x) Will [0,255]The input pixel value x in between maps to 0,255]Output value y between, optimal mapping curve function T of P camerasm() Obtained by optimizing the following formula:
Figure BDA00031919767400001115
wherein ,Tm() A mapping curve function representing the mth camera,
Figure BDA00031919767400001116
representing the corresponding optimal mapping curve function, n being the next camera index in the clockwise direction, β being a penalty factor, constraining the degree of deviation of the input and output values of the mapping curve function,
Figure BDA00031919767400001117
and
Figure BDA00031919767400001118
a modified downsampled image for an overlapping region of adjacent cameras;
optimal mapping curve function y ═ T of P camerasm(x) The calculation flow of (2) is as follows:
s801, the mapping curve function of the mth camera is expressed by using a linear piecewise mapping function, as shown in fig. 3, the linear piecewise mapping function is defined by using a group of anchor points, d +1 anchor points are total, in this embodiment, d is 5, and the coordinate of the kth anchor point is
Figure BDA0003191976740000121
k is anchor point index, k is 0,1m() Can be expressed as:
Figure BDA0003191976740000122
when in use
Figure BDA0003191976740000123
Where m denotes a camera index, x denotes an input pixel value, Tm(x) An output mapped pixel value representing a mapping curve function;
s802, fixing the 0 th anchor point of the optimal linear piecewise mapping function of the mth camera
Figure BDA0003191976740000124
The d anchor point is fixed as
Figure BDA0003191976740000125
Down-sampling images based on correction
Figure BDA0003191976740000126
And
Figure BDA0003191976740000127
the distribution range of the pixel gray value of the inner point, the minimum value and the maximum value of the pixel gray value of the image are determined, and the abscissa ranges of other anchor points can be obtained as
Figure BDA0003191976740000128
wherein
Figure BDA0003191976740000129
And
Figure BDA00031919767400001210
is determined by the following formula:
Figure BDA00031919767400001211
where m is the camera index, n is the next camera index in the clockwise direction, n ≡ (m +1) modP, l is the previous camera index in the clockwise direction, l ≡ (m-1) mod P,
Figure BDA00031919767400001212
and
Figure BDA00031919767400001213
modified down-sampled images corresponding to two overlapping regions of the mth camera in the clockwise direction respectively,
Figure BDA00031919767400001214
and
Figure BDA00031919767400001215
are respectively as
Figure BDA00031919767400001216
And
Figure BDA00031919767400001217
max () represents the maximum value taking operation, min () represents the minimum value taking operation;
other anchor points
Figure BDA00031919767400001218
Uniformly distributed in the range of abscissa
Figure BDA00031919767400001219
And (d) is determined by the following formula:
Figure BDA00031919767400001220
wherein ,
Figure BDA00031919767400001221
is the abscissa of the kth anchor point of the mth camera;
s803, calculating a sample index set
Figure BDA0003191976740000131
And
Figure BDA0003191976740000132
kth anchor point ordinate of mth camera
Figure BDA0003191976740000133
Only with respect to the grey value of the pixel
Figure BDA0003191976740000134
Modified downsampled image of interval
Figure BDA0003191976740000135
And
Figure BDA0003191976740000136
the inner point of (a); selecting the gray value of the pixel at
Figure BDA0003191976740000137
Modified downsampled image of interval
Figure BDA0003191976740000138
And
Figure BDA0003191976740000139
the coordinates of the inliers of (a) constitute a sample index set
Figure BDA00031919767400001310
And
Figure BDA00031919767400001311
Figure BDA00031919767400001312
Figure BDA00031919767400001313
where k is the anchor index, k is 1, 2., d-1,
Figure BDA00031919767400001314
representing modified downsampled images
Figure BDA00031919767400001315
Corresponding coordinate [ i, j]The gray value of the pixel of (a),
Figure BDA00031919767400001316
representing modified downsampled images
Figure BDA00031919767400001317
Corresponding coordinate [ i, j]The gray value of the pixel of (a),
Figure BDA00031919767400001318
and
Figure BDA00031919767400001319
are respectively as
Figure BDA00031919767400001320
And
Figure BDA00031919767400001321
is determined by the sample index set of the inliers,
Figure BDA00031919767400001322
the abscissa representing the anchor point of the (k-1) th camera,
Figure BDA00031919767400001323
represents the abscissa of the (k +1) th anchor point;
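A sketch of the anchor-abscissa placement in S802 above with NumPy; the pinned endpoint values (0 and 255) and the use of np.linspace to spread the d-1 interior anchors evenly over the inlier grey-value range are assumptions where the original spacing formula is given only as an equation image.

```python
import numpy as np

def place_anchor_abscissas(phi_mn_hat, phi_ml_hat, s_mn, s_lm, d=5):
    """Anchor x-coordinates of camera m's piecewise-linear mapping (step S802)."""
    inlier_vals = np.concatenate([phi_mn_hat[s_mn].ravel(), phi_ml_hat[s_lm].ravel()])
    x_min, x_max = float(inlier_vals.min()), float(inlier_vals.max())
    xs = np.empty(d + 1)
    xs[0], xs[d] = 0.0, 255.0                       # fixed first and last anchors (assumed values)
    xs[1:d] = np.linspace(x_min, x_max, d - 1)      # interior anchors spread uniformly
    return xs                                       # assumes 0 < x_min <= x_max < 255
```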
s804, sequentially and iteratively solving an optimal linear piecewise mapping function of the mth camera in sequence, where m is 0, 1.
Iterative solution of d-1 anchor point ordinates of optimal linear piecewise mapping function of mth camera
Figure BDA00031919767400001324
The process is as follows:
s80401, fixing the vertical coordinates of other anchor points, and updating the vertical coordinates of the anchor points of the odd array of the optimal linear piecewise mapping function of the mth camera
Figure BDA00031919767400001325
Ordinate of the kth anchor point
Figure BDA00031919767400001326
The updated calculation formula of (2) is as follows:
Figure BDA00031919767400001327
wherein A, B, C are intermediate calculation variables, and are defined as follows:
Figure BDA00031919767400001328
Figure BDA0003191976740000141
wherein ,
Figure BDA0003191976740000142
and
Figure BDA0003191976740000143
for a modified down-sampled image of the overlapping region of cameras m and n,
Figure BDA0003191976740000144
and
Figure BDA0003191976740000145
for modified downsampled images of the overlapping region of cameras l and m, Tn() and Tl() The mapping curve function of the n and l cameras, function Fmk() and Y’mk() For the piecewise function, the following is defined:
Figure BDA0003191976740000146
Figure BDA0003191976740000147
s80402, fixing the vertical coordinates of other anchor points, and updating the vertical coordinates of the anchor points of the even group of the optimal linear piecewise mapping function of the mth camera
Figure BDA0003191976740000148
S805, the step S804 is repeatedly executed until the absolute value of the iteration variation of the front and back two times of all the anchor point vertical coordinates of all the cameras is smaller than the set threshold value TanchorStopping the iteration and outputting the optimum determined by the anchor pointTaking the linear piecewise mapping function as an optimal mapping curve function; in the present embodiment, TanchorTaking 1.0 e-6;
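The closed-form odd/even anchor updates of S80401-S80402 are not reproduced above, so the sketch below takes a numerical route to the same coordinate descent: it evaluates an S8-style seam objective (under the reconstruction used in this text) and updates one anchor ordinate at a time by an exact parabola fit, which is valid because the objective is quadratic in each ordinate. Function names, the data layout, the default beta and the clipping are assumptions, not the patent's formulation.

```python
import numpy as np

def seam_objective(anchors, phi_hat, inlier_sets, beta=0.1):
    """Evaluate the S8-style objective for the current anchor sets of all P cameras."""
    P = len(anchors)
    total = 0.0
    for m in range(P):
        n = (m + 1) % P
        mask = inlier_sets[m]
        a = phi_hat[(m, n)][mask].astype(np.float64)
        b = phi_hat[(n, m)][mask].astype(np.float64)
        ta = np.interp(a, *anchors[m])              # T_m applied to camera m's seam samples
        tb = np.interp(b, *anchors[n])              # T_n applied to camera n's seam samples
        total += np.sum((ta - tb) ** 2 + beta * ((ta - a) ** 2 + (tb - b) ** 2))
    return total

def update_ordinate(anchors, m, k, objective, step=1.0):
    """One coordinate-descent step on anchor ordinate y_{m,k} via an exact parabola fit."""
    xs, ys = anchors[m]
    y0 = ys[k]
    values = []
    for delta in (-step, 0.0, step):                # three probes of the quadratic objective
        ys[k] = y0 + delta
        values.append(objective(anchors))
    ys[k] = y0
    curvature = values[0] - 2.0 * values[1] + values[2]
    if curvature > 1e-12:                           # proper convex parabola: jump to its vertex
        ys[k] = float(np.clip(y0 + step * (values[0] - values[2]) / (2.0 * curvature), 0.0, 255.0))
```

In use, `objective` would be, for example, `lambda a: seam_objective(a, phi_hat, inlier_sets, beta)`; sweeping k over the odd indices, then the even indices, for every camera, and repeating until the largest ordinate change falls below T_anchor mirrors the S804-S805 loop.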
s9, performing brightness equalization post-processing on the optimal mapping curve function; optimal additive gain pre-processing view for P cameras
Figure BDA0003191976740000149
Performing brightness equalization post-processing through the optimal mapping curve function to obtain a Y component of the output all-round aerial view
Figure BDA00031919767400001410
The formula is as follows:
Figure BDA00031919767400001411
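A short usage sketch of S9 with NumPy: each camera's pre-processed Y plane is passed through its own optimal piecewise-linear curve, while the U and V planes pass through unchanged; the rounding to uint8 is an assumption.

```python
import numpy as np

def postprocess_views(pre_y_planes, u_planes, v_planes, anchor_sets):
    """Apply each camera's optimal mapping curve T*_m to its pre-processed Y plane (step S9)."""
    outputs = []
    for y_pre, u, v, (xs, ys) in zip(pre_y_planes, u_planes, v_planes, anchor_sets):
        y_out = np.interp(y_pre.astype(np.float64), xs, ys)        # piecewise-linear T*_m
        outputs.append((np.rint(np.clip(y_out, 0, 255)).astype(np.uint8), u, v))
    return outputs
```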
s10, restoring the image format; the Y, U, V components of the output circular bird's-eye view image of the P cameras after brightness equalization processing are respectively
Figure BDA00031919767400001412
And if the input all-round looking aerial view is not in the YUV format, performing color space conversion on the output all-round looking aerial view after brightness equalization processing, and converting the output all-round looking aerial view into the original input format.
In summary, because the illumination conditions of the different cameras of a panoramic surround-view system differ, and the Automatic Exposure (AE) and Automatic White Balance (AWB) parameters of each camera also differ, the synthesized surround view shows obvious boundaries between adjacent views, which degrades the visual effect. For this problem of the prior art, this embodiment discloses a brightness equalization method for a panoramic surround-view system that processes only the luminance Y component of the input views: block average down-sampling is performed on the overlapping areas of adjacent camera views, interior points are selected, and the resulting interior-point sample index sets are used to calculate the optimal additive gains; brightness equalization pre-processing is then carried out with the optimal additive gains; finally, the interior-point samples are re-selected after pre-processing, the optimal linear piecewise mapping functions are calculated, and brightness equalization post-processing is carried out. Whereas the prior art applies an additive gain model to all color-component channels with an explicit mathematical solution, the invention processes only the luminance component of the input views, so the amount of computation is 1/3 of that of the prior art and the computational complexity is reduced.
The above embodiments are preferred embodiments of the present invention, but the present invention is not limited to the above embodiments, and any other changes, modifications, substitutions, combinations, and simplifications which do not depart from the spirit and principle of the present invention should be construed as equivalents thereof, and all such changes, modifications, substitutions, combinations, and simplifications are intended to be included in the scope of the present invention.

Claims (4)

1. A brightness equalization method for a panoramic surround view system, said brightness equalization method comprising the steps of:
S1, image format conversion: converting the input surround-view images of the P cameras of the panoramic surround view system into YUV format to obtain, for the input view I_m of the m-th camera, the corresponding Y, U, V components Y_m, U_m, V_m, where m is the camera index, m = 0, 1, ..., P-1;
S2, block average down-sampling: considering only the luminance Y component, the overlapping area of the input view Y_m of camera m with the input view Y_n of the neighboring camera n is recorded as Ω_mn, where n is the next camera index in the clockwise direction, n ≡ (m+1) mod P, the symbol "mod" represents the modulo operation and the symbol "≡" represents congruence under the modulo operation; the overlapping area of the input view Y_n of camera n with the input view Y_m of the neighboring camera m is recorded as Ω_nm; each pair of overlapping-area images Ω_mn and Ω_nm of adjacent cameras is partitioned into sub-blocks of N × N pixels, the average value of each sub-block is taken as the output of the corresponding sub-block, and the down-sampled images Φ_mn and Φ_nm composed of the sub-block averages are obtained;
S3, sample selection: selecting the interior points with small difference and obtaining a sample index set; from the down-sampled images Φ_mn and Φ_nm of each overlapping area of adjacent cameras, the coordinates of the interior points with small difference are selected to form the sample index set S_mn:
S_mn = { [i, j] | (Φ_mn[i, j] - Φ_nm[i, j])² < Th, [i, j] ∈ [1, N_mn] × [1, M_mn] }
wherein i, j represent the abscissa and ordinate of the image respectively, [i, j] represents a coordinate of the image, Φ_mn[i, j] and Φ_nm[i, j] represent the pixel gray values of the down-sampled images Φ_mn and Φ_nm at coordinate [i, j], Th represents the interior-point threshold, and N_mn and M_mn represent the numbers of columns and rows of the down-sampled images Φ_mn and Φ_nm, respectively;
S4, calculating the optimal additive gains g*_0, ..., g*_{P-1} of the P cameras, so that the mean square error of the interior-point pixel gray values of the gain-adjusted down-sampled images Φ_mn and Φ_nm of the adjacent-camera overlapping areas is minimized, i.e. the following formula is satisfied:
{g*_0, ..., g*_{P-1}} = argmin_{g_0, ..., g_{P-1}} Σ_{m=0}^{P-1} Σ_{[i,j]∈S_mn} ( (Φ_mn[i, j] + g_m) - (Φ_nm[i, j] + g_n) )²
wherein g_m is the additive gain of the m-th camera and g*_m represents the optimal additive gain of the m-th camera;
S5, brightness equalization pre-processing by the additive gain model: the brightness equalization pre-processing is performed by adding the optimal additive gain to the Y component of the input view of each camera, with the formula:
Ŷ_m[i, j] = Y_m[i, j] + g*_m
wherein Ŷ_m represents the output view obtained after the Y component of the m-th input view has been pre-processed by the optimal additive gain;
S6, correcting the down-sampled images: the corresponding optimal additive gain is added to each down-sampled image Φ_mn and Φ_nm, with the formulas:
Φ̂_mn[i, j] = Φ_mn[i, j] + g*_m
Φ̂_nm[i, j] = Φ_nm[i, j] + g*_n
wherein Φ̂_mn and Φ̂_nm represent the down-sampled images Φ_mn and Φ_nm corrected by the optimal additive gains;
S7, updating the sample index set: the sample index set Ŝ_mn of the interior points of the corrected down-sampled images Φ̂_mn and Φ̂_nm of the adjacent-camera overlapping areas is obtained again by the method in step S3, satisfying the following formula:
Ŝ_mn = { [i, j] | (Φ̂_mn[i, j] - Φ̂_nm[i, j])² < Th, [i, j] ∈ [1, N_mn] × [1, M_mn] }
wherein Φ̂_mn[i, j] and Φ̂_nm[i, j] represent the pixel gray values of the corrected down-sampled images Φ̂_mn and Φ̂_nm at coordinate [i, j];
S8, calculating the optimal mapping curve functions: the mapping curve function y = T_m(x) of the m-th camera maps an input pixel value x in [0, 255] to an output value y in [0, 255]; the optimal mapping curve functions T*_m() of the P cameras are obtained by optimizing the following formula:
{T*_0, ..., T*_{P-1}} = argmin_{T_0, ..., T_{P-1}} Σ_{m=0}^{P-1} Σ_{[i,j]∈Ŝ_mn} [ (T_m(Φ̂_mn[i, j]) - T_n(Φ̂_nm[i, j]))² + β·( (T_m(Φ̂_mn[i, j]) - Φ̂_mn[i, j])² + (T_n(Φ̂_nm[i, j]) - Φ̂_nm[i, j])² ) ]
wherein T_m() represents the mapping curve function of the m-th camera, T*_m() represents the corresponding optimal mapping curve function, and β is a penalty factor constraining the degree of deviation between the input and output values of the mapping curve function;
S9, brightness equalization post-processing by the optimal mapping curve functions: the views Ŷ_m of the P cameras pre-processed by the optimal additive gains are post-processed through the optimal mapping curve functions to obtain the Y components of the output surround-view bird's-eye view, with the formula:
Y^O_m[i, j] = T*_m( Ŷ_m[i, j] )
finally, the Y, U, V components of the brightness-equalized output surround-view bird's-eye images of the P cameras are obtained as Y^O_m, U_m, V_m, respectively.
2. The brightness equalization method of a panoramic surround view system according to claim 1, wherein after the Y component Y^O_m of the output surround-view bird's-eye view is obtained through the brightness equalization post-processing by the optimal mapping curve function in step S9, the method further comprises the following step:
S10, restoring the image format: if the input surround-view images are not in YUV format, performing color-space conversion on the brightness-equalized output surround-view images to convert them back into the original input format.
3. The brightness equalization method of a panoramic surround view system according to claim 1, wherein the calculation process of the optimal mapping curve function in step S8 is as follows:
S801, the mapping curve function of the m-th camera is expressed by a linear piecewise mapping function defined by a group of d + 1 anchor points, the k-th anchor point having the coordinates (x_{m,k}, y_{m,k}), where k is the anchor index, k = 0, 1, ..., d; the mapping curve function T_m() of the m-th camera can be expressed as:
T_m(x) = y_{m,k} + (y_{m,k+1} - y_{m,k}) · (x - x_{m,k}) / (x_{m,k+1} - x_{m,k}),  for x_{m,k} ≤ x ≤ x_{m,k+1}
where x represents the input pixel value and T_m(x) represents the mapped output pixel value of the mapping curve function;
S802, the 0-th anchor point (x_{m,0}, y_{m,0}) of the optimal linear piecewise mapping function of the m-th camera is fixed, and the d-th anchor point (x_{m,d}, y_{m,d}) is fixed; based on the distribution range of the interior-point pixel gray values of the corrected down-sampled images Φ̂_mn and Φ̂_ml, the minimum and maximum pixel gray values are determined, and the abscissa range [x_{m,min}, x_{m,max}] of the other anchor points is obtained, where x_{m,min} and x_{m,max} are determined by the following formulas:
x_{m,min} = min( min_{[i,j]∈Ŝ_mn} Φ̂_mn[i, j], min_{[i,j]∈Ŝ_lm} Φ̂_ml[i, j] )
x_{m,max} = max( max_{[i,j]∈Ŝ_mn} Φ̂_mn[i, j], max_{[i,j]∈Ŝ_lm} Φ̂_ml[i, j] )
where l is the previous camera index in the clockwise direction, l ≡ (m-1) mod P, Φ̂_mn and Φ̂_ml are the corrected down-sampled images corresponding to the two overlapping areas of the m-th camera in the clockwise direction, Ŝ_mn and Ŝ_lm are respectively the sample index sets of the interior points of Φ̂_mn and Φ̂_ml, max() represents the maximum-value operation, and min() represents the minimum-value operation;
the other anchor points x_{m,k}, k = 1, 2, ..., d-1, are uniformly distributed over the abscissa range [x_{m,min}, x_{m,max}], where x_{m,k} is the abscissa of the k-th anchor point of the m-th camera;
S803, calculating the sample index sets Ŝ_{mn,k} and Ŝ_{ml,k}: the ordinate y_{m,k} of the k-th anchor point of the m-th camera is related only to the interior points of the corrected down-sampled images Φ̂_mn and Φ̂_ml whose pixel gray values lie in the interval [x_{m,k-1}, x_{m,k+1}]; the coordinates of these interior points are selected to form the sample index sets Ŝ_{mn,k} and Ŝ_{ml,k}:
Ŝ_{mn,k} = { [i, j] | x_{m,k-1} ≤ Φ̂_mn[i, j] ≤ x_{m,k+1}, [i, j] ∈ Ŝ_mn }
Ŝ_{ml,k} = { [i, j] | x_{m,k-1} ≤ Φ̂_ml[i, j] ≤ x_{m,k+1}, [i, j] ∈ Ŝ_lm }
wherein Φ̂_mn[i, j] and Φ̂_ml[i, j] represent the pixel gray values of the corrected down-sampled images Φ̂_mn and Φ̂_ml at coordinate [i, j], Ŝ_mn and Ŝ_lm are respectively the sample index sets of the interior points of Φ̂_mn and Φ̂_ml, x_{m,k-1} represents the abscissa of the (k-1)-th anchor point of the m-th camera, and x_{m,k+1} represents the abscissa of the (k+1)-th anchor point;
S804, the d-1 anchor-point ordinates y_{m,1}, ..., y_{m,d-1} of the optimal linear piecewise mapping function of the m-th camera are iteratively solved in sequence, obtaining the optimal linear piecewise mapping function of the m-th camera;
S805, step S804 is repeatedly executed until the absolute value of the change between two successive iterations of every anchor-point ordinate of all the cameras is smaller than a set threshold T_anchor; the iteration is then stopped, and the optimal linear piecewise mapping function determined by the anchor points is output as the optimal mapping curve function.
4. The brightness equalization method of claim 3, wherein in step S804 the d-1 anchor-point ordinates y_{m,1}, ..., y_{m,d-1} of the optimal linear piecewise mapping function of the m-th camera are iteratively solved in sequence to obtain the optimal linear piecewise mapping function of the m-th camera, as follows:
S80401, with the ordinates of the other anchor points fixed, the ordinates of the odd-indexed anchor points y_{m,k}, k = 1, 3, 5, ..., of the optimal linear piecewise mapping function of the m-th camera are updated; the updated ordinate y_{m,k} of the k-th anchor point is given by a closed-form expression in terms of intermediate calculation variables A, B and C, which are sums over the interior-point samples in Ŝ_{mn,k} and Ŝ_{ml,k} involving the corrected down-sampled images Φ̂_mn and Φ̂_nm of the overlapping area of cameras m and n, the corrected down-sampled images Φ̂_lm and Φ̂_ml of the overlapping area of cameras l and m, the mapping curve functions T_n() and T_l() of the cameras n and l, and the piecewise functions F_mk() and Y'_mk() (the closed-form expressions are not reproduced here);
S80402, with the ordinates of the other anchor points fixed, the ordinates of the even-indexed anchor points y_{m,k}, k = 2, 4, 6, ..., of the optimal linear piecewise mapping function of the m-th camera are updated in the same way.

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110880913.7A CN113781317B (en) 2021-08-02 2021-08-02 Luminance balancing method for panoramic all-around system


Publications (2)

Publication Number Publication Date
CN113781317A true CN113781317A (en) 2021-12-10
CN113781317B CN113781317B (en) 2023-08-18

Family

ID=78836472

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110880913.7A Active CN113781317B (en) 2021-08-02 2021-08-02 Luminance balancing method for panoramic all-around system

Country Status (1)

Country Link
CN (1) CN113781317B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117237237A (en) * 2023-11-13 2023-12-15 深圳元戎启行科技有限公司 Luminosity balancing method and device for vehicle-mounted 360-degree panoramic image


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060177150A1 (en) * 2005-02-01 2006-08-10 Microsoft Corporation Method and system for combining multiple exposure images having scene and camera motion
US20090231447A1 (en) * 2008-03-12 2009-09-17 Chung-Ang University Industry-Academic Cooperation Foundation Apparatus and method for generating panorama images and apparatus and method for object-tracking using the same
CN109166076A (en) * 2018-08-10 2019-01-08 深圳岚锋创视网络科技有限公司 Luminance regulating method, device and the portable terminal of polyphaser splicing

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
范翔 (FAN Xiang); 夏顺仁 (XIA Shunren): "Fully automatic feature-based stitching of microscopic images" (基于特征的显微图像全自动拼接), 浙江大学学报(工学版) (Journal of Zhejiang University, Engineering Science) *


Also Published As

Publication number Publication date
CN113781317B (en) 2023-08-18

Similar Documents

Publication Publication Date Title
CN110148095A (en) A kind of underwater picture Enhancement Method and enhancement device
US9135688B2 (en) Method for brightness equalization of various images
JP7268001B2 (en) Arithmetic processing unit, object identification system, learning method, automobile, vehicle lamp
US20150138312A1 (en) Method and apparatus for a surround view camera system photometric alignment
TWI599989B (en) Image processing method and image system for transportation
CN104794705B (en) Image defogging method and device based on image local content characteristic
US11082631B2 (en) Image processing device
CN113344820B (en) Image processing method and device, computer readable medium and electronic equipment
CN113077505A (en) Optimization method of monocular depth estimation network based on contrast learning
CN112731436A (en) Multi-mode data fusion travelable area detection method based on point cloud up-sampling
CN112529813B (en) Image defogging processing method and device and computer storage medium
CN113781317A (en) Brightness equalization method of panoramic all-around system
US7995107B2 (en) Enhancement of images
CN115965531A (en) Model training method, image generation method, device, equipment and storage medium
CN113658058A (en) Brightness balancing method and system in vehicle-mounted all-round system
CN113256516A (en) Image enhancement method
US11523053B2 (en) Image processing apparatus
CN113840123B (en) Image processing device of vehicle-mounted image and automobile
CN111800586B (en) Virtual exposure processing method for vehicle-mounted image, vehicle-mounted image splicing processing method and image processing device
KR101230909B1 (en) Apparatus and method for processing wide angle image
CN114926331A (en) Panoramic image splicing method applied to vehicle
CN115443651A (en) Determining a current focal region of a camera image based on a position of a vehicle camera on a vehicle and based on current motion parameters
CN115100083B (en) Image brightness self-adaptive adjusting method for vehicle-mounted image
CN111915520B (en) Method, device and computer equipment for improving brightness of spliced image
CN113506218B (en) 360-degree video splicing method for multi-compartment ultra-long vehicle type

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant