CN114257741B - Vehicle-mounted HDR method with rapid response - Google Patents

Vehicle-mounted HDR method with rapid response

Info

Publication number
CN114257741B
CN114257741B (application CN202111534053.8A)
Authority
CN
China
Prior art keywords
pixel
value
frame
target
values
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111534053.8A
Other languages
Chinese (zh)
Other versions
CN114257741A (en)
Inventor
杜乐谦
叶志
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN202111534053.8A priority Critical patent/CN114257741B/en
Publication of CN114257741A publication Critical patent/CN114257741A/en
Application granted granted Critical
Publication of CN114257741B publication Critical patent/CN114257741B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00 Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/40 Extracting pixel data from image sensors by controlling scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a fast-response vehicle-mounted HDR method comprising the following steps: select the first frame of the image sequence to be processed as the sampling frame, select a region of interest on it, and calculate a target value Target from the pixel values of specific pixel rows in that region; compare the pixel values of all pixels in the sampling frame with Target to obtain a weight matrix whose elements represent the weight values of the corresponding pixels; denote the frame after the sampling frame the operation frame, and adjust the pixel value of each pixel in the operation frame according to the weight matrix to obtain the adjusted operation frame image; then take the unadjusted operation frame as the new sampling frame and its successor as the new operation frame, repeating these steps until the whole sequence is processed. The method reduces chip area and power consumption while enhancing the image-processing effect, and the generated HDR images render detail better.

Description

Vehicle-mounted HDR method with rapid response
Technical Field
The invention belongs to the field of vehicle-mounted high-dynamic-range (HDR) algorithms and automatic-exposure (AE) modes, and particularly relates to a fast-response vehicle-mounted HDR method.
Background
The determination of the exposure time and the gain factor is crucial for the generation of an HDR image.
The traditional AE algorithm exposes the whole image line by line, sums the values of all pixel points, and averages them to obtain a mean luminance, from which the exposure time T and the gain factor Ratio are calculated. Other AE algorithms use partitioned exposure based on weighted gray-entropy differences, but their calculation amount is large, causing huge area and power-consumption waste, so they are unsuitable for implementation on chip hardware. To estimate the brightness of the whole image quickly, the AE algorithm must be adjusted to reduce calculation time while preserving image quality as far as possible. A vehicle-mounted sensor differs from a mobile-phone or camera sensor: the latter can obtain a region of interest by shifting the focus position and run the AE calculation inside it, whereas a vehicle-mounted sensor usually has a fixed focus concentrated on a small region in the middle of the image, because most of the visual information is concentrated there. The core of a fast AE algorithm is therefore to place the focus in the central region of the image and, when the central region does not meet the requirement, to add the pixel information of the regions above and below it as a supplementary calculation, making the AE result more accurate. Existing image-processing chips meter light through an AE module to evaluate the brightness of the current environment, using either frame averaging or global partitioned exposure, and determine the exposure time and gain coefficient of the current frame from the overall brightness value of the previous frame.
The conventional flow first exposes the image, then samples and quantizes it, and then sums and averages the pixel values of the whole image to obtain the average pixel value AP (the original formula survives only as an image; for a W × H image it is the global mean):

AP = ( Σ_{i,j} P(i,j) ) / (W × H)

The exposure time and gain factor are then calculated from AP. This method has a large calculation amount and a long overall response time, and in some scenes it cannot supply the exposure time and gain coefficient that particular areas require, so it is not conducive to fast and accurate response of the vehicle-mounted image.
Meanwhile, a CMOS image sensor applies the same exposure time and gain to an entire image, and current on-chip HDR implementations basically fuse multiple exposures, which brings large chip-area and power-consumption overhead. Because the exposure and gain are uniform across the image, they cannot be refined per pixel. For example, in a tunnel scene, the image is very complex just before entering and when exiting the tunnel: pixels in the same row differ greatly between the inside and the outside of the tunnel, and even if each row were given its own exposure time and gain coefficient, a conventional HDR algorithm could not correct the pixel-value differences caused by such a scene, so the generated HDR image is poor in some details. If such images are used for assisted-driving or autonomous-driving decisions, misjudgments can easily cause safety problems.
Disclosure of Invention
The invention aims to solve the above problems by providing a fast-response vehicle-mounted HDR method that generates HDR images of high quality in the region of interest. It comprises a fast AE algorithm, a weight matrix generated from the AE result, and a new HDR algorithm.
The technical scheme of the invention is as follows:
a fast-response vehicle-mounted HDR method is characterized by comprising the following steps:
(1) Fast AE algorithm: selecting a first frame of an image to be processed as a sampling frame, selecting an interested area on the sampling frame, and calculating a Target value Target according to pixel values of specific pixel rows in the interested area;
(2) Obtaining a weight matrix: comparing the pixel values of all pixels in the sampling frame with Target to obtain a weight matrix, wherein elements in the weight matrix represent weight values of corresponding pixels;
(3) Pixel level HDR algorithm: recording the next frame of the sampling frame as an operation frame, and adjusting the pixel value of each pixel in the operation frame according to the weight matrix to obtain an adjusted operation frame image;
(4) And (4) taking the operation frame before adjustment as a new sampling frame and taking the next frame of the operation frame as a new operation frame, and repeating the steps (1) - (3) until the image to be processed is processed.
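The four steps above can be sketched as a frame loop. The following is an illustrative Python sketch (the function and parameter names are assumptions, not from the patent), showing how each frame is adjusted with weights derived from its unadjusted predecessor:

```python
import numpy as np

def hdr_pipeline(frames, compute_target, build_weights, adjust):
    """Illustrative frame loop for the patent's steps (1)-(4).

    Each frame is adjusted (step 3) using the weight matrix built from
    its *unadjusted* predecessor (steps 1-2), and then itself becomes
    the next sampling frame (step 4)."""
    out = []
    sample = frames[0]                           # step 1: first frame is the sampling frame
    for op in frames[1:]:                        # op = operation frame
        target = compute_target(sample)          # fast AE on the sampling frame
        weights = build_weights(sample, target)  # step 2: weight matrix
        out.append(adjust(op, weights))          # step 3: pixel-level HDR
        sample = op                              # step 4: unadjusted frame becomes new sampling frame
    return out
```

The one-frame lag between sampling and adjustment matches the patent's later observation that, at 60 FPS or more, the brightness change between adjacent frames is almost negligible.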
Further, in the fast AE algorithm of step (1), the region of interest is selected as follows:
Let the total number of pixel rows of the sampling frame be N, the rows being numbered 0, 1, 2, …, N−1.
Rows whose numbers fall in a first range form region 1; rows in a second, vertically centered range form region 2; and rows in a third range form region 3 (the three range formulas appear only as images in the original; in Embodiment 1, with N = 960, the ranges are rows 192-288, 336-624 and 672-768 respectively).
Regions 1, 2 and 3 together constitute the region of interest. Here n1, n2 and n3 denote the numbers of rows in regions 1, 2 and 3 respectively; n1 = n3, each accounting for 8%-12% of the total row count N, and n2 accounts for 25%-35% of N.
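Since the original range formulas survive only as images, the layout can be reconstructed, under assumptions, from Embodiment 1 (a 960-row frame using rows 192-288, 336-624 and 672-768). The hypothetical Python below reproduces those numbers with region 2 centered and a 5%-of-N gap between regions, an inferred arrangement that is not stated in the text:

```python
def region_rows(N, f1=0.10, f2=0.30, gap=0.05):
    """Hypothetical reconstruction of the three row ranges.

    f1 = n1/N = n3/N (8-12% per the patent), f2 = n2/N (25-35%);
    region 2 is centred vertically and `gap` (5% of N) separates the
    regions -- a layout inferred from Embodiment 1, not stated."""
    n1, n2, g = round(N * f1), round(N * f2), round(N * gap)
    r2 = (N - n2) // 2              # region 2 starts here (centred)
    r1 = r2 - g - n1                # region 1 sits above region 2
    r3 = r2 + n2 + g                # region 3 sits below region 2
    return range(r1, r1 + n1), range(r2, r2 + n2), range(r3, r3 + n1)
```

With N = 960 this yields rows 192-288, 336-624 and 672-768, matching Embodiment 1.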
Further, in the fast AE algorithm of step (1), the specific pixel rows are selected as follows: select all rows in the region of interest, or only the odd-numbered rows, or only the even-numbered rows.
Further, in the fast AE algorithm of step (1), the target value Target is calculated as follows:
first calculate the averages of all pixel values in the specific pixel rows of region 1 and region 2, denoted AP1 and AP2 respectively;
if Threshold1 ≤ AP2 ≤ Threshold2, then
Target = AP2;
if AP2 > Threshold2 or AP2 < Threshold1, calculate the average of all pixel values in the specific pixel rows of region 3, denoted AP3, and
Target = (AP1 + AP2 + AP3) / 3;
where Threshold1 is the lower threshold, Threshold1 = 128 − t, and Threshold2 is the upper threshold, Threshold2 = 128 + t, with 4 ≤ t ≤ 32.
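A minimal Python sketch of this Target selection (function names are illustrative; AP3 is passed as a callable so that region 3 is only averaged when AP2 falls outside the window, matching the power-saving behavior the patent claims):

```python
def compute_target(ap1, ap2, ap3_fn, t=16):
    """Target selection per the patent (4 <= t <= 32; thresholds 128 +/- t).

    ap3_fn is a callable computing AP3 lazily, so region 3 is only
    averaged when AP2 falls outside the threshold window."""
    threshold1, threshold2 = 128 - t, 128 + t
    if threshold1 <= ap2 <= threshold2:
        return ap2
    return (ap1 + ap2 + ap3_fn()) / 3
```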
Further, in step (2), the weight matrix can be obtained in several ways, as follows:
The first method represents each pixel's weight value with a 1-bit number:
if P(i,j) ≥ Target, then A(i,j) = 1; if P(i,j) < Target, then A(i,j) = 0;
where P(i,j) is the pixel value of the pixel in row i, column j, and A(i,j) is its weight value, i.e. the following table:

bit width = 1 bit | P(i,j) ≥ Target | P(i,j) < Target
A(i,j)            | 1               | 0

The weight values of all pixels in the sampling frame form the weight matrix.
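Assuming the frames are held as NumPy arrays of grayscale values (an implementation choice, not stated in the patent), the 1-bit weight matrix is a single vectorized comparison:

```python
import numpy as np

def weight_matrix_1bit(frame, target):
    """A[i,j] = 1 where P[i,j] >= Target, else 0 (one bit per pixel)."""
    return (frame >= target).astype(np.uint8)
```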
The second method represents each pixel's weight value with a 2-bit number:
if P(i,j) ≥ Target + V, then A(i,j) = 3;
if Target ≤ P(i,j) < Target + V, then A(i,j) = 2;
if Target − V ≤ P(i,j) < Target, then A(i,j) = 1;
if P(i,j) < Target − V, then A(i,j) = 0;
where P(i,j) is the pixel value of the pixel in row i, column j, A(i,j) is its weight value, and V is a refinement coefficient, usually no greater than 32, i.e. the following table (reconstructed from the conditions above; the original appears as an image):

bit width = 2 bit | P ≥ Target+V | Target ≤ P < Target+V | Target−V ≤ P < Target | P < Target−V
A(i,j)            | 3            | 2                     | 1                     | 0

The weight values of all pixels in the sampling frame form the weight matrix.
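The same comparison extends to the 2-bit case. This hedged sketch applies the four thresholds above with a default refinement coefficient V = 16, an illustrative choice (any V ≤ 32 is allowed by the text):

```python
import numpy as np

def weight_matrix_2bit(frame, target, v=16):
    """Four brightness levels per pixel; V is the refinement coefficient.

    0: P < Target-V, 1: Target-V <= P < Target,
    2: Target <= P < Target+V, 3: P >= Target+V."""
    w = np.zeros(frame.shape, dtype=np.uint8)
    w[frame >= target - v] = 1      # progressively overwrite brighter levels
    w[frame >= target] = 2
    w[frame >= target + v] = 3
    return w
```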
Where the chip area permits, weight values of higher bit width (3 bit, 4 bit, and so on) can be adopted; the larger the bit width, the finer the division of the image's pixel values and the better the effect. As the bit width of the weight matrix grows, more detail can be described, and the refinement coefficient V can be increased accordingly.
The two methods above require comparatively large chip resources; considering the actual chip-area overhead, the amount the weight matrix must store often has to be reduced, so a third method is provided here.
The third method shares one weight among several pixel points:
Since pixel values inside an image mostly vary without abrupt jumps, the luminance of neighboring points is usually close. Each n × n block of adjacent pixels is therefore treated as a whole, and the average pixel value of all pixels in the block stands in for the pixel value of every pixel in it, where n can be any common divisor of the numbers of rows and columns of the sampling frame and is chosen according to the specific requirements in practice. The smaller n is, the more accurate the stored information, but the required storage area grows severalfold; conversely, larger n stores less information but shrinks the chip area accordingly. All pixels in the sampling frame are partitioned in this way;
if P(i,j) ≥ Target, then A(i,j) = 1; if P(i,j) < Target, then A(i,j) = 0;
where P(i,j) is the pixel value of the pixel in row i, column j, and A(i,j) is its weight value; the weight values of all pixels in the sampling frame form the weight matrix. Sharing one weight among several pixels degrades the picture slightly but saves a large amount of storage area.
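A possible NumPy realization of the block-shared scheme, assuming n divides both image dimensions as the text requires; the block means stand in for individual pixel values before the 1-bit comparison:

```python
import numpy as np

def weight_matrix_blocked(frame, target, n):
    """n x n blocks share one weight; the block mean stands in for each
    pixel's value before the 1-bit comparison with Target."""
    h, w = frame.shape
    assert h % n == 0 and w % n == 0, "n must divide both dimensions"
    block_means = frame.reshape(h // n, n, w // n, n).mean(axis=(1, 3))
    bits = (block_means >= target).astype(np.uint8)
    return np.kron(bits, np.ones((n, n), dtype=np.uint8))  # expand back to full resolution
```

In hardware only the (h/n) × (w/n) bit array would be stored; the expansion here is just for comparison with the per-pixel variants.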
Further, in step (3), the pixel values in the operation frame are adjusted as follows:
if a pixel's weight value in the weight matrix is 1, subtract the compensation value VP1 from its pixel value; if its weight value is 0, add VP1 to its pixel value; VP1 is chosen so that all adjusted pixel values lie between 8 and 250. Applying this operation to every pixel in the operation frame yields the adjusted operation frame image.
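A sketch of this 1-bit adjustment; clipping to [8, 250] is used here as a stand-in for the patent's requirement that VP1 be chosen so adjusted values stay in that range (the default VP1 = 16 follows Embodiment 2):

```python
import numpy as np

def adjust_1bit(op_frame, weights, vp1=16):
    """Darken pixels flagged bright (weight 1) by VP1 and brighten those
    flagged dark (weight 0) by VP1; np.clip stands in for choosing VP1
    so that results stay within the patent's 8..250 range."""
    adjusted = np.where(weights == 1, op_frame - vp1, op_frame + vp1)
    return np.clip(adjusted, 8, 250)
```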
Further, in step (3), the pixel values in the operation frame may instead be adjusted as follows:
if a pixel's weight value in the weight matrix is 3, subtract the compensation value VP2 from its pixel value; if its weight value is 2, subtract the compensation value VP3; if its weight value is 1, add VP3; if its weight value is 0, add VP2; here VP2 > VP3, and VP2 and VP3 are chosen so that all adjusted pixel values lie between 8 and 250. Applying this operation to every pixel in the operation frame yields the adjusted operation frame image.
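The 2-bit variant, sketched the same way; VP2 = 24 and VP3 = 8 are illustrative values satisfying VP2 > VP3, and clipping again stands in for the in-range requirement:

```python
import numpy as np

def adjust_2bit(op_frame, weights, vp2=24, vp3=8):
    """Weight 3 -> -VP2, 2 -> -VP3, 1 -> +VP3, 0 -> +VP2 (VP2 > VP3);
    np.clip stands in for choosing VP2/VP3 to keep results in 8..250."""
    assert vp2 > vp3
    delta = np.select([weights == 3, weights == 2, weights == 1],
                      [-vp2, -vp3, vp3], default=vp2)
    return np.clip(op_frame + delta, 8, 250)
```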
Compared with the prior art, the invention has the following beneficial effects:
1. Regarding the power consumed per frame, once AP2 of region 2 satisfies the threshold condition, AP3 of region 3 need not be calculated at all. Likewise, when calculating the target, either all rows in the region of interest or only half of them (the odd or the even rows) may be used, which has little effect on the result in most image scenes. Both measures greatly reduce chip power consumption and area and shorten the calculation time.
2. When obtaining the weight matrix, precision can be raised by widening the weight value's bit width, enhancing the image-processing effect, while sharing one weight among several pixels shrinks the chip area and speeds up calculation, making it easy to adapt the method to the actual chip and application scenario.
3. The HDR algorithm of the invention operates on individual pixels, so it can distinguish and handle pixel-value differences within the same row well, and the generated HDR images render detail better.
Drawings
Fig. 1 is a schematic view of the region of interest selected in Embodiment 1.
Fig. 2 is the pixel-value matrix of the sampling frame in Embodiment 2.
Fig. 3 is the weight matrix of the sampling frame in Embodiment 2.
Fig. 4 is the pixel-value matrix of the operation frame before adjustment in Embodiment 2.
Fig. 5 is the pixel-value matrix of the operation frame after adjustment in Embodiment 2.
Fig. 6 is an example in which several adjacent points take the same weight.
Detailed Description
The invention is further described below with reference to the accompanying drawings:
Embodiment 1
This embodiment illustrates the operation of the fast AE algorithm on a 1280 × 960 image, as shown in Fig. 1.
For the 1280 × 960 image, first calculate the average pixel value AP1 of the odd-numbered rows between rows 192 and 288 (region 1), then the average pixel value AP2 of the odd-numbered rows between rows 336 and 624 (region 2). Then check whether AP2 satisfies 120 ≤ AP2 ≤ 136 (i.e. t = 8): if so, Target = AP2; if not, calculate the average pixel value AP3 of the odd-numbered rows between rows 672 and 768 (region 3) and set Target = (AP1 + AP2 + AP3) / 3. This greatly reduces the calculation amount of the AE algorithm, speeds up AE evaluation, and lowers the chip-area overhead.
Embodiment 2
This embodiment takes a 7 × 7 image as an example to show the generation of the weight matrix and the operation of the pixel-level HDR algorithm.
Generation of the weight matrix:
the weight value is a level evaluation of the pixel value of the whole image, which is generated by the previous frame image (sampling frame) and is acted on the next frame image (operation frame). The reason for this is that the refresh rate of the video image is usually not lower than 60FPS, and then more than 60 times per second, and the brightness variation of the image between two adjacent frames is almost negligible.
In the 7 × 7 image, each pixel is given a weight value as a level evaluation of its pixel value. With 1-bit weights, the fast AE module supplies the Target value evaluating the current brightness; each pixel value of the sampling frame is compared with Target, and the pixel's weight is set to 1 if its value is at least Target, otherwise to 0. This yields a 7 × 7 weight matrix in which 1 marks the brighter pixels of the image and 0 the slightly darker ones.
As shown in Fig. 2, the pixel values of the 7 × 7 sampling frame are generally low in the lower part and high in the upper part, which can be regarded as simulating a tunnel scene. The middle 3 rows are chosen as the region of interest; their average pixel value is 92, which is taken as the Target value. Pixels with values below 92 are assigned weight 0, while pixels with values of 92 or above are assigned weight 1. The resulting weight matrix is shown in Fig. 3; it holds only 1 bit per pixel, reflecting whether the pixel at each position is darker or brighter relative to the whole image.
Pixel level HDR algorithm:
With the weight matrix obtained above, the operation frame (Fig. 4) is adjusted to obtain a better image. The following operation is applied: pixels whose weight in the matrix is 1 have VP = 16 subtracted from their value to lower their brightness; pixels whose weight is 0 have VP = 16 added to raise it.
As shown in Fig. 5, in the adjusted pixel-value matrix the dark in-tunnel pixel values are raised, and the adjustment is refined down to each pixel of each row. This reduces the probability of misjudgment in recognition algorithms run subsequently.
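Since Figs. 2-5 survive only as images, the following toy frame reproduces the logic of Embodiment 2 with illustrative values (bright upper rows, dark lower rows, the middle 3 rows as the region of interest, VP = 16); the bright/dark gap shrinks by 2·VP after adjustment:

```python
import numpy as np

# Toy stand-in for Embodiment 2 (the figures are images, so these pixel
# values are illustrative): bright upper rows, dark lower rows.
frame = np.vstack([np.full((4, 7), 130.0), np.full((3, 7), 40.0)])
target = frame[2:5].mean()          # middle 3 rows as the region of interest
weights = (frame >= target).astype(np.uint8)
adjusted = np.clip(np.where(weights == 1, frame - 16, frame + 16), 8, 250)
bright_dark_gap_before = frame[0, 0] - frame[6, 0]
bright_dark_gap_after = adjusted[0, 0] - adjusted[6, 0]   # shrinks by 2 * 16
```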
As shown in Fig. 6, an example is given in which 16 neighboring points take the same weight.
The above detailed description is intended to illustrate, not to limit, the present invention; any modifications and changes made within the spirit of the invention and the scope of the claims fall within the protection scope of the present invention.

Claims (6)

1. A fast-response vehicle-mounted HDR method, characterized by comprising the following steps:
(1) Fast AE algorithm: selecting the first frame of the image sequence to be processed as the sampling frame, selecting a region of interest on the sampling frame, and calculating a target value Target from the pixel values of specific pixel rows in the region of interest;
(2) Obtaining a weight matrix: comparing the pixel values of all pixels in the sampling frame with Target to obtain a weight matrix whose elements represent the weight values of the corresponding pixels;
(3) Pixel-level HDR algorithm: denoting the frame after the sampling frame the operation frame, and adjusting the pixel value of each pixel in the operation frame according to the weight matrix to obtain the adjusted operation frame image;
(4) Taking the operation frame before adjustment as the new sampling frame and the frame after the operation frame as the new operation frame, and repeating steps (1)-(3) until the image sequence is processed;
in step (1), the region of interest is selected as follows:
let the total number of pixel rows of the sampling frame be N, the rows being numbered 0, 1, 2, …, N−1;
rows whose numbers fall in a first range form region 1, rows in a second range form region 2, and rows in a third range form region 3 (the three range formulas appear only as images in the original);
regions 1, 2 and 3 together constitute the region of interest; n1, n2 and n3 denote the numbers of rows in regions 1, 2 and 3 respectively, with n1 = n3, each accounting for 8%-12% of the total row count N, and n2 accounting for 25%-35% of N;
the specific pixel rows are selected as follows: select all rows in the region of interest, or only the odd-numbered rows, or only the even-numbered rows;
the target value Target is calculated as follows:
first calculate the averages of all pixel values in the specific pixel rows of region 1 and region 2, denoted AP1 and AP2 respectively;
if Threshold1 ≤ AP2 ≤ Threshold2, then
Target = AP2;
if AP2 > Threshold2 or AP2 < Threshold1, calculate the average of all pixel values in the specific pixel rows of region 3, denoted AP3, and
Target = (AP1 + AP2 + AP3) / 3;
where Threshold1 is the lower threshold, Threshold1 = 128 − t, and Threshold2 is the upper threshold, Threshold2 = 128 + t, with 4 ≤ t ≤ 32.
2. The fast-response vehicle-mounted HDR method as claimed in claim 1, wherein in step (2) the weight matrix is obtained by:
if P(i,j) ≥ Target, then A(i,j) = 1; if P(i,j) < Target, then A(i,j) = 0;
where P(i,j) is the pixel value of the pixel in row i, column j, and A(i,j) is its weight value; the weight values of all pixels in the sampling frame form the weight matrix.
3. The fast-response vehicle-mounted HDR method as claimed in claim 1, wherein in step (2) the weight matrix is obtained by:
if P(i,j) ≥ Target + V, then A(i,j) = 3;
if Target ≤ P(i,j) < Target + V, then A(i,j) = 2;
if Target − V ≤ P(i,j) < Target, then A(i,j) = 1;
if P(i,j) < Target − V, then A(i,j) = 0;
where P(i,j) is the pixel value of the pixel in row i, column j, A(i,j) is its weight value, and V is a refinement coefficient no greater than 32; the weight values of all pixels in the sampling frame form the weight matrix.
4. The fast-response vehicle-mounted HDR method as claimed in claim 1, wherein in step (2) the weight matrix is obtained by:
dividing each n × n block of adjacent pixels into a whole and taking the average pixel value of all pixels in the block as the pixel value of every pixel in it, where n is a common divisor of the numbers of rows and columns of the sampling frame; all pixels in the sampling frame are partitioned accordingly;
if P(i,j) ≥ Target, then A(i,j) = 1; if P(i,j) < Target, then A(i,j) = 0;
where P(i,j) is the pixel value of the pixel in row i, column j, and A(i,j) is its weight value; the weight values of all pixels in the sampling frame form the weight matrix.
5. The fast-response vehicle-mounted HDR method as claimed in claim 2 or 4, wherein in step (3) the pixel values in the operation frame are adjusted as follows:
if a pixel's weight value in the weight matrix is 1, subtract the compensation value VP1 from its pixel value; if its weight value is 0, add VP1 to its pixel value; VP1 is chosen so that all adjusted pixel values lie between 8 and 250; applying this operation to every pixel in the operation frame yields the adjusted operation frame image.
6. The fast-response vehicle-mounted HDR method as claimed in claim 3, wherein in step (3) the pixel values in the operation frame are adjusted as follows:
if a pixel's weight value in the weight matrix is 3, subtract the compensation value VP2 from its pixel value; if its weight value is 2, subtract the compensation value VP3; if its weight value is 1, add VP3; if its weight value is 0, add VP2; here VP2 > VP3, and VP2 and VP3 are chosen so that all adjusted pixel values lie between 8 and 250; applying this operation to every pixel in the operation frame yields the adjusted operation frame image.
CN202111534053.8A 2021-12-15 2021-12-15 Vehicle-mounted HDR method with rapid response Active CN114257741B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111534053.8A CN114257741B (en) 2021-12-15 2021-12-15 Vehicle-mounted HDR method with rapid response


Publications (2)

Publication Number  Publication Date
CN114257741A (en)   2022-03-29
CN114257741B (en)   2022-12-06

Family

ID=80792393

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111534053.8A Active CN114257741B (en) 2021-12-15 2021-12-15 Vehicle-mounted HDR method with rapid response

Country Status (1)

Country Link
CN (1) CN114257741B (en)

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI601122B (en) * 2016-11-15 2017-10-01 晨星半導體股份有限公司 Image compensation method applied to display and associated control circuit
CN108898566B (en) * 2018-07-20 2022-05-17 南京邮电大学 Low-illumination color video enhancement method using space-time illumination map
CN109345485B (en) * 2018-10-22 2021-04-16 北京达佳互联信息技术有限公司 Image enhancement method and device, electronic equipment and storage medium
CN112419181B (en) * 2020-11-19 2023-12-08 中国科学院西安光学精密机械研究所 Method for enhancing detail of wide dynamic infrared image

Also Published As

Publication number Publication date
CN114257741A (en) 2022-03-29

Similar Documents

Publication Publication Date Title
US11922639B2 (en) HDR image generation from single-shot HDR color image sensors
EP2987135B1 (en) Reference image selection for motion ghost filtering
US9113114B2 (en) Apparatus and method for automatically controlling image brightness in image photographing device
US9305372B2 (en) Method and device for image processing
US9288399B2 (en) Image processing apparatus, image processing method, and program
CN105407296B (en) Real-time video enhancement method and device
CN109686342B (en) Image processing method and device
US8982251B2 (en) Image processing apparatus, image processing method, photographic imaging apparatus, and recording device recording image processing program
US20090231446A1 (en) Method and apparatus for reducing motion blur in digital images
US11671715B2 (en) High dynamic range technique selection for image processing
CN110602467A (en) Image noise reduction method and device, storage medium and electronic equipment
US20230069014A1 (en) Method and apparatus for generating low bit width hdr image, storage medium, and terminal
CN101917551A (en) High-dynamic-range image acquisition method of single exposure
CN101409850A (en) Method and apparatus for processing color components of pixel
CN107659777B (en) Automatic exposure method and device
CN114257741B (en) Vehicle-mounted HDR method with rapid response
JP7362441B2 (en) Imaging device and method of controlling the imaging device
CN113240590B (en) Image processing method and device
US8300970B2 (en) Method for video enhancement and computer device using the method
CN112991240B (en) Image self-adaptive enhancement algorithm for real-time image enhancement
KR100480560B1 (en) Backlight correction method and device for image
CN113891081A (en) Video processing method, device and equipment
CN111064897B (en) Exposure evaluation value statistical method and imaging equipment
US9013626B2 (en) Signal processing circuit of solid-state imaging element, signal processing method of solid-state imaging element, and electronic apparatus
CN103137098A (en) Tone corresponding method and imaging processing device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant