CN107545556B - Signal lamp image processing method and system - Google Patents


Info

Publication number: CN107545556B
Application number: CN201610518275.3A
Authority: CN (China)
Prior art keywords: image, red, exposure image, lowexp, pixel
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Other versions: CN107545556A
Inventor: 黄中宁
Current and original assignee: Hangzhou Hikvision Digital Technology Co Ltd

Events: application filed by Hangzhou Hikvision Digital Technology Co Ltd · priority to CN201610518275.3A · publication of CN107545556A · application granted · publication of CN107545556B
Abstract

The invention discloses a signal lamp image processing method and system. The method acquires at least one frame of low-exposure image and at least one frame of normal-exposure image; performs color correction on the reddish pixel regions of the at least one frame of low-exposure image to obtain a corrected low-exposure image; and fuses the corrected low-exposure image with the normal-exposure image to generate a fused image. Because the fused image is generated by fusion with the normal-exposure image, its overall brightness is unchanged. During color correction, only the region of interest is processed, which greatly reduces the amount of computation, and within that region only the reddish pixels are corrected, so the image can be processed accurately down to the pixel level.

Description

Signal lamp image processing method and system
Technical Field
The invention belongs to the field of computer image processing and video monitoring, and particularly relates to a signal lamp image processing method and system.
Background
With the growing number of urban motor vehicles, traffic management departments install traffic signal systems at urban intersections to direct motor vehicles, non-motor vehicles and pedestrians and keep traffic flowing. To ensure that vehicles obey the signal indications and to monitor violations at intersections, traffic management departments also install video surveillance systems at intersections to record violations by vehicles and pedestrians and to provide evidence for law-enforcement agencies.
In prior-art video surveillance systems, the red traffic light in a video image is often distorted. For example, at night when ambient brightness is low, the camera's exposure time is long and an overexposed red light appears white. A traffic light (red light) is usually an LED, and when the dominant wavelength of its emission spectrum falls where the camera sensor's red and green sensitivity curves are both high, the red light appears yellow. These phenomena can make the images captured by the surveillance system unusable as valid evidence of a violation.
Existing solutions fall into two categories. One adds an optical filter in front of the camera lens to reduce the brightness of the traffic-light region or the transmittance of specific wavelengths, thereby restoring the light's color; this approach increases equipment complexity and cost, and reduces scene brightness and imaging quality. The other works at the end of the camera's image post-processing pipeline and applies special processing to the colors of the traffic-light region.
The patent document "A method for correcting the red-light discoloration and enlargement problem in pictures of vehicles running a red light at night" (publication number CN102663892A) can only handle red lights rendered abnormal by overexposure at night; it cannot handle red lights that appear yellow.
The patent document "Signal lamp image processing method and device" (publication number CN103679733A) describes a method with a large computational load, low scene adaptability and a high error rate; it is prone to localization errors that cause the correction to fail, indirectly hindering effective and rapid law enforcement.
Disclosure of Invention
The invention aims to provide a signal lamp image processing method and system that do not change the overall brightness of the image and do not degrade the quality of the generated fused image. Only the region of interest is corrected, which greatly reduces the amount of computation, and within it only the reddish pixels are corrected, so the image processing of the invention can be accurate to the pixel level.
To achieve the above object, an aspect of the present invention provides a method for processing a signal lamp image, including: acquiring at least one frame of low-exposure image and at least one frame of normal-exposure image; performing color correction on the reddish pixel area of the at least one frame of low-exposure image to obtain a corrected low-exposure image; and fusing the corrected low-exposure image and the normal-exposure image to generate a fused image.
Wherein the method comprises the following steps: acquiring a frame of normal-exposure image NorExp, a previous frame of low-exposure image LowExp1 and a next frame of low-exposure image LowExp2; performing color correction on the union of the reddish pixel regions of the two low-exposure images LowExp1 and LowExp2 to obtain a corrected low-exposure image LowExp; and fusing the corrected low-exposure image LowExp with the normal-exposure image NorExp to generate a fused image DstImg.
Wherein the step of fusing the corrected low-exposure image with the normal-exposure image to generate a fused image comprises: using the gray-level map of the intersection of the reddish pixel region of the at least one frame of low-exposure image and the highlight region of the normal-exposure image as the weight, and performing weighted assignment on the red, green and blue channel values of the corrected low-exposure image and the normal-exposure image to obtain a fused image.
Wherein the step of acquiring an image comprises: selecting an interested region OriRect; carrying out image coordinate alignment on the region of interest OriRect to obtain an aligned region of interest DstRect; and acquiring at least one frame of low-exposure image and at least one frame of normal-exposure image for the aligned region of interest DstRect.
Wherein the region of interest OriRect is image-coordinate aligned according to the following formula:
OriRect=[xmin0,xmax0,ymin0,ymax0];
DstRect=[xmin1,xmax1,ymin1,ymax1];
xmin1=xmin0/2*2;xmax1=(xmax0+1)/2*2;
ymin1=ymin0/2*2;ymax1=(ymax0+1)/2*2;
wherein OriRect represents the region of interest, DstRect represents the aligned region of interest, and the divisions are integer divisions (so the aligned coordinates are snapped to even values).
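The alignment formula above can be sketched as a small helper. This is a minimal illustration, assuming the patent's `/` is integer (floor) division; the even snapping matches the 2x2 Bayer pattern mentioned later in the description, and the function name is hypothetical.

```python
def align_roi(ori_rect):
    """ori_rect = [xmin0, xmax0, ymin0, ymax0]; returns the aligned DstRect."""
    xmin0, xmax0, ymin0, ymax0 = ori_rect
    xmin1 = xmin0 // 2 * 2          # round xmin down to an even value
    xmax1 = (xmax0 + 1) // 2 * 2    # round xmax up to an even value
    ymin1 = ymin0 // 2 * 2          # same rounding for the y range
    ymax1 = (ymax0 + 1) // 2 * 2
    return [xmin1, xmax1, ymin1, ymax1]

# An ROI with odd borders grows outward to the nearest even coordinates:
print(align_roi([101, 205, 51, 119]))
```

Note that a rectangle that already has even borders is returned unchanged, so the alignment is idempotent.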
Wherein the step of performing color correction on the reddish pixel region of the at least one frame of low-exposure image comprises: identifying a red-bias pixel region in the at least one frame of low-exposure image based on a red channel value, a green channel value, and a blue channel value in the low-exposure image; and re-assigning the red, green and blue channel values of the red-biased pixel region to obtain a corrected low-exposure image.
Wherein the step of color correcting the union of the reddish pixel regions of the two low-exposure images LowExp1 and LowExp2 comprises: identifying a first reddish pixel region LowMask1 in the previous-frame low-exposure image LowExp1; identifying a second reddish pixel region LowMask2 in the next-frame low-exposure image LowExp2; performing an OR operation on the first reddish pixel region LowMask1 and the second reddish pixel region LowMask2 to obtain their union LowMask; and performing color correction on the union LowMask to obtain a corrected low-exposure image LowExp.
Wherein the first partial red pixel region LowMask1 is identified according to the following formula:
LowMask1(i,j)=R_LowMask1(i,j)>R_thres&&R_LowMask1(i,j)>2*G_LowMask1(i,j)&&G_LowMask1(i,j)>B_LowMask1(i,j);
wherein, LowMask1(i, j) represents the channel value of the pixel (i, j) in the first partial red pixel region LowMask1, R _ LowMask1(i, j), G _ LowMask1(i, j), and B _ LowMask1(i, j) represent the red, green, and blue channel values of the pixel (i, j) in the first partial red pixel region LowMask1, and R _ thres represents the red channel lower limit value; and/or
The second partial red pixel region LowMask2 is identified according to the following equation:
LowMask2(i,j)=R_LowMask2(i,j)>R_thres&&R_LowMask2(i,j)>2*G_LowMask2(i,j)&&G_LowMask2(i,j)>B_LowMask2(i,j)
wherein, LowMask2(i, j) represents the channel value of the pixel (i, j) in the second partial red pixel region LowMask2, R _ LowMask2(i, j), G _ LowMask2(i, j), and B _ LowMask2(i, j) represent the red channel value, green channel value, and blue channel value of the pixel (i, j) in the second partial red pixel region LowMask2, and R _ thres represents the red channel lower limit value.
Wherein color correction is performed on the union LowMask of the reddish pixel regions according to the following formula:
when the value of LowMask (i, j) is 1, then
R_LowExp(i,j)=MAX(R_LowExp1(i,j),R_LowExp2(i,j));
G_LowExp(i,j)=MIN(B_LowExp1(i,j),B_LowExp2(i,j));
B_LowExp(i,j)=G_LowExp(i,j);
When the value of LowMask (i, j) is 0, then
R_LowExp(i,j)=(R_LowExp1(i,j)+R_LowExp2(i,j))/2;
G_LowExp(i,j)=(G_LowExp1(i,j)+G_LowExp2(i,j))/2;
B_LowExp(i,j)=(B_LowExp1(i,j)+B_LowExp2(i,j))/2;
Wherein, R _ LowExp (i, j), G _ LowExp (i, j), B _ LowExp (i, j) respectively represent red, green, blue channel values of the pixel (i, j) in the corrected low exposure image LowExp, R _ LowExp1(i, j), B _ LowExp1(i, j), G _ LowExp1(i, j) respectively represent red, blue, green channel values of the pixel (i, j) in the previous frame low exposure image LowExp1, R _ LowExp2(i, j), B _ LowExp2(i, j), G _ LowExp2(i, j) respectively represent red, blue, green channel values of the pixel (i, j) in the next frame low exposure image LowExp 2.
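The identification and correction formulas above can be sketched per pixel as follows. This is an illustrative reading under stated assumptions: images are plain lists of rows of (r, g, b) tuples, and `R_THRES` is an assumed threshold since the patent does not fix a value for R_thres; the function names are hypothetical.

```python
R_THRES = 200  # assumed red-channel lower limit; the patent leaves R_thres unspecified

def is_reddish(r, g, b):
    # Per-pixel test from the patent: strong red that dominates green,
    # with green still above blue (the yellow-shifted red-light signature).
    return r > R_THRES and r > 2 * g and g > b

def correct_pair(low1, low2):
    """Correct two low-exposure frames into one, per the patent's formulas.

    low1/low2: lists of rows of (r, g, b) tuples. A pixel is corrected when
    it is reddish in either frame (the OR / union mask LowMask); otherwise
    the two frames are simply averaged.
    """
    out = []
    for row1, row2 in zip(low1, low2):
        out_row = []
        for (r1, g1, b1), (r2, g2, b2) in zip(row1, row2):
            if is_reddish(r1, g1, b1) or is_reddish(r2, g2, b2):
                r = max(r1, r2)   # keep the stronger red response
                g = min(b1, b2)   # patent: green takes the smaller blue value
                b = g             # blue copies the corrected green
            else:
                r, g, b = (r1 + r2) // 2, (g1 + g2) // 2, (b1 + b2) // 2
            out_row.append((r, g, b))
        out.append(out_row)
    return out

# A yellow-shifted red pixel is pulled back toward pure red;
# a neutral pixel is just the average of the two frames:
print(correct_pair([[(250, 100, 40)]], [[(240, 90, 30)]]))
print(correct_pair([[(100, 100, 100)]], [[(50, 60, 70)]]))
```

Suppressing green and blue to the smaller blue value is what removes the yellow cast while leaving the red channel dominant.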
Wherein the step of generating a fused image comprises: identifying a highlight region NorMask in the normal exposure image NorExp; taking intersection of the partial red pixel region LowMask and the highlight region NorMask to obtain a gray level image NewGray; and taking the gray level image NewGray as a weight, and carrying out weighted assignment on the corrected low-exposure image LowExp and the normal-exposure image NorExp to generate a fused image.
Wherein the highlight region NorMask is identified according to the following condition:
NorMask(i,j)=MAX(R_NorExp(i,j),G_NorExp(i,j),B_NorExp(i,j))>V_max;
wherein, NorMask (i, j) represents the channel value of the pixel point (i, j) in the NorMask in the highlight area, R _ NorExp (i, j), G _ NorExp (i, j), B _ NorExp (i, j) represent the red, green, blue channel values of the pixel point (i, j) in the NorExp of the normal exposure image, respectively, and V _ max represents the lower limit value of the brightness.
In the step of performing weighted assignment on the corrected low-exposure image LowExp and the normal-exposure image NorExp to generate a fused image, a fused image DstImg is generated according to the following formula:
R_DstImg(i,j)=(R_LowExp(i,j)*NewGray(i,j)+R_NorExp(i,j)*(255-NewGray(i,j)))>>8;
G_DstImg(i,j)=(G_LowExp(i,j)*NewGray(i,j)+G_NorExp(i,j)*(255-NewGray(i,j)))>>8;
B_DstImg(i,j)=(B_LowExp(i,j)*NewGray(i,j)+B_NorExp(i,j)*(255-NewGray(i,j)))>>8;
wherein, R _ DstImg (i, j), G _ DstImg (i, j) and B _ DstImg (i, j) respectively represent red, green and blue channel values of the pixel (i, j) in the fused image DstImg, NewGray represents a gray scale, R _ LowExp (i, j), G _ LowExp (i, j) and B _ LowExp (i, j) respectively represent red, green and blue channel values of the pixel (i, j) in the corrected low-exposure image LowExp, and R _ NorExp (i, j), G _ NorExp (i, j) and B _ NorExp (i, j) respectively represent red, green and blue channels of the pixel (i, j) in the normal-exposure image NorExp.
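The highlight test and the weighted fusion above can be sketched per pixel. This is a minimal illustration under assumptions: `V_MAX` is an assumed brightness lower limit (the patent leaves V_max open), and NewGray is taken as a binary 0/255 weight from the LowMask-NorMask intersection, whereas the patent's gray-level map could in principle be softer.

```python
V_MAX = 220  # assumed brightness lower limit for the highlight test

def fuse_pixel(low, nor, in_low_mask):
    """Blend one corrected low-exposure pixel with the normal-exposure pixel.

    low / nor: (r, g, b) tuples; in_low_mask: whether the pixel belongs to
    the reddish-pixel union LowMask.
    """
    highlight = max(nor) > V_MAX                    # the NorMask condition
    new_gray = 255 if (in_low_mask and highlight) else 0  # LowMask AND NorMask
    return tuple(
        # The patent's formula: weight by NewGray / (255 - NewGray),
        # then >> 8, i.e. an approximate division by 256.
        (l * new_gray + n * (255 - new_gray)) >> 8
        for l, n in zip(low, nor)
    )

# Inside a reddish highlight the corrected low-exposure color wins;
# everywhere else the normal-exposure color is kept (minus rounding):
print(fuse_pixel((200, 30, 30), (255, 250, 245), True))
print(fuse_pixel((10, 10, 10), (120, 130, 140), False))
```

Because the weights sum to 255 but the shift divides by 256, each channel loses at most one gray level; that matches the formula as written in the patent.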
According to another aspect of the present invention, there is provided a signal lamp image processing system, including: the image fusion device comprises an image acquisition unit, an image correction unit and an image fusion unit; the image acquisition unit is used for acquiring at least one frame of low-exposure image and at least one frame of normal-exposure image; the image correction unit is used for carrying out color correction on the reddish pixel area of the at least one frame of low-exposure image to obtain a corrected low-exposure image; and the image fusion unit is used for fusing the corrected low-exposure image and the normal-exposure image to generate a fusion image.
When the image acquisition unit acquires one frame of normal-exposure image NorExp, a previous frame of low-exposure image LowExp1 and a next frame of low-exposure image LowExp2, the two low-exposure images LowExp1 and LowExp2 and the normal-exposure image NorExp are sent to the image correction unit; the image correction unit performs color correction on the union of the reddish pixel regions of the two low-exposure images LowExp1 and LowExp2 to obtain a corrected low-exposure image LowExp, and sends it to the image fusion unit; the image fusion unit fuses the corrected low-exposure image LowExp with the normal-exposure image NorExp to generate a fused image DstImg.
Wherein the image fusion unit performs the following operations: and taking a gray level image of an intersection area of a red pixel area of at least one frame of low-exposure image and a high-brightness area of a normal-exposure image as a weight, and carrying out weighted assignment on red, green and blue channel values of the corrected low-exposure image and the corrected normal-exposure image to obtain a fused image.
Wherein the image acquisition unit includes: the system comprises an interested region selection module, an interested region alignment module and an image acquisition module; the interested region selection module is used for selecting an interested region OriRect; the interested region alignment module is used for carrying out image coordinate alignment on the interested region OriRect so as to obtain an aligned interested region DstRect; and the image acquisition module is used for acquiring at least one frame of low-exposure image and at least one frame of normal-exposure image of the aligned region of interest DstRect.
The region-of-interest alignment module performs image coordinate alignment on the region of interest OriRect according to the following formula:
OriRect=[xmin0,xmax0,ymin0,ymax0];
DstRect=[xmin1,xmax1,ymin1,ymax1];
xmin1=xmin0/2*2;xmax1=(xmax0+1)/2*2;
ymin1=ymin0/2*2;ymax1=(ymax0+1)/2*2;
wherein OriRect represents the region of interest and DstRect represents the aligned region of interest.
Wherein the image correction unit includes: the device comprises a red pixel area identification module and a red pixel area correction module; a red pixel region identification module, configured to identify a red pixel region in the at least one frame of low-exposure image based on a red channel value, a green channel value, and a blue channel value in the low-exposure image; and the red pixel area correction module is used for reassigning the red, green and blue channel values of the red pixel area to obtain a corrected low-exposure image.
The reddish pixel region identification module is configured to identify a first reddish pixel region LowMask1 in the previous-frame low-exposure image LowExp1, and to identify a second reddish pixel region LowMask2 in the next-frame low-exposure image LowExp2; the reddish pixel region correction module is configured to perform an OR operation on the first reddish pixel region LowMask1 and the second reddish pixel region LowMask2 to obtain their union LowMask, and to perform color correction on the union LowMask to obtain a corrected low-exposure image LowExp.
The red-biased pixel region identification module identifies a first red-biased pixel region LowMask1 according to the following formula:
LowMask1(i,j)=R_LowMask1(i,j)>R_thres&&R_LowMask1(i,j)>2*G_LowMask1(i,j)&&G_LowMask1(i,j)>B_LowMask1(i,j);
wherein, LowMask1(i, j) represents the channel value of the pixel (i, j) in the first partial red pixel region LowMask1, R _ LowMask1(i, j), G _ LowMask1(i, j), and B _ LowMask1(i, j) represent the red, green, and blue channel values of the pixel (i, j) in the first partial red pixel region LowMask1, and R _ thres represents the red channel lower limit value; and/or
The red-biased pixel region identification module identifies a second red-biased pixel region LowMask2 according to the following equation:
LowMask2(i,j)=R_LowMask2(i,j)>R_thres&&R_LowMask2(i,j)>2*G_LowMask2(i,j)&&G_LowMask2(i,j)>B_LowMask2(i,j);
wherein, LowMask2(i, j) represents the channel value of the pixel (i, j) in the second partial red pixel region LowMask2, R _ LowMask2(i, j), G _ LowMask2(i, j), and B _ LowMask2(i, j) represent the red, green, and blue channel values of the pixel (i, j) in the second partial red pixel region LowMask2, respectively, and R _ thres represents the red channel lower limit value.
The red pixel region correction module performs color correction on the union LowMask of the red pixel regions according to the following formula:
when the value of LowMask (i, j) is 1, then
R_LowExp(i,j)=MAX(R_LowExp1(i,j),R_LowExp2(i,j));
G_LowExp(i,j)=MIN(B_LowExp1(i,j),B_LowExp2(i,j));
B_LowExp(i,j)=G_LowExp(i,j);
When the value of LowMask (i, j) is 0, then
R_LowExp(i,j)=(R_LowExp1(i,j)+R_LowExp2(i,j))/2;
G_LowExp(i,j)=(G_LowExp1(i,j)+G_LowExp2(i,j))/2;
B_LowExp(i,j)=(B_LowExp1(i,j)+B_LowExp2(i,j))/2;
Wherein, R _ LowExp (i, j), G _ LowExp (i, j), B _ LowExp (i, j) respectively represent red, green, blue channel values of the pixel (i, j) in the corrected low exposure image LowExp, R _ LowExp1(i, j), B _ LowExp1(i, j), G _ LowExp1(i, j) respectively represent red, blue, green channel values of the pixel (i, j) in the low exposure image LowExp1, R _ LowExp2(i, j), B _ LowExp2(i, j), G _ LowExp2(i, j) respectively represent red channel values, blue, green channel values of the pixel (i, j) in the low exposure image LowExp 2.
Wherein the image fusion unit includes: the system comprises a brightness region identification module, an intersection calculation module and a fusion module; the brightness region identification module is used for identifying a high brightness region NorMask in the normal exposure image NorExp; the intersection calculation module is used for taking intersection of the partial red pixel region LowMask and the highlight region NorMask to obtain a gray level image NewGray; and the fusion module is used for carrying out weighted assignment on the corrected low-exposure image LowExp and the normal-exposure image NorExp by taking the gray level image NewGray as a weight so as to generate a fusion image.
The high-brightness region NorMask identified by the brightness region identification module meets the following conditions:
NorMask(i,j)=MAX(R_NorExp(i,j),G_NorExp(i,j),B_NorExp(i,j))>V_max;
wherein, NorMask (i, j) represents the channel value of the pixel point (i, j) in the NorMask in the highlight area, R _ NorExp (i, j), G _ NorExp (i, j), B _ NorExp (i, j) represent the red, green, blue channel values of the pixel point (i, j) in the NorExp of the normal exposure image, respectively, and V _ max represents the lower limit value of the brightness.
Wherein the fusion module generates a fused image DstImg according to the following formula:
R_DstImg(i,j)=(R_LowExp(i,j)*NewGray(i,j)+R_NorExp(i,j)*(255-NewGray(i,j)))>>8;
G_DstImg(i,j)=(G_LowExp(i,j)*NewGray(i,j)+G_NorExp(i,j)*(255-NewGray(i,j)))>>8;
B_DstImg(i,j)=(B_LowExp(i,j)*NewGray(i,j)+B_NorExp(i,j)*(255-NewGray(i,j)))>>8;
wherein, R _ DstImg (i, j), G _ DstImg (i, j), B _ DstImg (i, j) respectively represent the red, green, blue channels of the pixel (i, j) in the fused image DstImg, NewGray represents the gray map, R _ LowExp (i, j), G _ LowExp (i, j), B _ LowExp (i, j) respectively represent the red, green, blue channels of the pixel (i, j) in the corrected low exposure image LowExp, R _ NorExp (i, j), G _ NorExp (i, j), B _ NorExp (i, j) respectively represent the red, green, blue channels of the pixel (i, j) in the normal exposure image NorExp.
Compared with the prior-art scheme of adding an optical filter in front of the camera lens, the invention does not change the overall brightness of the image, does not degrade image quality, adds no hardware devices, and reduces hardware cost. Compared with conventional image-processing schemes, the invention corrects only the reddish pixels in the image and can process the image accurately down to the pixel level, avoiding the poor results caused by localization errors. Moreover, since only the region of interest is corrected, the amount of computation is greatly reduced, making the method easy to port to different platforms and able to meet the real-time processing requirements of video signals.
Drawings
FIG. 1 is a schematic flow diagram of a method for processing signal light images according to an embodiment of the invention;
FIG. 2 is a schematic flow chart diagram of a method for processing signal light images in accordance with an alternative embodiment of the present invention;
FIG. 3 is a flowchart illustrating steps S1 and S10 according to an embodiment of the present invention;
FIG. 4 is a flowchart illustrating step S2 according to an embodiment of the present invention;
FIG. 5 is a flowchart illustrating step S20 according to an embodiment of the present invention;
FIG. 6 is a flowchart illustrating steps S3 and S30 according to an embodiment of the present invention;
FIG. 7 is a schematic block diagram of a signal light image processing system according to another embodiment of the present invention;
FIG. 8 is a schematic structural diagram of the image acquisition unit 1 according to an embodiment of the present invention;
FIG. 9 is a schematic structural diagram of the image correction unit 2 and the image fusion unit 3 according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail with reference to the accompanying drawings in conjunction with the following detailed description. It should be understood that the description is intended to be exemplary only, and is not intended to limit the scope of the present invention. Moreover, in the following description, descriptions of well-known structures and techniques are omitted so as to not unnecessarily obscure the concepts of the present invention.
It should be noted that the Bayer pattern refers to the arrangement of red, green and blue filters on the sensor surface of a digital camera. Light passing through the Bayer pattern excites the camera sensor, and the image obtained after analog-to-digital conversion is called a Bayer image.
The exposure degree refers to the amount of light an image receives. The more light an image receives, i.e. the higher the exposure, the whiter the image; the less light it receives, i.e. the lower the exposure, the darker the image.
Fig. 1 is a schematic flow chart of a signal lamp image processing method according to the present invention.
As shown in fig. 1, the signal lamp image processing method of the present invention includes:
in step S1, at least one frame of low exposure image and at least one frame of normal exposure image are acquired.
This step collects video images of the signal lights. In a video surveillance system, the camera continuously and cyclically captures images of the video scene, alternating between low exposure and normal exposure, so it captures at least one frame of low-exposure image and at least one frame of normal-exposure image. Specifically, an exposure upper limit and an exposure lower limit can be preset and the exposure of each frame detected in real time: when a frame's exposure lies between the upper and lower limits, that frame is collected as a normal-exposure image; when a frame's exposure is below the lower limit, that frame is collected as a low-exposure image.
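The threshold test described above can be sketched as follows. The threshold values and the function name are hypothetical; the patent only specifies that a frame between the limits is "normal" and a frame below the lower limit is "low".

```python
# Assumed exposure limits; the patent presets these but gives no values.
EXP_LOW, EXP_HIGH = 80, 180

def classify_frame(exposure_value):
    """Classify one frame by its measured exposure, per the acquisition step."""
    if EXP_LOW <= exposure_value <= EXP_HIGH:
        return "normal"   # collected as a normal-exposure image
    if exposure_value < EXP_LOW:
        return "low"      # collected as a low-exposure image
    return "over"         # above the upper limit: not used by the method

# A cyclic low/normal capture sequence yields alternating classes:
print([classify_frame(e) for e in (40, 120, 35, 200)])
```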
Here, the image captured by the camera may be a bayer image, or may be another type of image.
Step S2, performing color correction on the red-biased pixel region of the at least one frame of low-exposure image to obtain a corrected low-exposure image.
In the step, a red pixel area of at least one frame of low-exposure image is identified, and after the red pixel area is identified, color correction is performed on the red pixel area to obtain a corrected low-exposure image.
The specific implementation of this step can be seen in fig. 4 below.
Step S3, fusing the corrected low-exposure image with the normal-exposure image to generate a fused image.
In this step, the gray-level map of the intersection of the reddish pixel region of the at least one frame of low-exposure image and the highlight region of the normal-exposure image is used as the weight, and the red, green and blue channel values of the corrected low-exposure image and the normal-exposure image are weighted to obtain the fused image. Because the fused image is generated by fusion with the normal-exposure image, its overall brightness is unchanged; only the reddish pixel region of the image is corrected, so processing can be accurate to the pixel level; and only the region of interest is corrected, so the amount of computation is greatly reduced compared with the prior art.
Fig. 2 is a schematic flow chart of a signal lamp image processing method according to an alternative embodiment of the present invention.
As shown in fig. 2, a method for processing a signal lamp image according to an alternative embodiment of the present invention includes:
in step S10, a normal exposure image NorExp of one frame, a low exposure image LowExp1 of a previous frame, and a low exposure image LowExp2 of a next frame are acquired.
In this step, as described above, the camera cyclically captures images of the video scene at low and normal exposure. The exposure of each frame is detected and compared against the preset upper and lower limits in real time; when a frame whose exposure lies between the limits is captured as the normal-exposure image, the low-exposure image LowExp1 of the preceding frame and the low-exposure image LowExp2 of the following frame (both frames whose exposure is below the lower limit) are captured along with it.
Here, the camera may capture any three frames, for example one frame of normal-exposure image and any two frames of low-exposure image. As described above, in an alternative embodiment of the invention, one frame of normal-exposure image together with the preceding and following frames of low-exposure image may be captured.
In step S20, color correction is performed on the union of the partial red pixel regions of the two frames of low exposure images LowExp1 and LowExp2 to obtain a corrected low exposure image LowExp.
In this step, the reddish pixel regions of the two frames of low-exposure images LowExp1 and LowExp2 are respectively identified, and the reddish pixel regions are subjected to union operation to obtain a union of the reddish pixel regions of the two frames of low-exposure images LowExp1 and LowExp 2; further, color correction is performed on a union of the reddish pixel regions of the two frames of low-exposure images LowExp1 and LowExp2, so that a corrected low-exposure image LowExp is obtained.
Here, the union of the red-biased pixel regions may be calculated by an or operation or other logical operation.
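As the text notes, the union is simply a per-pixel logical OR of the two frame masks. A minimal sketch over boolean rows (the function name is hypothetical):

```python
def union_mask(mask1, mask2):
    """Elementwise OR of two binary masks stored as lists of boolean rows."""
    return [[a or b for a, b in zip(r1, r2)]
            for r1, r2 in zip(mask1, mask2)]

# A pixel reddish in either frame ends up in the union LowMask:
m1 = [[True, False], [False, False]]
m2 = [[False, False], [True, False]]
print(union_mask(m1, m2))
```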
The specific implementation of this step can be seen in fig. 5 below.
Step S30, fusing the corrected low exposure image LowExp and the normal exposure image NorExp to generate a fused image DstImg.
As described above, the gray-level map of the intersection of the reddish pixel regions of the two low-exposure images and the highlight region of the normal-exposure image is used as the weight, and the red, green and blue channel values of the corrected low-exposure image and the normal-exposure image are weighted to obtain the fused image. Because the fused image is generated by fusion with the normal-exposure image, its overall brightness is unchanged; during color correction only the reddish pixel region of the image is corrected, so processing can be accurate to the pixel level; and only the region of interest is corrected, so the amount of computation is greatly reduced.
FIG. 3 is a flowchart illustrating steps S1 and S10 according to an embodiment of the present invention.
As shown in fig. 3, the step of acquiring the low-exposure image and the normal-exposure image in the aforementioned step S1 and step S10 of the present invention includes:
in step S11, a region of interest OriRect is selected.
In this step, at initialization, i.e., before performing step S1 (acquiring at least one frame of low exposure image and at least one frame of normal exposure image) and step S10 (acquiring one frame of normal exposure image, a previous frame of low exposure image, and a subsequent frame of low exposure image), the user's region of interest OriRect, i.e., the range of the signal light to be corrected in the video image, is first selected.
Here, when selecting the region of interest, it may be selected automatically, manually, or by other selection methods.
In a video monitoring system, the height, position and orientation of the camera are fixed. The present invention does not account for video scene changes caused by camera shake, i.e., the video scene of the camera is assumed to be stationary. The camera's monitoring range is wide and generally includes areas that do not need correction (i.e., non-interest regions), such as roads, vehicles, pedestrians and surrounding buildings. If the whole image were used as the input parameter for correction, the amount of calculation would be very large and much of it redundant.
According to the invention, only the region of interest, rather than the whole image, is taken as the input parameter. Thus only the region of interest needs to be corrected, non-interest regions are left untouched, no redundant calculation is produced, and the amount of calculation is greatly reduced.
And step S12, carrying out image coordinate alignment on the region of interest to obtain an aligned region of interest DstRect.
Specifically, the region of interest OriRect is image coordinate-aligned according to the following formula:
OriRect=[xmin0,xmax0,ymin0,ymax0];
DstRect=[xmin1,xmax1,ymin1,ymax1];
xmin1=xmin0/2*2;xmax1=(xmax0+1)/2*2;
ymin1=ymin0/2*2;ymax1=(ymax0+1)/2*2;
wherein OriRect represents the region of interest, DstRect represents the aligned region of interest, xmin0 represents the X coordinate of the upper left corner of the region of interest, ymin0 represents the Y coordinate of the upper left corner of the region of interest, xmax0 represents the X coordinate of the lower right corner of the region of interest, and ymax0 represents the Y coordinate of the lower right corner of the region of interest; similarly, xmin1 represents the X coordinate of the upper left corner of the aligned region of interest, ymin1 represents the Y coordinate of the upper left corner of the aligned region of interest, xmax1 represents the X coordinate of the lower right corner of the aligned region of interest, and ymax1 represents the Y coordinate of the lower right corner of the aligned region of interest.
Here, four mutually adjacent pixels are taken as one calculation unit, with the aim of reducing computational complexity without losing accuracy. In addition, a Bayer image has several arrangement formats, such as BGGR, GBRG, and so on; the four pixels of each calculation unit are arranged 2 × 2, and the RGB arrangement of the four pixels differs by format. For example, for BGGR the four pixels are, in order, B, G, G, R; the arrangements for the other formats follow the same principle and are not repeated here.
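The alignment formulas above can be sketched as a small helper. This is an illustrative Python version (the function name is ours, not the patent's); integer division snaps the corners to even coordinates so each 2 × 2 Bayer cell stays intact:

```python
def align_roi(ori_rect):
    """ori_rect = [xmin0, xmax0, ymin0, ymax0] -> DstRect with even corners."""
    xmin0, xmax0, ymin0, ymax0 = ori_rect
    xmin1 = xmin0 // 2 * 2          # round the top-left X down to even
    xmax1 = (xmax0 + 1) // 2 * 2    # round the bottom-right X up to even
    ymin1 = ymin0 // 2 * 2          # round the top-left Y down to even
    ymax1 = (ymax0 + 1) // 2 * 2    # round the bottom-right Y up to even
    return [xmin1, xmax1, ymin1, ymax1]

print(align_roi([101, 500, 33, 299]))  # -> [100, 500, 32, 300]
```
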
In step S13, at least one frame of low exposure image and at least one frame of normal exposure image are acquired for the aligned region of interest DstRect.
In the step, after the image coordinates are aligned, at least one frame of low-exposure image and at least one frame of normally-exposed image of DstRect of the aligned region of interest are obtained.
In practice, step S11 and step S12 are run only once, during algorithm initialization, while step S13 is called in a loop.
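The initialize-once, capture-in-a-loop structure described above might be sketched as follows; every name here is an illustrative stand-in, not the patent's API:

```python
# Hypothetical control-flow sketch of the acquisition stage: the ROI is
# selected and aligned once at initialization (steps S11-S12), while the
# frame grab (step S13) is called repeatedly.

def run_acquisition(select_roi, align_roi, grab_frames, n_frames):
    roi = align_roi(select_roi())   # steps S11 and S12: run exactly once
    captured = []
    for _ in range(n_frames):       # step S13: called in a loop
        captured.append(grab_frames(roi))
    return captured

# Demo with trivial stubs in place of real camera calls.
frames = run_acquisition(
    select_roi=lambda: [101, 500, 33, 299],
    align_roi=lambda r: [r[0] // 2 * 2, (r[1] + 1) // 2 * 2,
                         r[2] // 2 * 2, (r[3] + 1) // 2 * 2],
    grab_frames=lambda roi: {"roi": roi, "low": None, "normal": None},
    n_frames=3,
)
print(len(frames))  # 3
```
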
Fig. 4 is a flowchart illustrating step S2 according to an embodiment of the present invention.
As shown in fig. 4, the foregoing step S2 further includes the following steps:
step S21, identifying a red-bias pixel region in the at least one frame of low-exposure image based on the red channel value, the green channel value and the blue channel value in the low-exposure image.
In this step, the reddish pixel region in each frame of low-exposure image is identified according to the relationship between the red, green and blue channel values of each pixel and the red channel lower limit value. The three channel values (red, green and blue) of a reddish pixel satisfy certain relations with the red channel lower limit value, and the reddish pixel region in the low-exposure image is identified from these relations.
The specific implementation of this step can be seen in step S201 and step S202 in fig. 5 below.
And step S22, reassigning the red, green and blue channel values of the red-biased pixel region to obtain a corrected low-exposure image.
In this step, the three-color channel value (red channel value, green channel value, and blue channel value) of each pixel in the red-biased pixel region is re-assigned based on the red channel value, green channel value, and blue channel value of each pixel in each frame of low-exposure image, so as to obtain a corrected low-exposure image.
The specific implementation of this step can be seen in step S204 of fig. 5 below.
Fig. 5 is a flowchart illustrating step S20 according to an embodiment of the present invention.
As shown in fig. 5, the aforementioned step S20 of the present invention includes the following steps:
in step S201, a first reddish-pixel region LowMask1 is identified in the previous frame low-exposure image LowExp 1.
Specifically, the red, green and blue channel values of each pixel in the first reddish pixel region satisfy the following relations with the red channel lower limit value: the red channel value is greater than the red channel lower limit value, the green channel value is less than 1/2 of the red channel value, and the blue channel value is less than the green channel value. The first reddish pixel region LowMask1 is identified by ANDing these three per-pixel conditions.
The first reddish pixel region LowMask1 is identified according to the following equation:
LowMask1(i,j)=R_LowMask1(i,j)>R_thres&&R_LowMask1(i,j)>2*G_LowMask1(i,j)&&G_LowMask1(i,j)>B_LowMask1(i,j);
wherein LowMask1(i, j) represents the value of pixel (i, j) in the first reddish pixel region LowMask1; R_LowMask1(i, j), G_LowMask1(i, j) and B_LowMask1(i, j) represent the red, green and blue channel values of pixel (i, j) in the first reddish pixel region LowMask1, respectively; and R_thres represents the red channel lower limit value, whose value range may be 1536-2048, and those skilled in the art may set the lower limit value as required.
In the invention, the values of i and j in pixel (i, j) depend on the resolution of the image. For example, in an image with a resolution of 1080p, i takes values from 0 to 1920 and j takes values from 0 to 1080. The values of i and j are natural numbers.
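The identification formula for LowMask1 can be sketched in NumPy. The function name and the array layout are our assumptions, and R_thres defaults to the lower end of the 1536-2048 range given above:

```python
import numpy as np

def reddish_mask(r, g, b, r_thres=1536):
    """True where R > R_thres, R > 2*G and G > B (per-pixel AND)."""
    return (r > r_thres) & (r > 2 * g) & (g > b)

# Two-pixel example: the first satisfies all three conditions, the second
# fails the threshold test.
r = np.array([[2000, 100]])
g = np.array([[ 500, 400]])
b = np.array([[ 300, 500]])
print(reddish_mask(r, g, b))  # first pixel reddish, second not
```
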
In step S202, a second partial red pixel region LowMask2 is identified in the subsequent frame low exposure image LowExp 2.
In the same manner as for the first reddish pixel region, the red, green and blue channel values of each pixel in the second reddish pixel region also satisfy the following relations with the red channel lower limit value: the red channel value is greater than the red channel lower limit value, the green channel value is less than 1/2 of the red channel value, and the blue channel value is less than the green channel value. The second reddish pixel region LowMask2 is identified by ANDing these three per-pixel conditions.
The second partial red pixel region LowMask2 is identified according to the following equation:
LowMask2(i,j)=R_LowMask2(i,j)>R_thres&&R_LowMask2(i,j)>2*G_LowMask2(i,j)&&G_LowMask2(i,j)>B_LowMask2(i,j);
wherein LowMask2(i, j) represents the value of pixel (i, j) in the second reddish pixel region LowMask2; R_LowMask2(i, j), G_LowMask2(i, j) and B_LowMask2(i, j) represent the red, green and blue channel values of pixel (i, j) in the second reddish pixel region LowMask2, respectively; and R_thres represents the red channel lower limit value, whose value range may be 1536-2048, and those skilled in the art may set the lower limit value as required.
In step S203, an OR operation is performed on the first reddish pixel region LowMask1 and the second reddish pixel region LowMask2 to obtain the union LowMask of the two regions.
In this step, the first reddish pixel region LowMask1 identified in step S201 and the second reddish pixel region LowMask2 identified in step S202 are ORed to obtain their union LowMask.
Specifically, the union LowMask of the two is calculated according to the following formula:
LowMask(i,j)=LowMask1(i,j)||LowMask2(i,j);
wherein, LowMask1(i, j) represents the channel value of the pixel (i, j) in the first reddish-biased pixel region LowMask1, and LowMask2(i, j) represents the channel value of the pixel (i, j) in the second reddish-biased pixel region LowMask 2.
Here, the execution order of step S201 and step S202 is not unique, and may be executed sequentially or simultaneously.
Step S204, performing color correction on the union LowMask of the first and second reddish pixel regions to obtain a corrected low-exposure image LowExp.
In this step, the red, green and blue channel values of the pixels in the union LowMask of the first reddish pixel region LowMask1 and the second reddish pixel region LowMask2 are reassigned, and the reassigned values are written into the low-exposure image to obtain the corrected low-exposure image LowExp.
Specifically, the color correction is performed on the union LowMask of the partial red pixel regions according to the following formula:
when the value of LowMask (i, j) is 1, then
R_LowExp(i,j)=MAX(R_LowExp1(i,j),R_LowExp2(i,j));
G_LowExp(i,j)=MIN(B_LowExp1(i,j),B_LowExp2(i,j));
B_LowExp(i,j)=G_LowExp(i,j);
When the value of LowMask (i, j) is 0, then
R_LowExp(i,j)=(R_LowExp1(i,j)+R_LowExp2(i,j))/2;
G_LowExp(i,j)=(G_LowExp1(i,j)+G_LowExp2(i,j))/2;
B_LowExp(i,j)=(B_LowExp1(i,j)+B_LowExp2(i,j))/2;
Wherein R_LowExp(i, j), G_LowExp(i, j) and B_LowExp(i, j) respectively represent the red, green and blue channel values of pixel (i, j) in the corrected low-exposure image LowExp; R_LowExp1(i, j), B_LowExp1(i, j) and G_LowExp1(i, j) respectively represent the red, blue and green channel values of pixel (i, j) in the previous frame low-exposure image LowExp1; and R_LowExp2(i, j), B_LowExp2(i, j) and G_LowExp2(i, j) respectively represent the red, blue and green channel values of pixel (i, j) in the next frame low-exposure image LowExp2.
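A minimal NumPy sketch of the correction rule above, assuming the two low-exposure frames are supplied as separate (R, G, B) float arrays and LowMask as a boolean array; all function and variable names here are ours, not the patent's:

```python
import numpy as np

def correct_low_exposure(low_mask, rgb1, rgb2):
    """rgb1/rgb2: (R, G, B) float arrays of LowExp1 and LowExp2."""
    r1, g1, b1 = rgb1
    r2, g2, b2 = rgb2
    # LowMask == 0: average the two low-exposure frames channel by channel.
    r = (r1 + r2) / 2
    g = (g1 + g2) / 2
    b = (b1 + b2) / 2
    # LowMask == 1: R = MAX of the two reds, G = MIN of the two blues, B = G.
    r[low_mask] = np.maximum(r1, r2)[low_mask]
    g[low_mask] = np.minimum(b1, b2)[low_mask]
    b[low_mask] = g[low_mask]
    return r, g, b
```
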
In an alternative embodiment, after the step of performing color correction on the red-biased pixel region, the method further includes outputting the corrected low-exposure image LowExp.
As described above, the present invention corrects only the reddish pixel region of the image (region of interest), the amount of calculation is greatly reduced compared to the prior art, and the processing of the image (region of interest) by the present invention can be accurate to the pixel level.
FIG. 6 is a flowchart illustrating steps S3 and S30 according to an embodiment of the present invention.
As shown in fig. 6, the aforementioned step S3 and the aforementioned step S30 of the present invention further include:
in step S31, a highlight region NorMask is identified in the normal exposure image NorExp.
In this step, the obtained normal exposure image NorExp is first processed as follows: a highlight region NorMask is identified in the normal exposure image NorExp; the gray value of the highlight region NorMask is then set to 1, and the gray values of all image areas outside NorMask are set to 0, yielding a binary image.
The NorMask of the highlight region meets the following conditions: the maximum channel value of the red channel value, the green channel value and the blue channel value of each pixel in the highlight region NorMask is greater than the brightness lower limit value V _ max.
Specifically, the NorMask of the highlight region meets the following conditions:
NorMask(i,j)=MAX(R_NorExp(i,j),G_NorExp(i,j),B_NorExp(i,j))>V_max;
wherein NorMask(i, j) represents the value of pixel (i, j) in the highlight region NorMask; R_NorExp(i, j), G_NorExp(i, j) and B_NorExp(i, j) represent the red, green and blue channel values of pixel (i, j) in the normal exposure image NorExp; and V_max represents the luminance lower limit value, whose value range may be 1536-2048, and those skilled in the art may set the lower limit value as required.
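The highlight condition can be sketched in NumPy as follows; the function name is illustrative and V_max defaults to the lower end of the range stated above:

```python
import numpy as np

def highlight_mask(r, g, b, v_max=1536):
    """True where MAX(R, G, B) exceeds the luminance lower limit V_max."""
    return np.maximum(np.maximum(r, g), b) > v_max

# Two-pixel example: only the first pixel's maximum channel exceeds V_max.
r = np.array([[2000, 100]])
g = np.array([[ 300, 900]])
b = np.array([[ 100, 200]])
print(highlight_mask(r, g, b))  # first pixel is a highlight, second is not
```
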
Step S32, intersecting the partial red pixel region LowMask and the highlight region NorMask to obtain a gray level map NewGray.
Specifically, the intersection of the reddish pixel region and the highlight region NorMask is taken: where both the reddish pixel region and the highlight region have the value 1, their intersection is 1 and the resulting gray map OriGray takes the value 255; where at least one of them has the value 0 (i.e., the reddish pixel region is 0, or the highlight region is 0, or both are 0, so their intersection is 0), the resulting gray map OriGray takes the value 0.
First, the original gray scale image OriGray is obtained according to the following formula:
OriGray(i,j)=(LowMask(i,j)&&NorMask(i,j))*255;
next, the original gray map OriGray obtained above is subjected to box blurring to obtain the gray map NewGray. Box blurring smooths an image based on the average value of neighbouring pixels: it is a spatial-domain linear filtering method in which each pixel value of the blurred image equals the mean of the values of that pixel's neighbours.
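The intersection-to-0/255 map followed by a blur can be sketched in NumPy. The 3 × 3 blur window (radius 1) is our assumption, since the text only says each output pixel is the mean of its neighbours:

```python
import numpy as np

def gray_map(low_mask, nor_mask, radius=1):
    """OriGray = (LowMask AND NorMask) * 255, then box-blurred to NewGray."""
    ori = (low_mask & nor_mask).astype(np.float32) * 255
    # Box blur: each output pixel is the mean of the (2*radius+1)^2 window,
    # computed as a sum of shifted copies over an edge-padded image.
    pad = np.pad(ori, radius, mode='edge')
    k = 2 * radius + 1
    out = np.zeros_like(ori)
    for dy in range(k):
        for dx in range(k):
            out += pad[dy:dy + ori.shape[0], dx:dx + ori.shape[1]]
    return out / (k * k)
```
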
Here, if the camera captures a frame of low-exposure image, the red-shifted pixel region in step S32 is the red-shifted pixel region of the frame of low-exposure image; if two frames of low-exposure images are captured, the red-biased pixel area in step S32 is the union of the red-biased pixel areas of the two frames of low-exposure images, and so on.
And step S33, taking the gray map as a weight, and performing weighted assignment on the corrected low-exposure image LowExp and the normal-exposure image NorExp to generate a fused image.
In this step, the gray-scale map is used as a weight, and the corrected low-exposure image LowExp and the normal-exposure image NorExp are subjected to weighted assignment to generate a fusion image.
Taking the red channel of the fused image as an example: the red channel value of the corrected low-exposure image LowExp is multiplied by NewGray(i, j), the red channel value of the normal exposure image NorExp is multiplied by (255 - NewGray(i, j)), and the two products are summed. The green and blue channel values of the fused image are generated in the same way as the red channel value and are not repeated here.
Specifically, the fused image DstImg is generated according to the following formula:
R_DstImg(i,j)=(R_LowExp(i,j)*NewGray(i,j)+R_NorExp(i,j)*(255-NewGray(i,j)))>>8;
G_DstImg(i,j)=(G_LowExp(i,j)*NewGray(i,j)+G_NorExp(i,j)*(255-NewGray(i,j)))>>8;
B_DstImg(i,j)=(B_LowExp(i,j)*NewGray(i,j)+B_NorExp(i,j)*(255-NewGray(i,j)))>>8;
wherein R_DstImg(i, j), G_DstImg(i, j) and B_DstImg(i, j) respectively represent the red, green and blue channel values of pixel (i, j) in the fused image DstImg; NewGray represents the gray map; R_LowExp(i, j), G_LowExp(i, j) and B_LowExp(i, j) respectively represent the red, green and blue channel values of pixel (i, j) in the corrected low-exposure image LowExp; and R_NorExp(i, j), G_NorExp(i, j) and B_NorExp(i, j) respectively represent the red, green and blue channel values of pixel (i, j) in the normal-exposure image NorExp.
Here, > >8 denotes that the value after summation is shifted to the right by eight bits and assigned to the red channel value, the green channel value, and the blue channel value of the fused image.
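The per-channel fusion, including the right shift by eight bits, can be sketched as follows (illustrative names; integer NumPy arrays assumed, with the shift dividing the weighted sum by 256):

```python
import numpy as np

def fuse_channel(low, nor, new_gray):
    """One channel of DstImg; new_gray is the 0-255 weight map NewGray."""
    return (low * new_gray + nor * (255 - new_gray)) >> 8

# First pixel is fully weighted toward the corrected low-exposure frame,
# second fully toward the normal exposure frame; >> 8 scales by 255/256.
low = np.array([[2000, 2000]])
nor = np.array([[1000, 1000]])
w   = np.array([[ 255,    0]])
print(fuse_channel(low, nor, w))  # -> [[1992  996]]
```
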
In an alternative embodiment, after the step of generating the fused image DstImg, the method further includes outputting the fused image DstImg.
As described above, the signal lamp image processing method of the present invention has been introduced in detail. The invention generates the fused image by fusing with the normal exposure image, so the overall brightness of the fused image is not changed; during color correction, only the region of interest is corrected, greatly reducing the amount of calculation, and within it only the reddish pixel region is corrected, so the image can be processed accurately to the pixel level. The image processing method solves the problems of the prior art while ensuring image quality and leaving the overall brightness of the image unchanged.
Fig. 7 is a schematic configuration diagram of a system for processing a signal lamp image according to another embodiment of the present invention.
As shown in fig. 7, a signal lamp image processing system according to another embodiment of the present invention includes: an image acquisition unit 1, an image correction unit 2, and an image fusion unit 3.
The image acquisition unit 1 is used to capture images; specifically, it acquires at least one frame of low-exposure image and at least one frame of normal-exposure image.
The image correction unit 2 is connected to the image acquisition unit 1 and is configured to perform color correction on the reddish pixel region of the at least one frame of low-exposure image to obtain a corrected low-exposure image. Reddish pixel regions appear as abnormal regions in the low-exposure image. The image correction unit 2 identifies the reddish pixel region of the at least one frame of low-exposure image, and then performs color correction on that region to obtain the corrected low-exposure image.
The image fusion unit 3 is respectively connected to the image correction unit 2 and the image acquisition unit 1, and is used for fusing the corrected low-exposure image and the normal-exposure image to generate a fused image. Specifically, the image fusion unit 3 performs the following operations: and taking a gray level image of an intersection area of a red pixel area of at least one frame of low-exposure image and a high-brightness area of a normal-exposure image as a weight, and carrying out weighted assignment on red, green and blue channel values of the corrected low-exposure image and the corrected normal-exposure image to obtain a fused image.
Fig. 8 is a schematic structural diagram of the image acquisition unit 1 according to the embodiment of the present invention.
As shown in fig. 8, in an embodiment, the image acquiring unit 1 further includes: a region of interest selection module 11, a region of interest alignment module 12 and an image acquisition module 13.
And the region-of-interest selecting module 11 is used for selecting the region of interest OriRect.
And the region-of-interest aligning module 12 is configured to perform image coordinate alignment on the region of interest OriRect to obtain an aligned region of interest DstRect.
The region of interest alignment module 12 aligns the image coordinates of the region of interest OriRect according to the following formula:
OriRect=[xmin0,xmax0,ymin0,ymax0];
DstRect=[xmin1,xmax1,ymin1,ymax1];
xmin1=xmin0/2*2;xmax1=(xmax0+1)/2*2;
ymin1=ymin0/2*2;ymax1=(ymax0+1)/2*2;
wherein OriRect denotes the region of interest, DstRect denotes the aligned region of interest, xmin0 denotes the X coordinate of the upper left corner of the region of interest, ymin0 denotes the Y coordinate of the upper left corner of the region of interest, xmax0 denotes the X coordinate of the lower right corner of the region of interest, and ymax0 denotes the Y coordinate of the lower right corner of the region of interest; similarly, xmin1 denotes the X coordinate of the upper left corner of the aligned region of interest, ymin1 denotes the Y coordinate of the upper left corner of the aligned region of interest, xmax1 denotes the X coordinate of the lower right corner of the aligned region of interest, and ymax1 denotes the Y coordinate of the lower right corner of the aligned region of interest.
And an image obtaining module 13, configured to obtain at least one frame of low-exposure image and at least one frame of normal-exposure image for the aligned region of interest DstRect.
Here, the region-of-interest selecting module 11 and the region-of-interest aligning module 12 are run once, at initialization, to select the region of interest, while the image acquisition module 13 is called continuously in the signal lamp image processing system.
Fig. 9 is a schematic configuration diagram of the image correction unit 2 and the image fusion unit 3 according to the embodiment of the present invention.
As shown in fig. 9, in one embodiment, the image correction unit 2 includes: a red-bias pixel region identification module 21 and a red-bias pixel region correction module 22.
A red-biased pixel region identification module 21, configured to identify a red-biased pixel region in the at least one frame of low-exposure image based on the red channel value, the green channel value, and the blue channel value in the low-exposure image.
The red bias pixel area correction module 22 is connected to the red bias pixel area identification module 21, and is configured to reassign the red, green, and blue channel values of the red bias pixel area to obtain a corrected low-exposure image.
In an optional embodiment, the system for processing a signal lamp image according to the present invention further includes a display unit connected to the red-biased pixel region correction module 22, and configured to output the corrected low-exposure image LowExp.
As shown in fig. 9, in one embodiment, the image fusion unit 3 includes: a luminance region identification module 31, an intersection calculation module 32 and a fusion module 33.
And a bright area identification module 31 for identifying a highlight area NorMask in the normal exposure image NorExp.
Specifically, the luminance area identification module 31 first processes the acquired normal exposure image NorExp as follows: a highlight region NorMask is identified in the normal exposure image NorExp; the gray value of the highlight region NorMask is then set to 1, and the gray values of all image areas outside NorMask are set to 0, yielding a binary image.
The high brightness region NorMask identified by the brightness region identification module 31 satisfies the following condition:
NorMask(i,j)=MAX(R_NorExp(i,j),G_NorExp(i,j),B_NorExp(i,j))>V_max;
wherein NorMask(i, j) represents the value of pixel (i, j) in the highlight region NorMask; R_NorExp(i, j), G_NorExp(i, j) and B_NorExp(i, j) represent the red, green and blue channel values of pixel (i, j) in the normal exposure image NorExp; and V_max represents the luminance lower limit value, whose value range may be 1536-2048, and those skilled in the art may set the lower limit value as required.
The intersection calculation module 32 is respectively connected to the red-biased pixel region identification module 21 and the brightness region identification module 31, and is configured to intersect the reddish pixel region LowMask and the highlight region NorMask to obtain a gray map.
As described in the method steps, the intersection calculation module 32 intersects the reddish pixel region LowMask with the highlight region NorMask: where both the reddish pixel region and the highlight region have the value 1, their intersection is 1 and the resulting gray map OriGray takes the value 255; where at least one of them has the value 0, their intersection is 0 and the resulting gray map OriGray takes the value 0.
First, the original gray scale image OriGray is obtained according to the following formula:
OriGray(i,j)=(LowMask(i,j)&&NorMask(i,j))*255;
next, the original gray map OriGray obtained above is subjected to box blurring to obtain the gray map NewGray. Box blurring smooths an image based on the average value of neighbouring pixels: it is a spatial-domain linear filtering method in which each pixel value of the blurred image equals the mean of the values of that pixel's neighbours.
It should be noted that: if the camera captures a frame of low-exposure image, the red-biased pixel area in step S32 is the red-biased pixel area of the frame of low-exposure image; if two frames of low-exposure images are captured, the red-biased pixel area in step S32 is the union of the red-biased pixel areas of the two frames of low-exposure images, and so on.
The fusion module 33 is respectively connected to the image acquisition unit 1, the red-biased pixel region correction module 22, and the intersection calculation module 32, and is configured to perform weighted assignment on the corrected low-exposure image LowExp and the normal-exposure image NorExp, using the gray map as a weight, to generate a fused image.
The fusion module 33 generates a fused image DstImg according to the following formula:
R_DstImg(i,j)=(R_LowExp(i,j)*NewGray(i,j)+R_NorExp(i,j)*(255-NewGray(i,j)))>>8;
G_DstImg(i,j)=(G_LowExp(i,j)*NewGray(i,j)+G_NorExp(i,j)*(255-NewGray(i,j)))>>8;
B_DstImg(i,j)=(B_LowExp(i,j)*NewGray(i,j)+B_NorExp(i,j)*(255-NewGray(i,j)))>>8;
wherein R_DstImg(i, j), G_DstImg(i, j) and B_DstImg(i, j) respectively represent the red, green and blue channel values of pixel (i, j) in the fused image DstImg; NewGray represents the gray map; R_LowExp(i, j), G_LowExp(i, j) and B_LowExp(i, j) respectively represent the red, green and blue channel values of pixel (i, j) in the corrected low-exposure image LowExp; and R_NorExp(i, j), G_NorExp(i, j) and B_NorExp(i, j) respectively represent the red, green and blue channel values of pixel (i, j) in the normal-exposure image NorExp.
In an optional implementation, the display unit is further connected to the fusion module 33, and is configured to output the fused image DstImg.
The corrected low-exposure image and the normal exposure image are fused to generate the fused image; because the fusion is performed with the normal exposure image, the overall brightness of the fused image is not changed. The invention corrects only the reddish pixel region in the image, so image processing can be accurate to the pixel level; and since only the region of interest is corrected, the amount of calculation is greatly reduced compared with the prior art.
In an alternative embodiment of the present invention, when the image acquisition unit 1 acquires one frame of normal exposure image NorExp, a previous frame of low exposure image LowExp1, and a next frame of low exposure image LowExp2, the two frames of low exposure images LowExp1, LowExp2, and the one frame of normal exposure image NorExp are sent to the image correction unit 2. The image correction unit 2 performs color correction on the union of the partial red pixel regions of the two frames of low exposure images LowExp1 and LowExp2 to obtain a corrected low exposure image LowExp, and sends the corrected low exposure image LowExp to the image fusion unit 3. The image fusion unit 3 fuses the corrected low exposure image LowExp and the normal exposure image NorExp to generate a fused image DstImg.
In an alternative embodiment, the red-biased pixel region identification module 21 is configured to identify a first red-biased pixel region LowMask1 in the previous frame low exposure image LowExp1, and further configured to identify a second red-biased pixel region LowMask2 in the next frame low exposure image LowExp 2.
The red-biased pixel region identification module 21 identifies a first red-biased pixel region LowMask1 according to the following formula:
LowMask1(i,j)=R_LowMask1(i,j)>R_thres&&R_LowMask1(i,j)>2*G_LowMask1(i,j)&&G_LowMask1(i,j)>B_LowMask1(i,j);
wherein LowMask1(i, j) represents the value of pixel (i, j) in the first reddish pixel region LowMask1; R_LowMask1(i, j), G_LowMask1(i, j) and B_LowMask1(i, j) represent the red, green and blue channel values of pixel (i, j) in the first reddish pixel region LowMask1, respectively; and R_thres represents the red channel lower limit value, whose value range may be 1536-2048, and those skilled in the art may set the lower limit value as required.
The red-biased pixel region identification module 21 identifies a second red-biased pixel region LowMask2 according to the following formula:
LowMask2(i,j)=R_LowMask2(i,j)>R_thres&&R_LowMask2(i,j)>2*G_LowMask2(i,j)&&G_LowMask2(i,j)>B_LowMask2(i,j);
wherein LowMask2(i, j) represents the value of pixel (i, j) in the second reddish pixel region LowMask2; R_LowMask2(i, j), G_LowMask2(i, j) and B_LowMask2(i, j) represent the red, green and blue channel values of pixel (i, j) in the second reddish pixel region LowMask2, respectively; and R_thres represents the red channel lower limit value, whose value range may be 1536-2048, and those skilled in the art may set the lower limit value as required.
The red-biased pixel region correction module 22 is configured to perform or calculate the first red-biased pixel region LowMask1 and the second red-biased pixel region LowMask2 to obtain a union LowMask of the first red-biased pixel region and the second red-biased pixel region, and further perform color correction on the union LowMask of the first red-biased pixel region and the second red-biased pixel region to obtain a corrected low-exposure image LowExp.
The red-biased pixel region correction module 22 performs color correction on the union LowMask of the red-biased pixel regions according to the following formula:
when the value of LowMask (i, j) is 1, then
R_LowExp(i,j)=MAX(R_LowExp1(i,j),R_LowExp2(i,j));
G_LowExp(i,j)=MIN(B_LowExp1(i,j),B_LowExp2(i,j));
B_LowExp(i,j)=G_LowExp(i,j);
When the value of LowMask (i, j) is 0, then
R_LowExp(i,j)=(R_LowExp1(i,j)+R_LowExp2(i,j))/2;
G_LowExp(i,j)=(G_LowExp1(i,j)+G_LowExp2(i,j))/2;
B_LowExp(i,j)=(B_LowExp1(i,j)+B_LowExp2(i,j))/2;
wherein R_LowExp(i, j), G_LowExp(i, j), and B_LowExp(i, j) represent the red, green, and blue channel values of pixel (i, j) in the corrected low exposure image LowExp; R_LowExp1(i, j), B_LowExp1(i, j), and G_LowExp1(i, j) represent the red, blue, and green channel values of pixel (i, j) in the low exposure image LowExp1; and R_LowExp2(i, j), B_LowExp2(i, j), and G_LowExp2(i, j) represent the red, blue, and green channel values of pixel (i, j) in the low exposure image LowExp2.
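As an illustrative sketch of this union-and-reassignment step (assumed function name `correct_low_exposure`; channel planes and masks as NumPy arrays, not the claimed implementation):

```python
import numpy as np

def correct_low_exposure(r1, g1, b1, r2, g2, b2, mask1, mask2):
    """Apply the correction formulas above over the union of the two masks."""
    mask = mask1 | mask2  # OR operation: union LowMask of the red-biased regions
    # Inside the union: R = max of the two reds, G = min of the two blues,
    # and B copies the corrected G. Outside: plain per-channel averaging.
    r = np.where(mask, np.maximum(r1, r2), (r1 + r2) // 2)
    g = np.where(mask, np.minimum(b1, b2), (g1 + g2) // 2)
    b = np.where(mask, g, (b1 + b2) // 2)
    return r, g, b

# One red-biased pixel (left) and one ordinary pixel (right).
r1 = np.array([[2000, 100]]); g1 = np.array([[500, 60]]); b1 = np.array([[300, 40]])
r2 = np.array([[1800, 120]]); g2 = np.array([[450, 80]]); b2 = np.array([[200, 60]])
m1 = np.array([[True, False]]); m2 = np.array([[False, False]])
r, g, b = correct_low_exposure(r1, g1, b1, r2, g2, b2, m1, m2)
# r -> [[2000, 110]], g -> [[200, 70]], b -> [[200, 50]]
```

The integer averaging outside the mask is a simplifying assumption; any rounding convention consistent with the sensor bit depth would serve.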
The signal light image processing system of the present invention has been described in detail above. The invention does not change the overall brightness of the original image and does not degrade image quality. In addition, compared with conventional signal lamp image processing schemes, only the region of interest is corrected rather than the whole image, which greatly reduces the amount of computation. Furthermore, because only red-biased pixels within that region are corrected, processing is accurate to the pixel level and poor image quality caused by positioning errors is avoided.
Compared with the prior-art approach of mounting an optical filter in front of the camera lens, the system does not change the overall brightness of the signal lamp range (i.e., the region of interest) to be corrected, does not degrade video image quality, and requires no additional hardware, thereby reducing hardware cost. Compared with the traditional approach of specially processing the colors of the traffic light region at the end of the camera's image processing pipeline, the system corrects only the red-biased pixels within the signal lamp range of the image, can process the video image accurately down to the pixel, and avoids poor image quality caused by positioning errors. In addition, because only the signal lamp range is corrected rather than the whole image, the amount of computation is small, the method is convenient to port to different platforms, and the real-time processing requirements of the video signal can be met.
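For completeness, the weighted fusion that produces the fused image DstImg (detailed in claims 11 and 23) can be sketched per channel. The helper name `fuse_channel` is an assumption; NewGray serves as an 8-bit weight map.

```python
import numpy as np

def fuse_channel(low, nor, new_gray):
    """Blend one channel plane: (low*w + nor*(255 - w)) >> 8, with w = NewGray."""
    low = low.astype(np.int64)
    nor = nor.astype(np.int64)
    w = new_gray.astype(np.int64)
    return (low * w + nor * (255 - w)) >> 8

# Where NewGray is 255 the corrected low-exposure pixel dominates;
# where it is 0 the normal-exposure pixel passes through (scaled by 255/256).
low = np.array([[100, 100]])
nor = np.array([[200, 200]])
w   = np.array([[255,   0]])
out = fuse_channel(low, nor, w)  # [[99, 199]]
```

Note the `>> 8` divides by 256 while the weights sum to 255, so the blend is a close fixed-point approximation rather than an exact convex combination.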
It is to be understood that the above-described embodiments merely illustrate the principles of the invention and are not to be construed as limiting it. Accordingly, any modification, equivalent replacement, or improvement made without departing from the spirit and scope of the present invention shall fall within its protection scope. Further, the appended claims are intended to cover all such variations and modifications as fall within their scope and boundaries, or the equivalents thereof.

Claims (23)

1. A method of processing a signal light image, the method comprising:
acquiring at least one frame of low-exposure image and at least one frame of normal-exposure image;
performing color correction on the reddish pixel area of the at least one frame of low-exposure image to obtain a corrected low-exposure image;
fusing the corrected low-exposure image with the normal-exposure image to generate a fused image;
wherein the step of performing color correction on the reddish pixel region of the at least one frame of low-exposure image comprises:
identifying a red-bias pixel region in the at least one frame of low-exposure image based on a red channel value, a green channel value, and a blue channel value in the low-exposure image;
and re-assigning the red, green and blue channel values of the red-biased pixel region to obtain a corrected low-exposure image.
2. The method of claim 1, wherein the method comprises:
acquiring one frame of normal exposure image NorExp, a previous-frame low exposure image LowExp1, and a next-frame low exposure image LowExp2;
color correction is carried out on the union of the partial red pixel regions of the two frames of low-exposure images LowExp1 and LowExp2 to obtain a corrected low-exposure image LowExp;
and fusing the corrected low-exposure image LowExp with the normal-exposure image NorExp to generate a fused image DstImg.
3. The method of claim 1 or 2, wherein the step of fusing the corrected low-exposure image with the normal-exposure image to generate a fused image comprises:
and taking a gray level image of an intersection area of a red pixel area of at least one frame of low-exposure image and a high-brightness area of a normal-exposure image as a weight, and carrying out weighted assignment on red, green and blue channel values of the corrected low-exposure image and the normal-exposure image to obtain a fused image.
4. The method of claim 3, wherein the step of acquiring an image comprises:
selecting an interested region OriRect;
carrying out image coordinate alignment on the region of interest OriRect to obtain an aligned region of interest DstRect;
and acquiring at least one frame of low-exposure image and at least one frame of normal-exposure image for the aligned region of interest DstRect.
5. The method according to claim 4, wherein the region of interest OriRect is image coordinate aligned according to:
OriRect=[xmin0,xmax0,ymin0,ymax0];
DstRect=[xmin1,xmax1,ymin1,ymax1];
xmin1=xmin0/2*2;xmax1=(xmax0+1)/2*2;
ymin1=ymin0/2*2;ymax1=(ymax0+1)/2*2;
wherein OriRect represents the region of interest and DstRect represents the aligned region of interest.
6. The method of claim 2, wherein the color correcting the union of the reddish pixel regions of the two frames of low exposure images LowExp1, LowExp2 comprises:
identifying a first reddish pixel region LowMask1 in the previous frame low exposure image LowExp1;
identifying a second reddish pixel region LowMask2 in the next frame low exposure image LowExp2;
performing an OR operation on the first reddish pixel region LowMask1 and the second reddish pixel region LowMask2 to obtain a union LowMask of the two regions;
and performing color correction on the union LowMask of the first and second reddish pixel regions to obtain a corrected low-exposure image LowExp.
7. The method of claim 5 or 6,
the first reddish pixel region LowMask1 is identified according to the following equation:
LowMask1(i,j)=R_LowMask1(i,j)>R_thres&&R_LowMask1(i,j)>2*G_LowMask1(i,j)&&G_LowMask1(i,j)>B_LowMask1(i,j);
wherein, LowMask1(i, j) represents the channel value of the pixel (i, j) in the first partial red pixel region LowMask1, R _ LowMask1(i, j), G _ LowMask1(i, j), and B _ LowMask1(i, j) represent the red, green, and blue channel values of the pixel (i, j) in the first partial red pixel region LowMask1, and R _ thres represents the red channel lower limit value; and/or
The second partial red pixel region LowMask2 is identified according to the following equation:
LowMask2(i,j)=R_LowMask2(i,j)>R_thres&&R_LowMask2(i,j)>2*G_LowMask2(i,j)&&G_LowMask2(i,j)>B_LowMask2(i,j);
wherein, LowMask2(i, j) represents the channel value of the pixel (i, j) in the second partial red pixel region LowMask2, R _ LowMask2(i, j), G _ LowMask2(i, j), and B _ LowMask2(i, j) represent the red channel value, green channel value, and blue channel value of the pixel (i, j) in the second partial red pixel region LowMask2, and R _ thres represents the red channel lower limit value.
8. The method of claim 6, wherein the union LowMask of the red-biased pixel regions is color corrected according to:
when the value of LowMask (i, j) is 1, then
R_LowExp(i,j)=MAX(R_LowExp1(i,j),R_LowExp2(i,j));
G_LowExp(i,j)=MIN(B_LowExp1(i,j),B_LowExp2(i,j));
B_LowExp(i,j)=G_LowExp(i,j);
When the value of LowMask (i, j) is 0, then
R_LowExp(i,j)=(R_LowExp1(i,j)+R_LowExp2(i,j))/2;
G_LowExp(i,j)=(G_LowExp1(i,j)+G_LowExp2(i,j))/2;
B_LowExp(i,j)=(B_LowExp1(i,j)+B_LowExp2(i,j))/2;
Wherein, R _ LowExp (i, j), G _ LowExp (i, j), B _ LowExp (i, j) respectively represent red, green, blue channel values of the pixel (i, j) in the corrected low exposure image LowExp, R _ LowExp1(i, j), B _ LowExp1(i, j), G _ LowExp1(i, j) respectively represent red, blue, green channel values of the pixel (i, j) in the previous frame low exposure image LowExp1, R _ LowExp2(i, j), B _ LowExp2(i, j), G _ LowExp2(i, j) respectively represent red, blue, green channel values of the pixel (i, j) in the next frame low exposure image LowExp 2.
9. The method of any of claims 1-2, 4-5, 8, wherein the generating a fused image comprises:
identifying a highlight region NorMask in the normal exposure image NorExp;
taking intersection of the partial red pixel region LowMask and the highlight region NorMask to obtain a gray level image NewGray;
and taking the gray-scale image as a weight, and carrying out weighted assignment on the corrected low-exposure image LowExp and the normal-exposure image NorExp to generate a fused image.
10. The method of claim 9, wherein the step of identifying the NorMask of the highlight region satisfies the following condition:
NorMask(i,j)=MAX(R_NorExp(i,j),G_NorExp(i,j),B_NorExp(i,j))>V_max;
wherein, NorMask (i, j) represents the channel value of the pixel point (i, j) in the NorMask in the highlight area, R _ NorExp (i, j), G _ NorExp (i, j), B _ NorExp (i, j) represent the red, green, blue channel values of the pixel point (i, j) in the NorExp of the normal exposure image, respectively, and V _ max represents the lower limit value of the brightness.
11. The method according to claim 9, wherein in the step of performing a weighted assignment of the corrected low exposure image LowExp and the normal exposure image NorExp to generate a fused image, a fused image DstImg is generated according to the following formula:
R_DstImg(i,j)=
(R_LowExp(i,j)*NewGray(i,j)+R_NorExp(i,j)*(255-NewGray(i,j)))>>8;
G_DstImg(i,j)=
(G_LowExp(i,j)*NewGray(i,j)+G_NorExp(i,j)*(255-NewGray(i,j)))>>8;
B_DstImg(i,j)=
(B_LowExp(i,j)*NewGray(i,j)+B_NorExp(i,j)*(255-NewGray(i,j)))>>8;
wherein R_DstImg(i, j), G_DstImg(i, j), and B_DstImg(i, j) represent the red, green, and blue channel values of pixel (i, j) in the fused image DstImg; NewGray represents the gray map; R_LowExp(i, j), G_LowExp(i, j), and B_LowExp(i, j) represent the red, green, and blue channel values of pixel (i, j) in the corrected low-exposure image LowExp; and R_NorExp(i, j), G_NorExp(i, j), and B_NorExp(i, j) represent the red, green, and blue channel values of pixel (i, j) in the normal-exposure image NorExp.
12. A system for processing a signal light image, comprising:
an image acquisition unit (1) for acquiring at least one frame of a low exposure image and at least one frame of a normal exposure image;
the image correction unit (2) is used for carrying out color correction on a reddish pixel area of the at least one frame of low-exposure image to obtain a corrected low-exposure image;
an image fusion unit (3) for fusing the corrected low exposure image with the normal exposure image to generate a fused image.
13. The system of claim 12, wherein,
when the image acquisition unit (1) acquires one frame of normal exposure image NorExp, a previous-frame low exposure image LowExp1, and a next-frame low exposure image LowExp2, it sends the two low exposure frames LowExp1 and LowExp2 and the normal exposure frame NorExp to the image correction unit (2);
the image correction unit (2) performs color correction on the union of the partial red pixel regions of the two frames of low exposure images LowExp1 and LowExp2 to obtain a corrected low exposure image LowExp, and sends the corrected low exposure image LowExp to the image fusion unit (3);
the image fusion unit (3) fuses the corrected low-exposure image LowExp with the normal-exposure image NorExp to generate a fused image DstImg.
14. The system according to any one of claims 12-13, wherein the image fusion unit (3) performs the following operations:
and taking a gray level image of an intersection area of a red pixel area of at least one frame of low-exposure image and a high-brightness area of a normal-exposure image as a weight, and carrying out weighted assignment on red, green and blue channel values of the corrected low-exposure image and the corrected normal-exposure image to obtain a fused image.
15. The system according to claim 14, wherein the image acquisition unit (1) comprises:
the interested region selecting module (11) is used for selecting an interested region OriRect;
a region-of-interest alignment module (12) for performing image coordinate alignment on the region of interest OriRect to obtain an aligned region of interest DstRect;
and the image acquisition module (13) is used for acquiring at least one frame of low-exposure image and at least one frame of normal-exposure image of the aligned region of interest DstRect.
16. The system of claim 15, wherein,
the region-of-interest alignment module (12) performs image coordinate alignment on the region of interest OriRect according to the following formula:
OriRect=[xmin0,xmax0,ymin0,ymax0];
DstRect=[xmin1,xmax1,ymin1,ymax1];
xmin1=xmin0/2*2;xmax1=(xmax0+1)/2*2;
ymin1=ymin0/2*2;ymax1=(ymax0+1)/2*2;
wherein OriRect represents the region of interest and DstRect represents the aligned region of interest.
17. The system according to claim 13, wherein the image correction unit (2) comprises:
a red-biased pixel region identification module (21) for identifying a red-biased pixel region in the at least one frame of low-exposure image based on a red channel value, a green channel value, and a blue channel value in the low-exposure image;
and the red pixel region correction module (22) is used for reassigning the red, green and blue channel values of the red pixel region to obtain a corrected low-exposure image.
18. The system of claim 17, wherein,
the red-biased pixel region identification module (21) is configured to identify a first red-biased pixel region LowMask1 in the previous-frame low exposure image LowExp1, and is further configured to identify a second red-biased pixel region LowMask2 in the next-frame low exposure image LowExp2;
the red-biased pixel region correction module (22) is configured to perform an OR operation on the first red-biased pixel region LowMask1 and the second red-biased pixel region LowMask2 to obtain their union LowMask, and further to perform color correction on the union LowMask to obtain a corrected low-exposure image LowExp.
19. The system of claim 17 or 18,
the red-biased pixel region identification module (21) identifies a first red-biased pixel region LowMask1 according to the following formula:
LowMask1(i,j)=R_LowMask1(i,j)>R_thres&&R_LowMask1(i,j)>2*G_LowMask1(i,j)&&G_LowMask1(i,j)>B_LowMask1(i,j);
wherein, LowMask1(i, j) represents the channel value of the pixel (i, j) in the first partial red pixel region LowMask1, R _ LowMask1(i, j), G _ LowMask1(i, j), and B _ LowMask1(i, j) represent the red, green, and blue channel values of the pixel (i, j) in the first partial red pixel region LowMask1, and R _ thres represents the red channel lower limit value; and/or
The red-biased pixel region identification module (21) identifies a second red-biased pixel region LowMask2 according to the following formula:
LowMask2(i,j)=R_LowMask2(i,j)>R_thres&&R_LowMask2(i,j)>2*G_LowMask2(i,j)&&G_LowMask2(i,j)>B_LowMask2(i,j);
wherein, LowMask2(i, j) represents the channel value of the pixel (i, j) in the second partial red pixel region LowMask2, R _ LowMask2(i, j), G _ LowMask2(i, j), and B _ LowMask2(i, j) represent the red, green, and blue channel values of the pixel (i, j) in the second partial red pixel region LowMask2, respectively, and R _ thres represents the red channel lower limit value.
20. The system of claim 18, wherein the red-bias pixel region correction module (22) color corrects the union LowMask of red-bias pixel regions according to:
when the value of LowMask (i, j) is 1, then
R_LowExp(i,j)=MAX(R_LowExp1(i,j),R_LowExp2(i,j));
G_LowExp(i,j)=MIN(B_LowExp1(i,j),B_LowExp2(i,j));
B_LowExp(i,j)=G_LowExp(i,j);
When the value of LowMask (i, j) is 0, then
R_LowExp(i,j)=(R_LowExp1(i,j)+R_LowExp2(i,j))/2;
G_LowExp(i,j)=(G_LowExp1(i,j)+G_LowExp2(i,j))/2;
B_LowExp(i,j)=(B_LowExp1(i,j)+B_LowExp2(i,j))/2;
wherein R_LowExp(i, j), G_LowExp(i, j), and B_LowExp(i, j) represent the red, green, and blue channel values of pixel (i, j) in the corrected low exposure image LowExp; R_LowExp1(i, j), B_LowExp1(i, j), and G_LowExp1(i, j) represent the red, blue, and green channel values of pixel (i, j) in the low exposure image LowExp1; and R_LowExp2(i, j), B_LowExp2(i, j), and G_LowExp2(i, j) represent the red, blue, and green channel values of pixel (i, j) in the low exposure image LowExp2.
21. The system according to any one of claims 12-13, 15-16, 18, 20, wherein the image fusion unit (3) comprises:
a bright area identification module (31) for identifying a highlight area NorMask in the normal exposure image NorExp;
the intersection calculation module (32) is used for intersecting the partial red pixel region LowMask and the highlight region NorMask to obtain a gray scale image;
and the fusion module (33) is used for carrying out weighted assignment on the corrected low-exposure image LowExp and the normal-exposure image NorExp by taking the gray-scale map as a weight so as to generate a fusion image.
22. The system according to claim 21, wherein the highlight region NorMask identified by the highlight region identification module (31) satisfies the following condition:
NorMask(i,j)=MAX(R_NorExp(i,j),G_NorExp(i,j),B_NorExp(i,j))>V_max;
wherein, NorMask (i, j) represents the channel value of the pixel point (i, j) in the NorMask in the highlight area, R _ NorExp (i, j), G _ NorExp (i, j), B _ NorExp (i, j) represent the red, green, blue channel values of the pixel point (i, j) in the NorExp of the normal exposure image, respectively, and V _ max represents the lower limit value of the brightness.
23. The system as recited in claim 21, wherein the fusion module (33) generates a fused image DstImg in accordance with:
R_DstImg(i,j)=
(R_LowExp(i,j)*NewGray(i,j)+R_NorExp(i,j)*(255-NewGray(i,j)))>>8;
G_DstImg(i,j)=
(G_LowExp(i,j)*NewGray(i,j)+G_NorExp(i,j)*(255-NewGray(i,j)))>>8;
B_DstImg(i,j)=
(B_LowExp(i,j)*NewGray(i,j)+B_NorExp(i,j)*(255-NewGray(i,j)))>>8;
wherein R_DstImg(i, j), G_DstImg(i, j), and B_DstImg(i, j) represent the red, green, and blue channel values of pixel (i, j) in the fused image DstImg; NewGray represents the gray map; R_LowExp(i, j), G_LowExp(i, j), and B_LowExp(i, j) represent the red, green, and blue channel values of pixel (i, j) in the corrected low exposure image LowExp; and R_NorExp(i, j), G_NorExp(i, j), and B_NorExp(i, j) represent the red, green, and blue channel values of pixel (i, j) in the normal exposure image NorExp.
CN201610518275.3A 2016-06-28 2016-06-28 Signal lamp image processing method and system Active CN107545556B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610518275.3A CN107545556B (en) 2016-06-28 2016-06-28 Signal lamp image processing method and system

Publications (2)

Publication Number Publication Date
CN107545556A CN107545556A (en) 2018-01-05
CN107545556B true CN107545556B (en) 2021-08-17

Family

ID=60965524

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610518275.3A Active CN107545556B (en) 2016-06-28 2016-06-28 Signal lamp image processing method and system

Country Status (1)

Country Link
CN (1) CN107545556B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018223882A1 (en) 2017-06-08 2018-12-13 Zhejiang Dahua Technology Co., Ltd. Methods and devices for processing images of traffic light
CN111565283A (en) * 2019-02-14 2020-08-21 初速度(苏州)科技有限公司 Traffic light color identification method, correction method and device
CN117252870B (en) * 2023-11-15 2024-02-02 青岛天仁微纳科技有限责任公司 Image processing method of nano-imprint mold

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101997981A (en) * 2010-11-01 2011-03-30 惠州Tcl移动通信有限公司 Mobile phone camera-based latitude realization method and mobile phone
CN103679733A (en) * 2013-12-18 2014-03-26 浙江宇视科技有限公司 Method and device for processing signal lamp image
CN104301621A (en) * 2014-09-28 2015-01-21 北京凌云光技术有限责任公司 Image processing method, device and terminal
CN104574377A (en) * 2014-12-24 2015-04-29 南京金智视讯技术有限公司 Method for correcting yellow cast of red lamp of electronic police

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2014078979A1 (en) * 2012-11-20 2014-05-30 Harman International Industries, Incorporated Method and system for detecting traffic lights

Also Published As

Publication number Publication date
CN107545556A (en) 2018-01-05

Similar Documents

Publication Publication Date Title
JP7077395B2 (en) Multiplexed high dynamic range image
US7764319B2 (en) Image processing apparatus, image-taking system, image processing method and image processing program
KR100591731B1 (en) Image processing systems, projectors, information storage media and image processing methods
CN111246051B (en) Method, device, equipment and storage medium for automatically detecting stripes and inhibiting stripes
JP6221682B2 (en) Image processing apparatus, imaging system, image processing method, and program
US20180027149A1 (en) Image processing method and image processing apparatus for executing image processing method
US10410078B2 (en) Method of processing images and apparatus
CN107545556B (en) Signal lamp image processing method and system
JP2005331929A5 (en)
CN107219229B (en) Panel dust filtering method and device
KR20130139788A (en) Imaging apparatus which suppresses fixed pattern noise generated by an image sensor of the apparatus
JP6185249B2 (en) Image processing apparatus and image processing method
WO2013114803A1 (en) Image processing device, image processing method therefor, computer program, and image processing system
KR101643612B1 (en) Photographing method and apparatus and recording medium thereof
CN107277299A (en) Image processing method, device, mobile terminal and computer-readable recording medium
KR100869134B1 (en) Image processing apparatus and method
WO2008004554A1 (en) Luminance calculation method, luminance calculation device, inspection device, luminance calculation program, and computer-readable recording medium
KR20150040559A (en) Apparatus for Improving Image Quality and Computer-Readable Recording Medium with Program Therefor
JP2011114760A (en) Method for inspecting camera module
CN112399150A (en) Method for optimizing imaging picture of monitoring camera
CN114511469B (en) Intelligent image noise reduction prior detection method
JP2004219072A (en) Method and apparatus for detecting streak defect of screen
JP2002140789A (en) Road surface situation judging device
JP2017135600A (en) Image processing device and program
CN115145521A (en) Electronic display method for restoring scene in vehicle

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant