CN101783013A - Image enhancement system and method applicable to traffic control - Google Patents


Info

Publication number
CN101783013A
Authority
CN
China
Prior art keywords
image
original image
value
gray threshold
frequency coefficient
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN201010120064A
Other languages
Chinese (zh)
Inventor
卜庆凯
陈维强
李月高
邵明欣
隋守鑫
何东晓
苑希强
王玮
郑维学
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qingdao Hisense Network Technology Co Ltd
Original Assignee
Qingdao Hisense Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qingdao Hisense Network Technology Co Ltd filed Critical Qingdao Hisense Network Technology Co Ltd
Priority to CN201010120064A priority Critical patent/CN101783013A/en
Publication of CN101783013A publication Critical patent/CN101783013A/en
Withdrawn legal-status Critical Current

Links

Images

Landscapes

  • Image Processing (AREA)

Abstract

The invention discloses an image enhancement method applicable to traffic control, which comprises the following steps: a. obtaining an original image; b. performing contrast stretching on the original image; c. fusing the original image with the contrast-stretched image; and d. outputting the fused image. In the invention, because contrast stretching and image fusion are applied to the captured images, the recognizability of the images is improved. The invention also discloses an image enhancement system for traffic control.

Description

Image enhancement system and method applicable to traffic control
Technical field
The present invention relates to the field of traffic control, and in particular to an image enhancement system and method for traffic control.
Background technology
With the progress of urbanization in China, the rapid growth of vehicle ownership, and the coordinated development of regional economies, traffic in China is becoming an increasingly serious problem, especially traffic congestion and traffic safety. Intelligent transportation systems have become the preferred means of solving these problems, for example through traffic-flow detection and adaptive signal control. Among these, products based on vision technology have become a focus of attention in recent years and represent one direction of future development for the intelligent transportation industry, and a key element of vision technology is image capture. Although supplementary light sources help, a problem remains: the captured images are blurry and have poor contrast, the people inside vehicles are not clear enough, and recognition performance is poor.
Summary of the invention
The technical problem to be solved by the present invention is to provide a captured-image processing method with better recognition performance for traffic control.
To solve the above technical problem, the present invention proposes an image enhancement method for traffic control, comprising the following steps:
a. obtaining an original image;
b. performing contrast stretching on the original image;
c. fusing the original image with the contrast-stretched image;
d. outputting the fused image.
Preferably, step b comprises:
b1. determining a first gray threshold and a second gray threshold;
b2. performing contrast stretching on the pixels of the original image whose gray values lie between the first gray threshold and the second gray threshold;
wherein the first gray threshold is less than the second gray threshold.
Preferably, step b further comprises the step of setting a gamma value and performing gamma correction on the original image.
Preferably, before the gamma correction step, the method comprises:
calculating the average gray value of all pixels in the original image and judging whether this average reaches a set value; if so, the image is considered a daytime image and the gamma value is chosen from the interval (1, 10); otherwise, the image is considered a nighttime image and the gamma value is chosen from the interval (0, 1).
Preferably, step c comprises:
c1. applying a wavelet transform to the original image and the contrast-stretched image to obtain the low-frequency coefficients and high-frequency coefficients of the two frames;
c2. taking the arithmetic mean of the low-frequency coefficients of the two frames and the weighted mean of their high-frequency coefficients as, respectively, the low-frequency and high-frequency coefficients of the fused wavelet coefficients;
c3. applying the inverse transform to the fused wavelet coefficients to obtain the fused image.
Preferably, step c1 further comprises determining the wavelet type and the number of decomposition levels.
Correspondingly, to solve the above technical problem, the present invention also proposes an image enhancement system for traffic control, comprising:
a receiving unit for receiving an original image;
a contrast stretching unit for performing contrast stretching on the original image;
an image fusion unit for fusing the original image with the contrast-stretched image;
an output unit for outputting the fused image.
Preferably, the contrast stretching process of the contrast stretching unit comprises:
determining a first gray threshold and a second gray threshold;
performing contrast stretching on the pixels of the original image whose gray values lie between the first gray threshold and the second gray threshold;
wherein the first gray threshold is less than the second gray threshold.
Preferably, the contrast stretching unit is further configured to set a gamma value and perform gamma correction on the original image;
wherein the average gray value of all pixels in the original image is calculated and it is judged whether this average reaches a set value; if so, the image is considered a daytime image and the gamma value is chosen from the interval (1, 10); otherwise, the image is considered a nighttime image and the gamma value is chosen from the interval (0, 1).
Preferably, the image fusion process of the image fusion unit comprises:
applying a wavelet transform to the original image and the contrast-stretched image to obtain the low-frequency and high-frequency coefficients of the two frames;
taking the arithmetic mean of the low-frequency coefficients of the two frames and the weighted mean of their high-frequency coefficients as, respectively, the low-frequency and high-frequency coefficients of the fused wavelet coefficients;
applying the inverse transform to the fused wavelet coefficients to obtain the fused image.
In the present invention, because contrast stretching and image fusion are applied to the captured images, the recognizability of the images is improved.
Description of drawings
Fig. 1 is a schematic structural diagram of an embodiment of an image enhancement system applicable to traffic control according to the present invention;
Fig. 2 is a flowchart of an embodiment of an image enhancement method applicable to traffic control according to the present invention, based on the embodiment shown in Fig. 1.
Embodiment
The present invention is described in detail below with reference to the accompanying drawings.
Referring to Fig. 1, a schematic structural diagram of an embodiment of an image enhancement system applicable to traffic control according to the present invention is shown. As shown in the figure, the system comprises:
a receiving unit 11 for obtaining an original image.
The original image here may be a real-time traffic image captured by a camera.
a contrast stretching unit 12 for performing contrast stretching on the obtained original image.
The purpose of the stretching is to make certain sub-regions easier to recognize; for the specific stretching process, refer to the corresponding steps of the embodiment shown in Fig. 2.
an image fusion unit 13 for fusing the original image with the contrast-stretched image.
For the specific fusion process, refer to the corresponding steps of the embodiment shown in Fig. 2.
an output unit 14 for outputting the fused image.
Referring to Fig. 2, a flowchart of an embodiment of an image enhancement method applicable to traffic control according to the present invention, based on the embodiment shown in Fig. 1, is shown. As shown in the figure, the method comprises the following steps:
Step S21: obtaining an original image.
That is, a real-time traffic image is captured by a camera. This step corresponds to the receiving unit in the embodiment shown in Fig. 1.
Step S22: performing gamma correction on the original image.
In this step, the range from which the gamma value is chosen is determined first. The process is: calculate the average gray value of all pixels of the original image in its gray space and judge whether this average reaches a preset threshold T (which may be determined empirically). If so, the original image is considered to have been captured in the daytime, and the gamma value for the correction is chosen from the interval (1, 10); otherwise, the original image is considered to have been captured at night, and the gamma value is chosen from the interval (0, 1).
When the gamma value is 1, no correction is needed.
Gamma correction can itself enhance an image: if an image is too dark or too bright, gamma correction makes the details of the dark or bright regions clearly visible, which is exactly the purpose of this step. The specific gamma value may be determined by repeated adjustment and observation, or an empirical value may be used.
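The day/night-adaptive gamma correction of step S22 can be sketched as follows. This is a minimal illustration, not the patented implementation: the mean-gray threshold of 0.5 and the concrete gamma picks of 2.0 (daytime, within the interval (1, 10)) and 0.5 (nighttime, within the interval (0, 1)) are assumed example values, since the description leaves them to empirical tuning.

```python
import numpy as np

def gamma_correct(img, threshold=0.5):
    """Day/night-adaptive gamma correction (sketch).

    img: grayscale image as a float array with values in [0, 1].
    A bright (daytime) image, mean gray >= threshold, gets gamma > 1,
    which darkens it and recovers highlight detail; a dark (nighttime)
    image gets gamma < 1, which brightens it and recovers shadow detail.
    """
    mean_gray = img.mean()
    # Assumed example gammas; the patent only specifies the intervals.
    gamma = 2.0 if mean_gray >= threshold else 0.5
    return np.power(img, gamma)
```

For values in [0, 1], raising to a power above 1 lowers every mid-tone (darkening) and a power below 1 raises it (brightening), which matches the day/night split described above.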
Step S23: determining a first gray threshold and a second gray threshold.
Different image data types have different gray-value ranges; for example, gray values may lie between 0 and 1, or between 0 and 255.
The gray-value range is therefore determined according to the type of the original image captured in step S21. For example, if values lie between 0 and 1, the first gray threshold may be set to 0.3 and the second to 0.5; if values lie between 0 and 255, the first gray threshold may be set to 35 and the second to 90.
The specific values of the first and second gray thresholds in this step should be chosen so that, after the stretching of the following step S24, the regions of interest (for example faces or license plates) become easy to recognize. They may be determined empirically or by repeated adjustment.
Step S24: performing contrast stretching on the pixels whose gray values lie between the first and second gray thresholds.
In this step, the original image is scanned in its gray space according to the first and second gray thresholds determined in the previous step, the pixels whose gray values lie between the two thresholds are identified, and contrast stretching is applied to them. The specific process is:
Pixels whose gray values are less than the first gray threshold are set to 0; pixels whose gray values are greater than the second gray threshold are set to the maximum gray value (for example 1 or 255); and pixels whose gray values lie between the two thresholds are stretched over the whole gray-value range. For example:
Suppose the gray-value range is 0 to 1 and the first and second gray thresholds are 0.3 and 0.5 respectively. The stretching in this step then sets all pixel values below 0.3 to 0, sets all values above 0.5 to 1, and stretches the values between 0.3 and 0.5 over the range 0 to 1, so the details in this gray band become much clearer.
The same applies, of course, when gray values lie between 0 and 255; this case is not elaborated further.
After this step, the pixel regions between the first and second gray thresholds become much easier to recognize and richer in detail; but the pixels outside this band become indistinct (too black or too white), making the whole image look unnatural. This problem is resolved in the image fusion process.
In short, the first and second gray thresholds of step S23 may be determined by repeatedly adjusting steps S23 and S24 and observing the result, or empirical values may be used. The first gray threshold must be less than the second gray threshold; this keeps the mapping increasing and avoids gray-level inversion, preserving the viewability of the image, which would otherwise become chaotic and unreadable.
Steps S23 and S24 may be understood as the contrast stretching process and may be performed by the contrast stretching unit in Fig. 1. The gamma correction step may likewise be included in the contrast stretching process and performed by the contrast stretching unit.
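The piecewise-linear stretch of steps S23 and S24 can be sketched as below, using the example thresholds 0.3 and 0.5 from the description for a gray range of [0, 1]; the function name and defaults are illustrative, not from the patent.

```python
import numpy as np

def contrast_stretch(img, t1=0.3, t2=0.5):
    """Piecewise-linear contrast stretch (steps S23/S24 sketch).

    Pixels below t1 are set to 0, pixels above t2 are set to the maximum
    gray value (1.0 here), and pixels in [t1, t2] are stretched linearly
    over the full [0, 1] range. Requires t1 < t2 so the mapping is
    increasing and no gray-level inversion occurs.
    """
    out = np.zeros_like(img, dtype=float)     # values below t1 stay at 0
    mid = (img >= t1) & (img <= t2)
    out[img > t2] = 1.0                       # values above t2 saturate
    out[mid] = (img[mid] - t1) / (t2 - t1)    # linear stretch of the band
    return out
```

For the 0-to-255 case the same code applies after dividing the image by 255, or by scaling the thresholds and the saturation value accordingly.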
Step S25: applying a wavelet transform to the original image and the stretched image to obtain the low-frequency and high-frequency coefficients of the two frames.
In this step, the wavelet type used for the transform (for example the Haar wavelet) and the number of decomposition levels (for example 4) may be preset.
Step S26: taking the arithmetic mean of the low-frequency coefficients of the two frames and the weighted mean of their high-frequency coefficients as, respectively, the low-frequency and high-frequency coefficients of the fused wavelet coefficients.
For example, suppose the low- and high-frequency coefficients of the original image are SAn and SHn, and those of the stretched image are LAn and LHn. The fused low-frequency coefficient in this step is then FAn = (SAn + LAn)/2, and the fused high-frequency coefficient is: if SHn > LHn, FHn = SHn·th1 + LHn·th2; if SHn < LHn, FHn = SHn·th2 + LHn·th1; where th1 > th2 are weighting coefficients, so the above is simply a weighted-mean computation. Since there is generally more than one coefficient, each value of n is processed in this way in turn.
Step S27: applying the inverse transform to the fused wavelet coefficients to obtain the fused image.
That is, after the fused low- and high-frequency parts have been obtained in the previous step, this step applies the inverse wavelet transform to obtain the fused image.
Steps S25, S26, and S27 constitute the image fusion step and correspond to the image fusion unit in Fig. 1. The purpose of the fusion is to integrate the details of the two images (the original image and the stretched image) into one image; the fused image thus shows a better enhancement effect and looks more natural and pleasing.
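Steps S25 to S27 can be sketched as follows. This is a simplified illustration under assumptions: a single Haar decomposition level is used for brevity (the description suggests, for example, 4 levels), the weighting coefficients th1 = 0.7 and th2 = 0.3 are assumed example values satisfying th1 > th2, and the Haar transform is implemented inline rather than via a wavelet library.

```python
import numpy as np

def haar2(img):
    """One level of a 2-D Haar wavelet decomposition (rows, then columns)."""
    a = (img[0::2] + img[1::2]) / 2.0      # row averages
    d = (img[0::2] - img[1::2]) / 2.0      # row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0   # low-frequency (approximation) band
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0   # high-frequency detail bands
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, (lh, hl, hh)

def ihaar2(ll, bands):
    """Inverse of haar2: reconstruct the image from one decomposition level."""
    lh, hl, hh = bands
    a = np.empty((ll.shape[0], ll.shape[1] * 2))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    img = np.empty((a.shape[0] * 2, a.shape[1]))
    img[0::2], img[1::2] = a + d, a - d
    return img

def fuse(orig, stretched, th1=0.7, th2=0.3):
    """Fuse two frames per steps S25-S27 (single-level sketch).

    Low-frequency coefficients are averaged; high-frequency coefficients
    are combined by the weighted rule of step S26, which gives the larger
    coefficient the larger weight th1.
    """
    sa, s_bands = haar2(orig)        # S25: decompose both frames
    la, l_bands = haar2(stretched)
    fa = (sa + la) / 2.0             # S26: arithmetic mean of low-freq parts
    def weighted(s, l):
        # S26: FHn = SHn*th1 + LHn*th2 if SHn > LHn, else SHn*th2 + LHn*th1
        return np.where(s > l, s * th1 + l * th2, s * th2 + l * th1)
    f_bands = tuple(weighted(s, l) for s, l in zip(s_bands, l_bands))
    return ihaar2(fa, f_bands)       # S27: inverse transform
```

Since th1 + th2 = 1, fusing an image with itself reproduces the image exactly, which is a convenient sanity check on the transform pair. A multi-level decomposition would apply `haar2` recursively to the `ll` band and average only the coarsest approximation.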
Step S28: outputting the fused image. This may be performed by the output unit in Fig. 1.
Step S29: end.
It should be noted that the purpose of distinguishing daytime from nighttime in the gamma correction is to select the gamma-value range separately for the two cases so that the enhancement effect is better. In addition, the gamma correction may also be placed elsewhere, for example after the stretching process and before the image fusion.
In this embodiment, because image stretching and fusion are provided, the user is given captured images with clearer contrast and stronger gradation, images containing more detail, and more convincing images for use as evidence.
The above discloses only a preferred embodiment of the present invention, which of course cannot limit the scope of the claims; equivalent variations made according to the claims of the present invention therefore still fall within the scope of the present invention.

Claims (10)

1. An image enhancement method for traffic control, comprising the steps of:
a. obtaining an original image;
b. performing contrast stretching on the original image;
c. fusing the original image with the contrast-stretched image; and
d. outputting the fused image.
2. the method for claim 1 is characterized in that, described step b comprises:
B1, determine first gray threshold and second gray threshold;
B2, the pixel degree of comparing that is in the described original image between first gray threshold and second gray threshold is stretched;
Wherein, first gray threshold is less than second gray threshold.
3. The method of claim 2, wherein step b further comprises the step of setting a gamma value and performing gamma correction on the original image.
4. The method of claim 3, wherein before the gamma correction step the method comprises:
calculating the average gray value of all pixels in the original image and judging whether this average reaches a set value; if so, the image is considered a daytime image and the gamma value is chosen from the interval (1, 10); otherwise, the image is considered a nighttime image and the gamma value is chosen from the interval (0, 1).
5. the method for claim 1 is characterized in that, described step c comprises:
C1, described original image and contrast stretched image are carried out low frequency coefficient and the high frequency coefficient that wavelet transformation obtains two two field pictures;
C2, the arithmetic mean of low frequency coefficient of asking for described two two field pictures and the weighted mean value of high frequency coefficient are respectively as the low frequency coefficient and the high frequency coefficient of the wavelet coefficient after merging;
C3, the wavelet coefficient after merging is carried out inverse transformation, the image after obtaining merging.
6. method as claimed in claim 5 is characterized in that, also comprises among the step c1 determining the small echo type and decomposing the number of plies.
7. An image enhancement system for traffic control, comprising:
a receiving unit for receiving an original image;
a contrast stretching unit for performing contrast stretching on the original image;
an image fusion unit for fusing the original image with the contrast-stretched image; and
an output unit for outputting the fused image.
8. The system of claim 7, wherein the contrast stretching process of the contrast stretching unit comprises:
determining a first gray threshold and a second gray threshold;
performing contrast stretching on the pixels of the original image whose gray values lie between the first gray threshold and the second gray threshold;
wherein the first gray threshold is less than the second gray threshold.
9. The system of claim 8, wherein the contrast stretching unit is further configured to set a gamma value and perform gamma correction on the original image;
wherein the average gray value of all pixels in the original image is calculated and it is judged whether this average reaches a set value; if so, the image is considered a daytime image and the gamma value is chosen from the interval (1, 10); otherwise, the image is considered a nighttime image and the gamma value is chosen from the interval (0, 1).
10. The system of claim 7, wherein the image fusion process of the image fusion unit comprises:
applying a wavelet transform to the original image and the contrast-stretched image to obtain the low-frequency and high-frequency coefficients of the two frames;
taking the arithmetic mean of the low-frequency coefficients of the two frames and the weighted mean of their high-frequency coefficients as, respectively, the low-frequency and high-frequency coefficients of the fused wavelet coefficients;
applying the inverse transform to the fused wavelet coefficients to obtain the fused image.
CN201010120064A 2010-03-04 2010-03-04 Image enhancement system and method applicable to traffic control Withdrawn CN101783013A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201010120064A CN101783013A (en) 2010-03-04 2010-03-04 Image enhancement system and method applicable to traffic control

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201010120064A CN101783013A (en) 2010-03-04 2010-03-04 Image enhancement system and method applicable to traffic control

Publications (1)

Publication Number Publication Date
CN101783013A true CN101783013A (en) 2010-07-21

Family

ID=42522999

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201010120064A Withdrawn CN101783013A (en) 2010-03-04 2010-03-04 Image enhancement system and method applicable to traffic control

Country Status (1)

Country Link
CN (1) CN101783013A (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102063704A (en) * 2010-11-19 2011-05-18 中国航空无线电电子研究所 Airborne vision enhancement method and device
CN102063704B (en) * 2010-11-19 2012-08-29 中国航空无线电电子研究所 Airborne vision enhancement method and device
CN106169181A (en) * 2016-06-30 2016-11-30 北京奇艺世纪科技有限公司 A kind of image processing method and system
CN106169181B (en) * 2016-06-30 2019-04-26 北京奇艺世纪科技有限公司 A kind of image processing method and system
CN107330865A (en) * 2017-06-09 2017-11-07 昆明理工大学 A kind of image enchancing method converted based on BEMD and contrast stretching
CN107330865B (en) * 2017-06-09 2020-07-31 昆明理工大学 Image enhancement method based on BEMD and contrast stretching transformation
CN108665437A (en) * 2018-05-10 2018-10-16 句容康泰膨润土有限公司 A kind of image enchancing method based on layered shaping
CN110135247A (en) * 2019-04-03 2019-08-16 深兰科技(上海)有限公司 Data enhancement methods, device, equipment and medium in a kind of segmentation of road surface
CN110135247B (en) * 2019-04-03 2021-09-24 深兰科技(上海)有限公司 Data enhancement method, device, equipment and medium in pavement segmentation

Similar Documents

Publication Publication Date Title
CN108090888B (en) Fusion detection method of infrared image and visible light image based on visual attention model
CN101833754B (en) Image enhancement method and image enhancement system
CN101409825B (en) Nighttime vision monitoring method based on information fusion
CN112487922B (en) Multi-mode human face living body detection method and system
CN101783013A (en) Image enhancement system and method applicable to traffic control
CN104504673A (en) Visible light and infrared images fusion method based on NSST and system thereof
CN106204470A (en) Low-light-level imaging method based on fuzzy theory
Asmare et al. Image enhancement by fusion in contourlet transform
CN102999890B (en) Based on the image light dynamic changes of strength bearing calibration of environmental factor
Zhu et al. A hybrid algorithm for automatic segmentation of slowly moving objects
Jeon et al. Low-light image enhancement using inverted image normalized by atmospheric light
Soumya et al. Day color transfer based night video enhancement for surveillance system
Pandian et al. Object Identification from Dark/Blurred Image using WBWM and Gaussian Pyramid Techniques
CN105447827A (en) Image noise reduction method and system thereof
CN111666869B (en) Face recognition method and device based on wide dynamic processing and electronic equipment
CN116258653B (en) Low-light level image enhancement method and system based on deep learning
Yue et al. Low-illumination traffic object detection using the saliency region of infrared image masking on infrared-visible fusion image
Zhang et al. Nighttime haze removal with illumination correction
CN108769521B (en) Photographing method, mobile terminal and computer readable storage medium
TW201224954A (en) Embedded system and method for processing and identifying image
CN115937776A (en) Monitoring method, device, system, electronic equipment and computer readable storage medium
CN108259819B (en) Dynamic image feature enhancement method and system
CN114862707A (en) Multi-scale feature recovery image enhancement method and device and storage medium
Cao et al. A License Plate Image Enhancement Method in Low Illumination Using BEMD.
Kumari et al. Image fusion techniques based on pyramid decomposition

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C04 Withdrawal of patent application after publication (patent law 2001)
WW01 Invention patent application withdrawn after publication

Open date: 20100721