CN105118032A - Wide dynamic processing method based on visual system - Google Patents


Info

Publication number
CN105118032A
CN105118032A
Authority
CN
China
Prior art keywords
formula
wide dynamic
color space
image
information
Prior art date
Legal status
Granted
Application number
CN201510510485.3A
Other languages
Chinese (zh)
Other versions
CN105118032B (en)
Inventor
黄俊仁
Current Assignee
Hunan Youxiang Technology Co Ltd
Original Assignee
Hunan Youxiang Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Hunan Youxiang Technology Co Ltd filed Critical Hunan Youxiang Technology Co Ltd
Priority to CN201510510485.3A priority Critical patent/CN105118032B/en
Publication of CN105118032A publication Critical patent/CN105118032A/en
Application granted granted Critical
Publication of CN105118032B publication Critical patent/CN105118032B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Landscapes

  • Image Processing (AREA)
  • Processing Of Color Television Signals (AREA)

Abstract

The invention discloses a wide dynamic processing method based on a visual system. The method comprises steps of: acquiring a single image of a specific scene; transforming the single image from a RGB color space to a YUV color space and acquiring brightness information Y component; computing the environmental factor of each pixel by using the pixel neighborhood information of the Y component; and according to the characteristic of the human visual system, transforming the original RGB image by using a wide dynamic transformation formula acquired by the environmental factors so as to obtain a wide dynamic RGB image. The wide dynamic processing method based on the visual system well keeps detailed information of a bright area and a dark area, has good wide dynamic image quality, consumes few resources, and well satisfies a requirement of real-time processing.

Description

A wide dynamic processing method based on the visual system
Technical field:
The invention belongs to the field of image processing, and in particular relates to a wide dynamic processing method based on the visual system.
Background technology:
Wide dynamic technology allows a camera to capture a clear image when the brightness range of the scene is very large. When a camera frames a scene that contains both highlight and backlit regions, the output image turns into a patch of white in the highlight regions because of over-exposure and a patch of black in the dark regions because of under-exposure; the scene in these regions cannot be seen clearly, so the image quality cannot meet the needs of practical applications. The cause of this phenomenon is an imaging defect of ordinary cameras, namely insufficient dynamic range. Because wide dynamic technology covers a wider dynamic range, it effectively improves the imaging quality of scenes with a large illumination range and is widely used in video surveillance, remote sensing, military reconnaissance, and so on.
Current wide dynamic techniques fall into two classes by processing mode: one class is hardware-based, the other software-based. Hardware-based methods mainly improve the structure of the acquisition device, for example by using high-dynamic-range cameras or specially modified sensor chips. However, such methods are complicated to manufacture and costly, which is unfavorable for large-scale application.
Software-based processing is the more widely used mode; representative methods are tone mapping and multi-exposure fusion. Both use an ordinary camera to capture the same scene several times with different exposures, then fuse the details of these captures through various algorithms into a new image, i.e. a wide dynamic range image, which is finally shown on a regular display. The shortcoming of such methods is that the scene must be shot several times and multiple images must then be processed, so real-time requirements cannot be met.
Summary of the invention:
Aiming at the deficiencies of existing wide dynamic methods, the present invention proposes a wide dynamic processing method based on the visual system. It uses a single frame of image information from the current scene, formulates a two-layer wide dynamic processing mechanism according to the characteristics of the human visual system, and computes an environmental-factor parameter from pixel-neighborhood information, so that the processed image recovers information in dark areas, suppresses highlights in bright areas to reduce halation, and stretches the middle region, obtaining a larger dynamic range.
The technical scheme of the present invention is as follows:
A wide dynamic processing method based on the visual system, with the following concrete steps:
(1) Capture a single image of a given scene to obtain an RGB color space image of the scene;
(2) Convert the image from the RGB color space to the YUV color space and obtain the luminance Y component;
The conversion formula from the RGB color space to the YUV color space is as follows:
Y = 0.299*R + 0.587*G + 0.114*B
U = -0.147*R - 0.289*G + 0.436*B
V = 0.615*R - 0.515*G - 0.100*B
In the YUV color space the luminance information Y is completely separated from the two chrominance components U and V, so this color representation better matches the visual characteristics of the human eye.
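As an illustrative sketch (not part of the patent text; the function name and float-image convention are assumptions), the conversion above can be written with NumPy:

```python
import numpy as np

def rgb_to_yuv(rgb):
    """Convert an H x W x 3 RGB image (floats) to YUV using the
    coefficient matrix from the formula above."""
    m = np.array([[ 0.299,  0.587,  0.114],   # Y row
                  [-0.147, -0.289,  0.436],   # U row
                  [ 0.615, -0.515, -0.100]])  # V row
    return rgb @ m.T  # applies the 3x3 matrix to every pixel

# For a pure-white pixel, Y = 0.299 + 0.587 + 0.114 = 1 and U = V = 0
yuv = rgb_to_yuv(np.ones((1, 1, 3)))
```

Only the Y channel is used in the next step; U and V are left untouched so the chrominance is preserved.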
(3) Use the pixel-neighborhood information of the Y component to compute the environmental factor of each pixel;
The value of the environmental factor a directly determines the output, and hence the quality of the wide dynamic image. It is computed from the luminance component Y of the YUV color space as follows:
a = Y * G + Ȳ
where G is a Gaussian template, * denotes convolution, and Ȳ is the mean of the luminance Y.
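A sketch of this computation in NumPy (illustrative only; the patent does not specify the Gaussian template's size or sigma, so 5x5 with sigma = 1 is an assumption here, as is the edge-replicating border handling):

```python
import numpy as np

def environment_factor(y, size=5, sigma=1.0):
    """a = Y * G + mean(Y): convolve luminance Y with a Gaussian
    template G and add the global mean of Y."""
    t = np.arange(size) - size // 2
    k = np.exp(-t**2 / (2.0 * sigma**2))
    k /= k.sum()                      # normalized 1-D Gaussian
    pad = size // 2
    yp = np.pad(y, pad, mode='edge')  # replicate borders (assumption)
    # separable convolution: filter rows, then columns
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode='valid'), 1, yp)
    blur = np.apply_along_axis(lambda c: np.convolve(c, k, mode='valid'), 0, rows)
    return blur + y.mean()
```

On a constant luminance image the blur is the identity, so a = 2*Y there; in general a is large in bright neighborhoods and small in dark ones, which is what drives the mapping derived in step (4).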
(4) According to the characteristics of the human visual system, use the environmental factor to derive the wide dynamic mapping formula;
The human visual system (HVS) consists primarily of the retina, which comprises three functional layers: the photoreceptor layer, the outer plexiform layer, and the inner plexiform layer. The photoreceptor layer is mainly responsible for the conversion, transmission, and compression of the light signal. The light signal to be processed by the invention is divided into two parts: the first part is the incident light signal, and the second part is the light signal fed back by the outer plexiform layer based on the neighborhood.
The first part, the incident light signal, is represented by the classical Naka-Rushton compression equation:
y = x / (x + a)    (Formula 1)
where x is the input, y is the output, and a is the environmental factor, a positive number greater than 0.
The curve of Formula 1 is shown schematically in Fig. 1, from which a rule can be drawn: when the input x lies in the interval [0, 1], the smaller a is, the more y is boosted; the larger a is, the less y is boosted.
The second part, the light signal fed back by the outer plexiform layer based on the neighborhood, plays a regulatory role on the incident light signal. This is because the human eye needs a period of adaptation when moving from a dark place to a bright place, or from a bright place to a dark place; this is a process of gradual feedback adaptation. The invention represents this process with the following formula:
y = a*x / (x + a)    (Formula 2)
The curve of Formula 2 is shown schematically in Fig. 2, from which a rule can be drawn: when the input x lies in [0, 1], the smaller a is, the less y is boosted; the larger a is, the more y is boosted. This is exactly opposite to the rule of Formula 1, so it plays a regulating role and matches the visual characteristics of the human eye.
Combining Formula 1 and Formula 2 yields a simulated two-layer vision system, i.e. the final wide dynamic mapping formula proposed by the invention:
y = x / (x + a) + a*x / (x + a) = (1 + a)*x / (x + a)    (Formula 3)
As can be seen from Fig. 3, by adjusting a, the output y forms with the input x a nonlinear relationship composed of a family of gamma-like curves. When the input x lies in [0, 1], the smaller a is, the more y is boosted; the larger a is, the less y is boosted.
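A minimal numeric sketch of Formula 3 and the stated rule (illustrative; the function name is an assumption):

```python
def wide_dynamic_map(x, a):
    """Formula 3: y = (1 + a) * x / (x + a), with a > 0.
    Fixed endpoints: y(0) = 0 and y(1) = (1 + a)/(1 + a) = 1 for any a."""
    return (1.0 + a) * x / (x + a)

# Rule from the text: for x in (0, 1), a smaller a boosts y more.
y_small_a = wide_dynamic_map(0.2, 0.1)   # strong boost of a dark input
y_large_a = wide_dynamic_map(0.2, 10.0)  # weak boost
```

Because the endpoints 0 and 1 are fixed for every a, varying a per pixel changes only the shape of the curve, never the output range, which is why the curves in Fig. 3 resemble a gamma family.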
(5) Use the wide dynamic mapping formula to transform the original RGB image and obtain the wide dynamic RGB image.
During the transformation, the three components (R, G, B) of each pixel in the RGB color space are used in turn as the input x of Formula 3, and Formula 3 computes the new output (R', G', B').
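Putting the five steps together, a compact end-to-end sketch (illustrative; float images in [0, 1], the Gaussian parameters, and the border handling are all assumptions not fixed by the patent):

```python
import numpy as np

def wide_dynamic_process(rgb, size=5, sigma=1.0):
    """Steps 1-5: Y from the YUV formula, per-pixel environmental
    factor a = Y * G + mean(Y), then Formula 3 on each RGB channel."""
    # Step 2: luminance component of the RGB -> YUV conversion
    y = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    # Step 3: Gaussian blur of Y (separable) plus the global mean
    t = np.arange(size) - size // 2
    k = np.exp(-t**2 / (2.0 * sigma**2))
    k /= k.sum()
    pad = size // 2
    yp = np.pad(y, pad, mode='edge')
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode='valid'), 1, yp)
    a = np.apply_along_axis(lambda c: np.convolve(c, k, mode='valid'), 0, rows)
    a = a + y.mean()
    # Steps 4-5: Formula 3, y = (1 + a) x / (x + a), channel by channel
    a3 = a[..., np.newaxis]
    return (1.0 + a3) * rgb / (rgb + a3)

out = wide_dynamic_process(np.full((8, 8, 3), 0.25))
```

On this uniform mid-dark test image, a = 0.25 + 0.25 = 0.5 everywhere, so every channel maps to 1.5 * 0.25 / 0.75 = 0.5: the dark input is lifted, as the description claims.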
The wide dynamic method based on the visual system proposed by the invention preserves the detail of both highlight and dark regions well; the resulting wide dynamic image quality is good and resource consumption is low. In particular, since no high dynamic image needs to be synthesized from multiple shots, the method meets real-time processing requirements well.
Brief description of the drawings:
Fig. 1 is a schematic curve of the Naka-Rushton equation;
Fig. 2 is a schematic curve of the equation for the outer plexiform layer's neighborhood-based feedback;
Fig. 3 is a schematic curve of the wide dynamic mapping equation;
Fig. 4 shows the original image and the wide dynamic result for a daytime scene;
Fig. 5 shows the original image and the wide dynamic result for a night-time scene.
Embodiment:
The present invention is described in further detail below with reference to the accompanying drawings.
First, a single image of the given scene is captured to obtain an RGB color space image of the scene. As is well known, any colored light in nature can be produced by mixing the three primaries R, G, B in different proportions; when all three primary components are 0, black is produced, and when all three are 255, white is produced. The RGB color space is based on the physical primaries and suits color displays, but this representation does not match the visual characteristics of the human eye. Therefore, the image is converted from the RGB color space to the YUV color space with the following formula:
Y = 0.299*R + 0.587*G + 0.114*B
U = -0.147*R - 0.289*G + 0.436*B
V = 0.615*R - 0.515*G - 0.100*B
In the YUV color space the luminance information Y is completely separated from the two chrominance components U and V, so this color representation better matches the visual characteristics of the human eye.
The human visual system (HVS) consists primarily of the retina, which comprises three functional layers: the photoreceptor layer, the outer plexiform layer, and the inner plexiform layer. The photoreceptor layer is mainly responsible for the conversion, transmission, and compression of the light signal. The light signal to be processed by the invention is divided into two parts: the first part is the incident light signal, and the second part is the light signal fed back by the outer plexiform layer based on the neighborhood.
The first part, the incident light signal, is represented by the classical Naka-Rushton compression equation:
y = x / (x + a)    (Formula 1)
where x is the input, y is the output, and a is the environmental factor, a positive number greater than 0.
The curve of Formula 1 is shown schematically in Fig. 1, from which a rule can be drawn: when the input x lies in the interval [0, 1], the smaller a is, the more y is boosted; the larger a is, the less y is boosted.
The feedback of the second part plays a regulatory role on the first part. This is because the human eye needs a period of adaptation when moving from a dark place to a bright place, or from a bright place to a dark place; this is a process of gradual feedback adaptation. The invention represents this process with the following formula:
y = a*x / (x + a)    (Formula 2)
The curve of Formula 2 is shown schematically in Fig. 2, from which a rule can be drawn: when the input x lies in [0, 1], the smaller a is, the less y is boosted; the larger a is, the more y is boosted. This is exactly opposite to the rule of Formula 1, so it plays a regulating role and matches the visual characteristics of the human eye.
Combining Formula 1 and Formula 2 yields a simulated two-layer vision system, i.e. the final wide dynamic mapping formula proposed by the invention:
y = x / (x + a) + a*x / (x + a) = (1 + a)*x / (x + a)    (Formula 3)
As can be seen from Fig. 3, by adjusting a, the output y forms with the input x a nonlinear relationship composed of a family of gamma-like curves. When the input x lies in [0, 1], the smaller a is, the more y is boosted; the larger a is, the less y is boosted.
The value of the environmental factor a directly determines the output, and hence the quality of the wide dynamic image. To obtain a better output, the invention proposes a method of computing the environmental factor from the neighborhood information of each pixel; to match the characteristics of the human eye, it is computed from the luminance component Y of the YUV color space as follows:
a = Y * G + Ȳ    (Formula 4)
where G is a Gaussian template, * denotes convolution, and Ȳ is the mean of the luminance Y.
The environmental factor of each pixel is obtained by convolving the Y component with the Gaussian template, which means the neighborhood information of a pixel is taken into account when computing the current point. This recovers information in dark areas, suppresses highlights in bright areas to reduce halation, and stretches the middle region, thereby obtaining a larger dynamic range.
The concrete steps of the wide dynamic processing method based on the visual system of the present invention are as follows:
1. Capture a single image of the given scene to obtain an RGB color space image of the scene;
2. Convert the image from the RGB color space to the YUV color space;
3. Use pixel-neighborhood information to compute the environmental factor of each pixel by Formula 4;
4. According to the characteristics of the human visual system, use the environmental factor to obtain the wide dynamic mapping formula;
5. Use the three components (R, G, B) of each pixel in the RGB color space in turn as the input, and compute the new output (R', G', B') with Formula 3.
The results for the original and wide dynamic images are given in Fig. 4 and Fig. 5. It can be seen that, for both the daytime scene and the night-time scene, the wide dynamic image is clearly better than the original: more detail is retained in both bright and dark regions, which meets practical application needs.

Claims (1)

1. A wide dynamic processing method based on the visual system, characterized by comprising the following steps:
(1) capturing a single image of a given scene to obtain an RGB color space image of the scene;
(2) converting the image from the RGB color space to the YUV color space to obtain the luminance Y component;
the conversion formula from the RGB color space to the YUV color space being as follows:
Y = 0.299*R + 0.587*G + 0.114*B
U = -0.147*R - 0.289*G + 0.436*B
V = 0.615*R - 0.515*G - 0.100*B
in the YUV color space the luminance information Y is completely separated from the two chrominance components U and V, so this color representation better matches the visual characteristics of the human eye;
(3) using the pixel-neighborhood information of the Y component to compute the environmental factor of each pixel;
the environmental factor a being computed from the luminance component Y of the YUV color space as follows:
a = Y * G + Ȳ
where G is a Gaussian template, * denotes convolution, and Ȳ is the mean of the luminance Y;
(4) according to the characteristics of the human visual system, using the environmental factor to obtain the wide dynamic mapping formula;
the human visual system consisting primarily of the retina, which comprises three functional layers: the photoreceptor layer, the outer plexiform layer, and the inner plexiform layer; the photoreceptor layer being mainly responsible for the conversion, transmission, and compression of the light signal; the light signal to be processed being divided into two parts: the first part being the incident light signal, and the second part being the light signal fed back by the outer plexiform layer based on the neighborhood;
the first part, the incident light signal, being represented by the classical Naka-Rushton compression equation:
y = x / (x + a)    (Formula 1)
where x is the input, y is the output, and a is the environmental factor, a positive number greater than 0;
from Formula 1 a rule can be drawn: when the input x lies in the interval [0, 1], the smaller a is, the more y is boosted; the larger a is, the less y is boosted;
the second part, the light signal fed back by the outer plexiform layer based on the neighborhood, playing a regulatory role on the first part, the incident light signal, because the human eye needs a period of adaptation when moving from a dark place to a bright place or from a bright place to a dark place, a process of gradual feedback adaptation, represented here by the following formula:
y = a*x / (x + a)    (Formula 2)
from Formula 2 a rule can be drawn: when the input x lies in [0, 1], the smaller a is, the less y is boosted; the larger a is, the more y is boosted; this is exactly opposite to the rule of Formula 1, plays a regulating role, and matches the visual characteristics of the human eye;
combining Formula 1 and Formula 2 yields a simulated two-layer vision system, i.e. the final wide dynamic mapping formula:
y = x / (x + a) + a*x / (x + a) = (1 + a)*x / (x + a)    (Formula 3)
(5) using the wide dynamic mapping formula to transform the original RGB image to obtain the wide dynamic RGB image;
during the transformation, the three components (R, G, B) of each pixel in the RGB color space being used in turn as the input x of Formula 3, and Formula 3 computing the new output (R', G', B').
CN201510510485.3A 2015-08-19 2015-08-19 A wide dynamic processing method based on the visual system Active CN105118032B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510510485.3A CN105118032B (en) 2015-08-19 2015-08-19 A wide dynamic processing method based on the visual system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510510485.3A CN105118032B (en) 2015-08-19 2015-08-19 A wide dynamic processing method based on the visual system

Publications (2)

Publication Number Publication Date
CN105118032A true CN105118032A (en) 2015-12-02
CN105118032B CN105118032B (en) 2018-11-06

Family

ID=54666007

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510510485.3A Active CN105118032B (en) 2015-08-19 2015-08-19 A wide dynamic processing method based on the visual system

Country Status (1)

Country Link
CN (1) CN105118032B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105930854A (en) * 2016-04-19 2016-09-07 东华大学 Manipulator visual system
CN106846359A (en) * 2017-01-17 2017-06-13 湖南优象科技有限公司 Moving target rapid detection method based on video sequence
CN114051098A (en) * 2021-11-23 2022-02-15 河南牧业经济学院 Intelligent acquisition method and platform for visual images

Citations (2)

Publication number Priority date Publication date Assignee Title
CN103679157A (en) * 2013-12-31 2014-03-26 电子科技大学 Human face image illumination processing method based on retina model
CN103870820A (en) * 2014-04-04 2014-06-18 南京工程学院 Illumination normalization method for extreme illumination face recognition

Patent Citations (2)

Publication number Priority date Publication date Assignee Title
CN103679157A (en) * 2013-12-31 2014-03-26 电子科技大学 Human face image illumination processing method based on retina model
CN103870820A (en) * 2014-04-04 2014-06-18 南京工程学院 Illumination normalization method for extreme illumination face recognition

Non-Patent Citations (3)

Title
LAURENCE MEYLAN等: "Model of retinal local adaptation for the tone mapping of color filter array images", 《OPTICAL SOCIETY OF AMERICA》 *
王章野 et al.: "Simulating the dynamic process of scene light-dark adaptation based on human visual perception", Journal of Software (软件学报) *
陈军 et al.: "An HDR synthesis algorithm for color images in YUV space", Computer Engineering (计算机工程) *

Cited By (5)

Publication number Priority date Publication date Assignee Title
CN105930854A (en) * 2016-04-19 2016-09-07 东华大学 Manipulator visual system
CN106846359A (en) * 2017-01-17 2017-06-13 湖南优象科技有限公司 Moving target rapid detection method based on video sequence
CN106846359B (en) * 2017-01-17 2019-09-20 湖南优象科技有限公司 Moving target rapid detection method based on video sequence
CN114051098A (en) * 2021-11-23 2022-02-15 河南牧业经济学院 Intelligent acquisition method and platform for visual images
CN114051098B (en) * 2021-11-23 2023-05-30 河南牧业经济学院 Intelligent visual image acquisition method and platform

Also Published As

Publication number Publication date
CN105118032B (en) 2018-11-06

Similar Documents

Publication Publication Date Title
DE102016115292B4 Method and device for automatic exposure value acquisition for high dynamic range imaging
CN103593830B (en) A kind of low illumination level video image enhancement
CN106920221B (en) Take into account the exposure fusion method that Luminance Distribution and details are presented
JP7077395B2 (en) Multiplexed high dynamic range image
CN106897981A (en) A kind of enhancement method of low-illumination image based on guiding filtering
CN102722868B (en) Tone mapping method for high dynamic range image
CN103177424A (en) Low-luminance image reinforcing and denoising method
CN103034986A (en) Night vision image enhancement method based on exposure fusion
CN106504212A (en) A kind of improved HSI spatial informations low-luminance color algorithm for image enhancement
CN104883504A (en) Method and device for opening HDR (high-dynamic range) function on intelligent terminal
CN110706172B (en) Low-illumination color image enhancement method based on adaptive chaotic particle swarm optimization
DE102018119625A1 Reduction of structured IR patterns in stereoscopic depth sensor imaging
CN105825472A (en) Rapid tone mapping system and method based on multi-scale Gauss filters
CN104200431A (en) Processing method and processing device of image graying
CN106204470A (en) Low-light-level imaging method based on fuzzy theory
CN105096278A (en) Image enhancement method based on illumination adjustment and equipment thereof
CN103546730A (en) Method for enhancing light sensitivities of images on basis of multiple cameras
CN103065282A (en) Image fusion method based on sparse linear system
CN111970432A (en) Image processing method and image processing device
US20100207958A1 (en) Color image creating apparatus
CN107862672A (en) The method and device of image defogging
CN105118032A (en) Wide dynamic processing method based on visual system
CN105578081A (en) Imaging method, image sensor, imaging device and electronic device
CN104299213A (en) Method for synthesizing high-dynamic image based on detail features of low-dynamic images
Zheng et al. Low-light image and video enhancement: A comprehensive survey and beyond

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant