CN103973991A - Automatic exposure method for judging illumination scene on basis of B-P neural network - Google Patents

Automatic exposure method for judging illumination scene on basis of B-P neural network

Info

Publication number
CN103973991A
CN103973991A (application CN201410198357.5A)
Authority
CN
China
Prior art keywords
brightness
neural net
image
luminance
desirable
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201410198357.5A
Other languages
Chinese (zh)
Other versions
CN103973991B (en)
Inventor
宋宝
周向东
余晓菁
唐小琦
杜宝森
刘路
张文杰
占颂
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huazhong University of Science and Technology
Original Assignee
Huazhong University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huazhong University of Science and Technology filed Critical Huazhong University of Science and Technology
Priority to CN201410198357.5A priority Critical patent/CN103973991B/en
Publication of CN103973991A publication Critical patent/CN103973991A/en
Application granted granted Critical
Publication of CN103973991B publication Critical patent/CN103973991B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Landscapes

  • Studio Devices (AREA)

Abstract

The invention discloses an automatic exposure method that judges the illumination scene on the basis of a B-P neural network. The method comprises the following steps: S1, obtaining an original image through a video capture system; S2, dividing the original image into multiple regions; S3, computing the mean brightness of the image in each region to obtain a luminance vector; S4, designing a B-P neural network and using the luminance vector as its input to judge the illumination scene; S5, calculating the ideal brightness of the image according to the judgment result of the neural network; S6, using the deviation between the actual brightness of the original image and the ideal brightness as the initial input of a PID algorithm, and obtaining the ideal controlled quantity corresponding to the ideal brightness with the PID algorithm; S7, obtaining the exposure time t and the analog gain coefficient g from the ideal controlled quantity, and transmitting t and g to the sensor of the video capture system to achieve automatic exposure. The method judges the illumination scene accurately, the algorithm is simple, and it can be widely applied.

Description

Automatic exposure method for judging the illumination scene based on a B-P neural network
Technical field
The invention belongs to the technical field of video image processing, and specifically relates to an automatic exposure method that judges the illumination scene based on a B-P neural network.
Background technology
Automatic exposure refers to automatically adjusting the aperture, shutter, signal gain, and so on, so as to obtain an image that is clear and whose color is close to the real object. Ever since the point-and-shoot camera appeared, those skilled in the art have been studying automatic exposure methods. Early exposure methods used analog exposure: it was fast and required little computation, but it could not perform scene analysis. Although exposure compensation was possible, it was still difficult to obtain an ideal picture brightness, and the mechanical-electronic structure was complex and unstable.
After digital imaging technology was born, electronic exposure gradually replaced analog exposure. Electronic exposure only needs to analyze the digital image to obtain the light intensity, and then adjusts the aperture and shutter to perform automatic exposure. As embedded devices such as single-chip microcomputers, DSPs, and FPGAs were gradually applied to digital image acquisition, image capture devices gained some basic image analysis capability. To obtain clearer images whose color is closer to the real object, engineers introduced simple scene analysis into automatic exposure, adopting a dynamic ideal brightness value to adapt to various external lighting situations, such as backlight and facing the light, so that the exposure is performed according to the illumination scene. This produced exposure methods that judge the illumination scene.
Setting the ideal exposure value by image analysis based on image entropy is one exposure method based on the illumination scene: it gives the image subject good exposure under any lighting condition, but the image entropy algorithm is relatively complex, its applicability is limited, and it cannot be widely used. There is also an automatic exposure method based on face recognition and fuzzy logic, but this algorithm is even more complex to implement and likewise cannot be widely used.
Summary of the invention
In view of the above problems, the object of the present invention is to provide an automatic exposure method that judges the illumination scene based on a B-P neural network. It judges the illumination scene accurately, the algorithm is simple, it can properly expose the image subject under backlight and facing-the-light conditions, and it meets the automatic exposure requirements of the video processing front end in network camera applications.
To achieve the above object, the invention provides an automatic exposure method that judges the illumination scene based on a B-P neural network, characterized in that it comprises the following steps:
S1: obtain an original image through a video capture system;
S2: divide the original image into multiple regions according to the degree of attention paid to different regions of the picture, and number the regions in descending order of that degree;
S3: compute the mean image brightness of each divided region, and combine each region's position number with the mean brightness of the image in that region to obtain a luminance vector;
S4: design a B-P neural network according to the number of regions into which the original image is divided, use the luminance vector as the input of the neural network, judge the illumination scene, and output the judgment result;
S5: calculate the ideal brightness of the image according to the judgment result of the neural network;
S6: use the deviation between the actual brightness of the original image and the ideal brightness from S5 as the initial input of a PID algorithm, and use the PID algorithm to obtain the ideal controlled quantity required for the ideal brightness;
S7: from the ideal controlled quantity obtained in S6, obtain the ideal exposure time t and the ideal analog gain coefficient g, and transmit them to the sensor of the video capture system to realize automatic exposure.
Further, in step S4, the number of input-layer variables of the B-P neural network equals the number of regions into which the original image is divided, and the output of the designed B-P neural network is:
y = f(X), y ∈ {0, 1}
X is the vector of neural network input data and y is the output of the neural network, y = 0 or 1. The judgment result output by the B-P neural network has two kinds: 0 represents normal illumination and 1 represents a special illumination condition. Such a design determines the structure of the neural network quickly and keeps the design simple, which benefits its learning and training process.
Further, in step S5, when the neural network outputs 0, the ideal brightness is taken as one half of the largest component of the luminance vector;
when the neural network outputs 1, the ideal brightness is calculated as follows:
y_z = X*W^T / sum(W_i)
η = y_z/y_p - 1
y_l = y_mid/(1 + ηl)
In the formulas, y_z is the graded brightness of the image subject, X is the luminance vector of the image, W is the graded luminance weight vector, y_p is the evaluation brightness of the picture, η is the luminance error coefficient of the image subject, l is the compensation coefficient, whose value ranges between 0 and 1, y_mid is one half of the largest component of the luminance vector, i.e. the image middle brightness, y_l is the calculated ideal brightness, W_i is the i-th component of the graded luminance weight vector W, and T denotes transposition.
Here the graded luminance weight vector W represents the degree to which different regions of the image receive attention; it has the same number of components as the luminance vector X, in one-to-one correspondence. The more attention a region receives, the larger the corresponding component in the weight vector.
Further, the evaluation brightness y_p of the picture is calculated by the following formula:
y_p = X*W_z^T
where y_p is the picture evaluation brightness, X is the luminance vector of the image, and W_z is the evaluation luminance weight vector, designed from experience.
Here the evaluation luminance weight vector W_z also represents the degree to which different regions of the image receive attention; it has the same number of components as the luminance vector X, in one-to-one correspondence, and the more attention a region receives, the larger the corresponding component of the evaluation luminance weight vector. The evaluation luminance weight vector W_z differs from the graded luminance weight vector W: their component values differ in size.
Further, in step S6, the controlled quantity is expressed by the following formula:
ln p = ln(tg)
where ln p is the controlled quantity, p = tg, t is the automatic exposure time, and g is the analog gain coefficient.
Further, in step S7, the exposure time t and the analog gain coefficient g are obtained by looking them up in an exposure-gain table. There are several ways to obtain t and g from the controlled quantity ln p, such as table lookup, iteration, or other control algorithms; looking up an exposure-gain table is a relatively simple and fast one.
In the method of the invention, a B-P neural network is designed to judge the illumination scene. The B-P neural network is trained so that it acquires a certain adaptability and intelligence, and the luminance vector of a real image is used as its input, thereby realizing an accurate judgment of the illumination scene. Only after the illumination scene has been judged accurately can a suitable exposure be taken based on that scene. The method judges the illumination scene accurately, the algorithm is simple, it is highly versatile, and it can be widely applied.
Brief description of the drawings
Fig. 1 is the flow chart of the automatic exposure method of the embodiment of the present invention;
Fig. 2 shows the division and numbering of the original image regions in the embodiment;
Fig. 3 is the structure diagram of the B-P neural network designed in the embodiment;
Fig. 4 shows the neuron model of the B-P neural network in the embodiment;
Fig. 5 shows the effect of brightness correction according to the ideal brightness in the embodiment;
Fig. 6 is the system block diagram of obtaining the ideal controlled quantity with a PID controller in the embodiment;
Fig. 7 is the final exposure-gain table of the ov9712 camera in the embodiment;
Fig. 8 compares the pictures obtained by applying the automatic exposure method of the embodiment in different illumination scenes.
Embodiment
To make the object, technical scheme, and advantages of the present invention clearer, the invention is further elaborated below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here only explain the present invention and are not intended to limit it.
Fig. 1 is the flow chart of the automatic exposure method of the embodiment; the method comprises:
S1: obtain an original image through the video capture system. Light carrying the scene signal is focused by the lens onto a CMOS image sensor. A CMOS image sensor generally comprises three parts: the CMOS photosensitive element, the analog gain device, and the A/D converter. After a certain exposure time, the light falling on the CMOS sensor is converted into an analog electrical signal by the photosensitive element; the analog gain device amplifies the signal; finally the A/D converter converts it into a digital signal, which is output to the digital video acquisition controller.
S2: divide the original image into multiple regions according to the degree of attention paid to different regions, and number the regions in descending order of that degree. Considering the characteristics of a picture under frontlight or backlight and people's attention to and experience of pictures, the picture is divided into 13 regions. The centre of the picture is considered the region that receives the most attention, so it gets the smallest numbers, while the periphery receives less attention and gets the larger numbers; the divided regions are numbered consecutively from 1 to 13 according to the degree of attention. The division and numbering of the image are shown in Fig. 2.
S3: compute the mean image brightness of each divided region, and combine each region's position number with its mean brightness to obtain the luminance vector. For the 13 regions divided in S2, compute the mean image brightness of each region and denote it x_i, where i is the position number of the region, i = 1, 2, …, 13; the luminance vector is X = (x_1, x_2, …, x_13).
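Step S3 can be sketched in a few lines. Note that the exact 13-region layout of Fig. 2 is not reproduced in this text, so the sketch below uses a hypothetical layout (one merged centre region plus the 12 remaining blocks of a 4×4 grid, ordered by distance from the image centre); only the idea of "per-region mean brightness, centre first" is taken from the patent.

```python
import numpy as np

def luminance_vector(gray, n_regions=13):
    """Per-region mean brightness, ordered centre-out (sketch of S3).

    Hypothetical layout: split the image into a 4x4 grid, merge the
    4 innermost blocks into one centre region, keep the other 12
    blocks, and order everything by distance from the image centre.
    """
    h, w = gray.shape
    gh, gw = h // 4, w // 4
    blocks = []
    for r in range(4):
        for c in range(4):
            block = gray[r*gh:(r+1)*gh, c*gw:(c+1)*gw]
            # squared distance of the block centre from the image centre
            d = (r - 1.5) ** 2 + (c - 1.5) ** 2
            blocks.append((d, block.mean()))
    blocks.sort(key=lambda t: t[0])          # centre regions first
    means = [m for _, m in blocks]
    centre = sum(means[:4]) / 4.0            # merge 4 inner blocks -> region 1
    return np.array([centre] + means[4:n_regions + 3])
```

Each component x_i of the returned vector is then fed to the B-P neural network as one input.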
S4: design the B-P neural network according to the number of regions into which the original image is divided, use the luminance vector as the input of the network, judge the illumination scene, and output the judgment result. The number of input-layer variables of the B-P neural network equals the number of regions, which is 13 in this embodiment, and the output of the designed network is:
y = f(X), y ∈ {0, 1}
X is the vector of neural network input data and y is the output of the neural network, y = 0 or 1. The judgment result output by the B-P neural network has two kinds: 0 represents normal illumination and 1 represents a special illumination condition.
With the number of input-layer variables fixed at 13 and a single output neuron, only the number of hidden layers and the number of neurons in each hidden layer remain to be determined before the structure of the B-P neural network is fixed.
A B-P neural network must go through training before it can possess the flexibility and intelligence to judge the illumination scene accurately. There are several training algorithms for B-P neural networks; the error back-propagation learning algorithm is preferred in this embodiment, but the present invention does not limit the training algorithm.
The hidden part may be one layer or several. With several hidden layers the error back-propagation learning algorithm does not train well, whereas with a single hidden layer the B-P neural network is simple in structure and the error back-propagation algorithm is mature, so a single hidden layer is adopted; the present invention, however, does not limit the number of hidden layers.
The number of hidden-layer neurons is usually set from experience; the following reference formulas are commonly used:
l < n - 1
l < √(m + n) + a
l = log2(n)
where l is the number of hidden-layer neurons, n is the number of input-layer neurons, m is the number of output neurons, and a is a constant in the range 0-10. From experience, the constant a is taken as 3 and the number of hidden-layer neurons as 6; the structure of the neural network designed in this embodiment is shown in Fig. 3.
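The sizing heuristics above can be checked numerically for this embodiment (n = 13 inputs, m = 1 output, empirical constant a = 3); the choice of taking the integer part of the square-root bound is an assumption consistent with the value 6 used in the text:

```python
import math

n, m, a = 13, 1, 3           # inputs, outputs, empirical constant

bound1 = n - 1               # l < n - 1        -> l < 12
bound2 = math.sqrt(m + n) + a  # l < sqrt(m+n)+a -> l < ~6.74
bound3 = math.log2(n)        # l = log2(n)      -> ~3.70

l = int(bound2)              # largest integer under the sqrt bound
```

With these numbers l comes out as 6, matching the 6 hidden neurons of Fig. 3.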
Next, the parameters of each neuron must be determined. The neuron parameters comprise the input weights, the neuron bias, and the transfer function. The neuron model is shown in Fig. 4; suppose Fig. 4 shows the j-th neuron, whose inputs are p_j1 to p_jn, whose bias is b_j, whose transfer function is f_j, and whose weights are w_j1 to w_jn.
The transfer function is generally a sigmoid or logistic function. When the transfer function is a sigmoid, the B-P neural network can complete the training process efficiently with steepest descent, so the sigmoid function is selected as the transfer function for all neurons.
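The neuron model of Fig. 4 amounts to a weighted sum plus bias passed through the sigmoid; a minimal sketch:

```python
import numpy as np

def sigmoid(x):
    """Sigmoid transfer function used by all neurons."""
    return 1.0 / (1.0 + np.exp(-x))

def neuron(p, w, b):
    """Forward pass of the j-th neuron of Fig. 4:
    inputs p, weights w, bias b, sigmoid transfer function."""
    return sigmoid(np.dot(w, p) + b)
```

For example, with zero inputs, zero weights, and zero bias the neuron outputs sigmoid(0) = 0.5.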
Determining the input weights and bias of each neuron is the training process of the neural network. The B-P neural network is trained with the error back-propagation learning algorithm, which is mature and integrated in the Matlab toolbox; it can be programmed in Matlab, or the Matlab toolbox can be used directly. In this embodiment the Matlab toolbox is called directly to complete the learning process.
The sample data set used by the error back-propagation learning process greatly affects the speed of training and the accuracy of the final weights and biases. To obtain the neuron parameters of the network quickly and accurately, the learning samples must be chosen correctly; considering that the samples must cover all possible parameter situations of this application, the sample data set of this embodiment is designed as in Table 1.
Table 1: training sample set of the neural network
Illumination condition          Sample count    y output
Facing the light                1000            1
Backlight                       1000            1
Indoor normal illumination      1000            0
Outdoor normal illumination     1000            0
The image brightness of the image samples designed in Table 1 is converted into luminance vectors as the input of the neural network. The luminance vector X = (x_1, x_2, …, x_13) of each sample is obtained, and each component of the luminance vector is used as an input of the B-P neural network.
Substituting the sample data of Table 1 into the error back-propagation learning module of the Matlab toolbox and training the B-P neural network of this embodiment yields the input weights and bias of each neuron.
Table 2 shows the input weights from the input layer to the hidden layer and the bias of each hidden-layer neuron obtained by training.
Table 3 shows the input weights from the hidden layer to the output layer and the bias of the output-layer neuron.
Table 2: neural network input layer to hidden layer information
          Neuron 1  Neuron 2  Neuron 3  Neuron 4  Neuron 5  Neuron 6
x_1   0.1276  -0.3372  -0.0845  -0.1833  -0.8470  -0.0714
x_2   0.5879  -1.5609  -0.6589   0.5194   0.5053   0.0397
x_3  -0.1543  -0.2152   0.0366  -0.9620   0.4315   0.2737
x_4   0.0622  -0.4902   0.7891  -0.8363   0.0391   0.3279
x_5   1.0354  -0.6431  -0.3181   0.0359  -0.8763   0.1632
x_6  -0.5359  -0.0690   0.6762  -0.6810  -0.0480  -0.4427
x_7  -0.1343  -0.7522   0.0114   0.1146  -0.2919  -0.7228
x_8   0.5234   0.2893   0.8203  -0.1602   0.8097   0.3045
x_9  -0.4793   0.0864   0.6060  -0.7993  -0.0423   0.0958
x_10  1.0091  -1.1566   0.1749   0.8206   0.2858  -0.0081
x_11  0.1433   0.0517   0.0961  -0.6016   0.4484   0.1029
x_12  0.4519   0.8756   1.1838  -0.8442  -0.5452  -0.7440
x_13 -0.1887  -0.5859   0.8504  -0.5049   0.1127   0.5854
Bias b  -1.4628  1.7321  1.0325  -0.5727  -0.9139  -2.0257
Table 3: neural network hidden layer to output layer information
The neuron parameters of Table 2 and Table 3 are applied to the B-P neural network of this embodiment. Backlit photos, photos facing the light, photos under normal indoor illumination, and photos whose scene the image-entropy method cannot judge are selected at random as test samples, and the accuracy with which the designed B-P neural network judges the illumination scene is verified. The results show that the B-P neural network of this embodiment judges the illumination scene of a picture accurately, and that it can judge the illumination scene of pictures for which the image-entropy scene judgment fails.
S5: according to the judgment result of the neural network, calculate the ideal brightness of the image. When the neural network outputs 0, the ideal brightness is taken as one half of the largest component of the luminance vector; when the neural network outputs 1, the ideal brightness is calculated as follows:
y_z = X*W^T / sum(W_i)
η = y_z/y_p - 1
y_l = y_mid/(1 + ηl)
In the formulas, y_z is the graded brightness of the image subject, X is the luminance vector of the image, W is the graded luminance weight vector, y_p is the evaluation brightness of the picture, η is the luminance error coefficient of the image subject, l is the compensation coefficient, whose value ranges between 0 and 1, y_mid is one half of the largest component of the luminance vector, i.e. the image middle brightness, y_l is the calculated ideal brightness, W_i is the i-th component of the graded luminance weight vector W, and T denotes transposition.
The evaluation brightness y_p of the picture is calculated by the following formula:
y_p = X*W_z^T
where y_p is the picture evaluation brightness, X is the luminance vector of the image, X = (x_1, x_2, …, x_13), and W_z is the evaluation luminance weight vector, designed from experience.
Here the graded luminance weight vector W and the evaluation luminance weight vector W_z both represent the degree to which different regions of the image receive attention; their components correspond one-to-one with those of the luminance vector X, and the more attention a region receives, the larger the corresponding component in the weight vector. The component values of W_z and the graded luminance weight vector W differ in size; from experience, W_z = [4, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0], W = [4, 2, 2, 2, 2, 1, 1, 1, 1, 1, 1, 1, 1], and the compensation coefficient is taken as l = 0.5.
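The ideal-brightness computation of step S5 can be sketched directly from the formulas above, using the weight vectors W and W_z and the compensation coefficient l = 0.5 given in the embodiment; the input X below is a hypothetical 13-component luminance vector, not data from the patent:

```python
import numpy as np

W  = np.array([4, 2, 2, 2, 2, 1, 1, 1, 1, 1, 1, 1, 1], dtype=float)
Wz = np.array([4, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0], dtype=float)
l  = 0.5                                  # compensation coefficient

def ideal_brightness(X, network_output):
    """Sketch of step S5: ideal brightness from the scene judgment."""
    y_mid = X.max() / 2.0                 # image middle brightness
    if network_output == 0:               # normal illumination
        return y_mid
    y_z = X @ W / W.sum()                 # graded brightness of the subject
    y_p = X @ Wz                          # evaluation brightness of the picture
    eta = y_z / y_p - 1.0                 # luminance error coefficient
    return y_mid / (1.0 + eta * l)        # ideal brightness y_l
```

For a uniform picture of brightness 100 this gives 50 under normal illumination and a higher value under special illumination, compensating the subject.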
Fig. 5 compares photos taken under backlight and facing the light with the same photos corrected according to the ideal brightness. Fig. 5(a) and Fig. 5(c) are the original pictures taken under backlight and facing the light respectively; Fig. 5(b) and Fig. 5(d) are the pictures after correction according to the ideal brightness. After correction the picture brightness is more suitable for the human eye than before, which shows that the design of the ideal-brightness formula, the graded luminance weight vector W, and the evaluation luminance weight vector W_z in the embodiment is reasonable.
S6: use the deviation between the actual brightness of the original image and the ideal brightness from S5 as the initial input of the PID algorithm, and use the PID algorithm to obtain the ideal controlled quantity required for the ideal brightness.
First, the controlled quantity is expressed by the following formula:
ln p = ln(tg)
where ln p is the controlled quantity, p = tg, t is the automatic exposure time, and g is the analog gain coefficient.
The formula ln p = ln(tg) is obtained as follows:
Taking factors such as DC offset into account, the traditional expression of the image brightness y after actual exposure is:
y = πkltgr^2 + c_1tg + c_2
where k is a proportionality coefficient; l is the illuminance delivered by the external environment to the lens, which depends on factors such as the lighting condition, the surface reflectivity of the subject, and the shooting angle; r is the aperture radius of the image capture device, generally a fixed value; t is the exposure time and g is the analog gain coefficient, both adjustable; and c_1 and c_2 are constants determined by the structure of the image capture system.
Black-level correction removes the DC offset produced during the photosensitive and analog-gain processes, so c_1tg + c_2 = 0, and the actual image brightness y becomes
y = πkltgr^2
This expression for the actual image brightness is simplified as follows:
p = tg, q = l, k' = πkr^2, y = k'pq
The aperture radius r is constant, the external scene illuminance l is an uncontrollable quantity, and the exposure time t and analog gain coefficient g are adjustable, so p is the controlled quantity, q is the uncontrollable quantity, and k' is a constant.
The relation between the actual image brightness y, the controlled quantity p, and the uncontrollable quantity q is nonlinear; taking logarithms on both sides gives
ln y = ln k' + ln p + ln q
in which ln y is linear in ln p and ln q; ln p is the controlled quantity and ln q is the uncontrolled quantity.
The effect of this simplification is to separate out the factors that affect the actual image brightness y: on the controllable side the image brightness depends only on p = tg, so adjusting the exposure time t and the analog gain coefficient g adjusts the actual image brightness.
The ideal brightness is expressed as the picture brightness output under the ideal controlled quantity. From the ideal picture brightness calculated in step S5, the ideal controlled quantity is obtained, and from it the exposure time t and analog gain coefficient g needed for the ideal brightness. In this embodiment the PID algorithm is used to obtain the ideal controlled quantity; its initial input is the deviation between the actual brightness of the original image and the ideal brightness of that image.
Fig. 6 is the system block diagram of obtaining the ideal controlled quantity with a PID controller. In it, y_e in ln y_e is the ideal brightness; y_out in ln y_out is a brightness value close to the ideal brightness output after the PID algorithm; y_in in ln y_in is the image brightness input to the digital video acquisition device after the PID computation; ln p is the controlled quantity; ln q + ln k' acts as a system disturbance; G(s) is the transfer function from the digital video acquisition device up to the automatic exposure control; and H(s) is the feedback transfer function that forms the error. The deviation between the ideal brightness y_e and y_out is used repeatedly as the input of the PID algorithm; in each cycle the proportional coefficient k_p, integral coefficient k_i, and differential coefficient k_d are adjusted so that the controlled quantity approaches the ideal value, i.e. ln y_out approaches ln y_e. When |ln y_out - ln y_e| < h_e, the controlled quantity ln p at that moment is taken as the ideal controlled quantity and the PID computation ends; experiments show that the effect is better when the brightness threshold h_e is ln 1.1.
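The PID iteration of Fig. 6 can be sketched as a discrete loop on log-brightness. The plant model ln y = ln p + (ln q + ln k') and the stopping threshold ln 1.1 come from the text; the gains kp, ki, kd below are hypothetical placeholders, not values from the patent:

```python
import math

def ideal_control(lny_e, lnq_plus_lnk, kp=0.8, ki=0.2, kd=0.05,
                  h_e=math.log(1.1), max_iter=100):
    """Sketch of step S6: discrete PID on log-brightness.

    lny_e is the ideal log-brightness; lnq_plus_lnk is the
    disturbance ln q + ln k'. Iterates until the output
    log-brightness is within h_e = ln 1.1 of the ideal value.
    """
    lnp = 0.0                             # initial controlled quantity
    integral, prev_err = 0.0, 0.0
    for _ in range(max_iter):
        lny_out = lnp + lnq_plus_lnk      # linearised plant response
        err = lny_e - lny_out
        if abs(err) < h_e:
            return lnp                    # ideal controlled quantity ln p
        integral += err
        lnp += kp * err + ki * integral + kd * (err - prev_err)
        prev_err = err
    return lnp
```

Because the linearised plant is a pure gain in log space, the loop settles within the ln 1.1 band after a few iterations.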
S7: from the ideal controlled quantity obtained in S6, obtain the ideal exposure time t and the ideal analog gain coefficient g by table lookup, and transmit them to the sensor of the video capture system to realize automatic exposure.
There are several ways to obtain the exposure time t and the analog gain coefficient g from the controlled quantity ln p, such as table lookup, iteration, or other control algorithms; looking up an exposure-gain table is a relatively simple one. The exposure-gain table differs between lens models, because the functional relation between the exposure time t and the analog gain coefficient g differs; the embodiment explains how the exposure-gain table is obtained using the ov9712 camera produced by OmniVision.
The ov9712 image sensor is a CMOS image sensor with a gain coefficient adjustable in the range 1-31. According to the characteristics of CMOS image sensors, to ensure that no stripes or flicker appear under artificial light, the exposure time and gain coefficient must satisfy:
f_v = 2f/n
t = m/(2f)
t ≤ 1/f_v
1 ≤ g ≤ 31
ln p = ln(tg)
In the formulas, ln p is the controlled quantity; m and n are proportionality coefficients: adjusting m changes the exposure time, and adjusting n changes the video sampling frame rate f_v; f is the AC mains frequency, f_v is the video sampling frame rate, t is the exposure time, and g is the analog gain coefficient. Here the mains frequency is f = 50 Hz and 1 ≤ g ≤ 31.
Under different illuminations an image of good brightness can be obtained by adjusting the exposure time; that is, across illumination conditions the exposure time has the larger influence on image brightness. Therefore, when determining the exposure-gain table, the exposure-time table is determined first, and the gain table is then computed under the constraints above to obtain the final exposure-gain table.
Specifically, in low illumination, when the illuminance is below 10 lux, n = 8 is taken, the sampling rate drops to 12.5 Hz, and the exposure time is set to 80 ms, which yields an image of suitable brightness. Under strong light, when the illuminance exceeds 2000 lux (an ordinary lamp gives roughly 500 lux), the exposure time need not be considered: however short it is, overexposure occurs, so the exposure time may be set either above or below 10 ms. Following this approach, the exposure-time table of the camera is determined first, then the gain table is computed under the constraints, giving the final exposure-gain table shown in Fig. 7.
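The lookup of step S7 can be sketched as: pick a flicker-free exposure time t = m/(2f) from the exposure-time table, then solve g = p/t and clamp it to the ov9712 range 1-31. The exposure-time table below is hypothetical (the actual table of Fig. 7 is not reproduced here), and preferring the longest usable time to minimise gain noise is a design assumption, not a rule from the patent:

```python
import math

F_MAINS = 50.0                                   # AC mains frequency, Hz

# hypothetical exposure-time table: flicker-free times t = m/(2f)
EXPOSURE_TIMES = [m / (2 * F_MAINS) for m in range(1, 9)]   # 10..80 ms

def lookup_exposure(lnp):
    """Sketch of step S7: recover (t, g) from the controlled quantity ln p."""
    p = math.exp(lnp)
    for t in sorted(EXPOSURE_TIMES, reverse=True):  # longest t first: least gain
        g = p / t
        if g >= 1.0:
            return t, min(g, 31.0)                  # clamp to ov9712 range
    return min(EXPOSURE_TIMES), 1.0                 # very bright: shortest t, g = 1
```

For example, an ideal controlled quantity of ln(0.16) maps to t = 80 ms and g = 2.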
Fig. 8 shows pictures obtained by applying the automatic exposure method of the embodiment in different illumination scenes. Fig. 8(a) is an outdoor picture taken with the method; it is suitable for the human eye and shows no exposure overflow. Fig. 8(b) is an indoor picture taken with the method; it is suitable for the human eye and shows neither underexposure nor exposure overflow. Figs. 8(c) and 8(d) are photos of the same object taken under the same backlight conditions: Fig. 8(c), taken without the method, is too dark to recognise details and unsuitable for the human eye, while Fig. 8(d), taken with the method, is brighter than Fig. 8(c) and the details can be recognised. Figs. 8(e) and 8(f) are pictures of the same object taken under the same facing-the-light conditions: Fig. 8(e), taken without the method, is too bright to recognise details and unsuitable for the human eye, while Fig. 8(f), taken with the method, is darker than Fig. 8(e), the details can be recognised, and it is suitable for the human eye. The automatic exposure method of the embodiment suits various illumination scenes and obtains good image quality.
The automatic exposure method of the present invention meets the automatic exposure requirements of the video-processing front end in network camera applications, and the analysis results can be applied to segmentation of important targets, object recognition, adaptive video compression, content-sensitive video scaling, image retrieval, and applications such as security monitoring and military surveillance.
The above embodiments serve only to illustrate the present invention and do not limit it. Those of ordinary skill in the relevant technical field may make various changes and improvements without departing from the spirit and scope of the invention; therefore all equivalent technical schemes also belong to the scope of the invention, and the scope of patent protection of the present invention shall be defined by the claims.

Claims (6)

1. An automatic exposure method for judging the illumination scene based on a B-P neural network, characterized by comprising the following steps:
S1: obtaining an original image through a video capture system;
S2: dividing the original image into multiple regions according to the degree of attention paid to different parts of the picture, and assigning region numbers in order of the degree of attention;
S3: computing the mean image brightness of each divided region, and combining each region's number with the mean brightness of its region to obtain a luminance vector;
S4: designing the B-P neural network according to the number of regions into which the original image is divided, using the luminance vector as the input of the neural network to judge the illumination scene, and outputting the judgment result;
S5: computing the ideal brightness of the image according to the judgment result of the neural network;
S6: using the deviation between the actual brightness of the original image and the ideal brightness of S5 as the initial input of a PID algorithm, and using the PID algorithm to obtain the ideal controlled quantity required for the ideal brightness;
S7: from the ideal controlled quantity obtained in S6, obtaining the ideal exposure time t and the ideal analog gain coefficient g, and transferring the obtained t and g to the sensor of the video capture system to realize automatic exposure.
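The claimed loop S1-S7 can be sketched as one control iteration. Everything below is a stand-in placeholder under assumed interfaces: the horizontal-band partition, the classifier, the ideal-brightness rule, and the PID are not the patent's attention-based regions, trained B-P network, or tuned controller:

```python
import numpy as np

def luminance_vector(image, n_regions):
    """S2/S3 sketch: split the image into n_regions horizontal bands (one
    possible attention-based partition) and return each band's mean brightness."""
    bands = np.array_split(image, n_regions, axis=0)
    return np.array([b.mean() for b in bands])

def auto_exposure_step(image, classify, ideal_brightness, pid, n_regions=5):
    """One S1-S7 iteration: classify the scene, derive the ideal brightness,
    and let a PID controller turn the brightness error into ln(p), p = t*g."""
    x = luminance_vector(image, n_regions)   # S3: luminance vector
    scene = classify(x)                      # S4: B-P network output, 0 or 1
    y_ideal = ideal_brightness(x, scene)     # S5: ideal brightness
    error = y_ideal - image.mean()           # S6: brightness deviation
    ln_p = pid(error)                        # S6: controlled quantity ln(t*g)
    return ln_p                              # S7: split p into t and g via table
```

A caller would then look up the exposure-gain table with exp(ln_p) to obtain t and g for the sensor.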
2. The automatic exposure method for judging the illumination scene based on a B-P neural network according to claim 1, characterized in that, in step S4, the number of input-layer variables of the B-P neural network equals the number of regions into which the original image is divided, and the output formula of the designed B-P neural network is as follows:
y = f(X), y = 0, 1
where X is the vector of neural-network input data and y is the output of the neural network, y = 0 or 1; the B-P neural network outputs two possible judgment results: 0 represents normal illumination and 1 represents a special illumination condition.
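A minimal forward pass of such a two-class network can be sketched as follows; the layer sizes, sigmoid activations, and weights are assumed for illustration and are not the patent's trained B-P network:

```python
import numpy as np

def bp_classify(x, w1, b1, w2, b2):
    """Forward pass of a small B-P (back-propagation-trained) network:
    luminance vector in, hard 0/1 illumination-scene label out."""
    h = 1.0 / (1.0 + np.exp(-(w1 @ x + b1)))  # sigmoid hidden layer
    y = 1.0 / (1.0 + np.exp(-(w2 @ h + b2)))  # sigmoid output in (0, 1)
    return int(y.item() >= 0.5)               # 0: normal, 1: special illumination
```

Training by back-propagation would fit w1, b1, w2, b2 on labelled luminance vectors; only the thresholded forward pass is needed at run time.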
3. The automatic exposure method for judging the illumination scene based on a B-P neural network according to claim 1 or 2, characterized in that, in step S5, when the neural network outputs 0, the ideal brightness is taken as one half of the largest component of the luminance vector;
when the neural network outputs 1, the ideal brightness is calculated by the following formulas:
y_z = X * W^T / sum(W_i)
η = y_z / y_p - 1
y_l = y_mid / (1 + η·l)
where y_z is the graded brightness of the image subject, X is the luminance vector of the image, W is the grading luminance weight vector, y_p is the evaluation brightness of the picture, η is the luminance error coefficient of the image subject, l is the compensation coefficient, whose value ranges between 0 and 1, y_mid is one half of the largest component of the luminance vector, i.e. the image middle brightness, y_l is the calculated ideal brightness, W_i is the i-th component of the grading luminance weight vector W, and T denotes transposition.
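The claim-3 formulas (with y_p from claim 4) can be checked numerically. The weight vectors and compensation coefficient below are made-up illustrations, not the patent's calibrated W, W_z, or l:

```python
import numpy as np

def ideal_brightness(x, w, w_z, l=0.5):
    """Ideal brightness per claim 3 for the special-illumination case (output 1).
    x: luminance vector; w: grading luminance weight vector;
    w_z: evaluation luminance weight vector; l: compensation coefficient in [0, 1]."""
    y_z = x @ w / w.sum()            # graded brightness of the image subject
    y_p = x @ w_z                    # evaluation brightness of the picture (claim 4)
    eta = y_z / y_p - 1.0            # luminance error coefficient of the subject
    y_mid = x.max() / 2.0            # image middle brightness
    return y_mid / (1.0 + eta * l)   # ideal brightness y_l
```

With balanced weights the subject and evaluation brightness coincide, η is 0, and the ideal brightness reduces to the middle brightness y_mid, matching the normal-illumination rule.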
4. The automatic exposure method for judging the illumination scene based on a B-P neural network according to claim 3, characterized in that the evaluation brightness y_p of the picture is calculated by the following formula:
y_p = X * W_z^T
where y_p is the picture evaluation brightness, X is the luminance vector of the image, and W_z is the evaluation luminance weight vector.
5. The automatic exposure method for judging the illumination scene based on a B-P neural network according to any one of claims 1-4, characterized in that, in step S6, the controlled quantity is expressed by the following formula:
ln p = ln(tg)
where ln p is the controlled quantity, p = t·g, t is the automatic exposure time, and g is the analog gain coefficient.
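Working on ln p = ln(tg) lets the controller act in exposure stops rather than on raw sensor values. A minimal PID sketch on the log controlled quantity, with illustrative gains that are not the patent's tuning:

```python
import math

class LogExposurePID:
    """Minimal PID acting on the log controlled quantity ln(p), p = t*g.
    Gains kp, ki, kd and the initial ln_p are assumed, not the patent's values."""

    def __init__(self, kp=0.5, ki=0.1, kd=0.0, ln_p=math.log(10.0)):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.ln_p = ln_p        # current ln(t*g)
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, err):
        """err: ideal brightness minus measured brightness (normalized)."""
        self.integral += err
        d = err - self.prev_err
        self.prev_err = err
        self.ln_p += self.kp * err + self.ki * self.integral + self.kd * d
        return math.exp(self.ln_p)   # p = t*g, to be split via the exposure-gain table
```

Because the update is additive in ln p, a constant brightness error multiplies p by a constant factor each step, which suits the multiplicative nature of exposure and gain.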
6. The automatic exposure method for judging the illumination scene based on a B-P neural network according to any one of claims 1-5, characterized in that, in step S7, the exposure time t and the analog gain coefficient g are obtained by looking up the exposure-gain table.
CN201410198357.5A 2014-05-12 2014-05-12 An automatic exposure method for judging the illumination scene based on a B-P neural network Active CN103973991B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410198357.5A CN103973991B (en) 2014-05-12 2014-05-12 A kind of automatic explosion method judging light scene based on B P neutral net


Publications (2)

Publication Number Publication Date
CN103973991A true CN103973991A (en) 2014-08-06
CN103973991B CN103973991B (en) 2017-03-01

Family

ID=51242981

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410198357.5A Active CN103973991B (en) 2014-05-12 2014-05-12 A kind of automatic explosion method judging light scene based on B P neutral net

Country Status (1)

Country Link
CN (1) CN103973991B (en)



Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030052978A1 (en) * 2001-06-25 2003-03-20 Nasser Kehtarnavaz Automatic white balancing via illuminant scoring autoexposure by neural network mapping
CN101452575A (en) * 2008-12-12 2009-06-10 北京航空航天大学 Image self-adapting enhancement method based on neural net

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
XU PEIFENG: "Research on Auto-Focus and Auto-Exposure Algorithms Based on Image Processing", China Master's Theses Full-text Database, Engineering Science & Technology II *

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104363390A (en) * 2014-11-11 2015-02-18 广东中星电子有限公司 Lens vignetting compensation method and system
CN104320593B (en) * 2014-11-19 2016-02-24 湖南国科微电子股份有限公司 A kind of digital camera automatic exposure control method
CN104320593A (en) * 2014-11-19 2015-01-28 湖南国科微电子有限公司 Automatic exposure control method for digital photographic device
CN104754240A (en) * 2015-04-15 2015-07-01 中国电子科技集团公司第四十四研究所 Automatic exposure method and device for CMOS (complementary metal oxide semiconductor) image sensor
CN105611189A (en) * 2015-12-23 2016-05-25 北京奇虎科技有限公司 Automatic exposure parameter adjustment method and device and user equipment
CN108777768A (en) * 2018-05-31 2018-11-09 中国科学院西安光学精密机械研究所 A kind of fast automatic exposure regulating method based on calibration
CN110708469A (en) * 2018-07-10 2020-01-17 北京地平线机器人技术研发有限公司 Method and device for adapting exposure parameters and corresponding camera exposure system
CN110070009A (en) * 2019-04-08 2019-07-30 北京百度网讯科技有限公司 Road surface object identification method and device
CN110602411A (en) * 2019-08-07 2019-12-20 深圳市华付信息技术有限公司 Method for improving quality of face image in backlight environment
CN111770285A (en) * 2020-07-13 2020-10-13 浙江大华技术股份有限公司 Exposure brightness control method and device, electronic equipment and storage medium
CN111770285B (en) * 2020-07-13 2022-02-18 浙江大华技术股份有限公司 Exposure brightness control method and device, electronic equipment and storage medium
CN112788250A (en) * 2021-02-01 2021-05-11 青岛海泰新光科技股份有限公司 Automatic exposure control method based on FPGA
CN112788250B (en) * 2021-02-01 2022-06-17 青岛海泰新光科技股份有限公司 Automatic exposure control method based on FPGA
CN117793539A (en) * 2024-02-26 2024-03-29 浙江双元科技股份有限公司 Image acquisition method based on variable period and optical sensing device
CN117793539B (en) * 2024-02-26 2024-05-10 浙江双元科技股份有限公司 Image acquisition method based on variable period and optical sensing device

Also Published As

Publication number Publication date
CN103973991B (en) 2017-03-01

Similar Documents

Publication Publication Date Title
CN103973991A (en) Automatic exposure method for judging illumination scene on basis of B-P neural network
CN106875373B (en) Mobile phone screen MURA defect detection method based on convolutional neural network pruning algorithm
CN104036474B (en) A kind of Automatic adjustment method of brightness of image and contrast
CN111800585B (en) Intelligent lighting control system
CN109410129A (en) A kind of method of low light image scene understanding
CN108401154B (en) Image exposure degree non-reference quality evaluation method
CN111292264A (en) Image high dynamic range reconstruction method based on deep learning
CN102340673B (en) White balance method for video camera aiming at traffic scene
CN106454145A (en) Automatic exposure method with scene self-adaptivity
CN112614077A (en) Unsupervised low-illumination image enhancement method based on generation countermeasure network
CN107071308A (en) A kind of CMOS is quickly adjusted to as system and method
CN104504658A (en) Single image defogging method and device on basis of BP (Back Propagation) neural network
CN111402285B (en) Contour detection method based on visual mechanism dark edge enhancement
CN109255758A (en) Image enchancing method based on full 1*1 convolutional neural networks
CN109660736A (en) Method for correcting flat field and device, image authentication method and device
CN106385544A (en) Camera exposure adjustment method and apparatus
CN110288550A (en) The single image defogging method of confrontation network is generated based on priori knowledge guiding conditions
CN105718922B (en) Adaptive adjusting method and device for iris recognition
CN113643214B (en) Image exposure correction method and system based on artificial intelligence
CN110415653A (en) Backlight illumination regulating system and adjusting method and liquid crystal display device
CN107895350A (en) A kind of HDR image generation method based on adaptive double gamma conversion
CN112217988B (en) Photovoltaic camera motion blur self-adaptive adjusting method and system based on artificial intelligence
CN110602411A (en) Method for improving quality of face image in backlight environment
CN109143758A (en) The technology of automatic enhancing optical projection effect
Xiang et al. Artificial intelligence controller for automatic multispectral camera parameter adjustment

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant