Summary of the invention
In view of above problem, the invention provides a kind of method that produces high dynamic range images, use an original image, see through the brightness adjustment model that the neural network algorithm is trained and obtain and produce high dynamic range images.
Therefore, the invention provides the method for a kind of generation high dynamic range images (High Dynamic Range Image, HDR), the method comprises: load a brightness adjustment model, this brightness adjustment model is that application one neural network algorithm forms; Obtain an original image; Capture a pixel characteristic value, the First Eigenvalue on a first direction and the Second Eigenvalue on a second direction of this original image; And, according to this pixel characteristic value, this First Eigenvalue and this Second Eigenvalue of this original image, see through this brightness adjustment model and produce a high dynamic range images, wherein this pixel characteristic value of this original image utilizes following formula to calculate:
Wherein, C
1sum of all pixels, Y for the sum of all pixels of this pixel characteristic value of this original image, horizontal direction that N is this original image, vertical direction that M is this original image
ijfor this first direction i in this original image brightness value and N, M, i and j that reaches j pixel of this second direction is positive integer, wherein this First Eigenvalue of this original image utilizes following formula to calculate:
Wherein,
for this First Eigenvalue of this original image, number, the Y that x is the pixel of this first direction in this original image
ijfor this first direction i in this original image brightness value, Y that reaches j pixel of this second direction
(i+x) jfor the brightness value of this first direction i+x in this original image and j pixel of this second direction and i, j and x are positive integer,
Wherein this Second Eigenvalue of this original image utilizes following formula to calculate:
Wherein,
for this Second Eigenvalue of this original image, number, the Y that y is the pixel of this second direction in this original image
ijfor this first direction i in this original image brightness value, Y that reaches j pixel of this second direction
i (j+y)for the brightness value of this first direction x in this original image and j+y pixel of this second direction and i, j and y are positive integer.
Wherein, the direction of above-mentioned first direction and the direction of second direction are different, and the direction of first direction is horizontal direction, and the direction of second direction is vertical direction.
In addition, above-mentioned brightness adjustment model produces in external device (ED), comprising: load a plurality of frame training images; And the pixel characteristic value, the First Eigenvalue on first direction and the Second Eigenvalue on second direction that capture each frame training image, and see through neural network algorithm generation brightness adjustment model.
Wherein, the direction of above-mentioned first direction and the direction of second direction are different, and the direction of first direction is horizontal direction, and the direction of second direction is vertical direction.
In addition, the pixel characteristic value of each frame training image utilizes following formula to calculate:
Wherein, C
1sum of all pixels, Y for the sum of all pixels of the pixel characteristic value of each frame training image, horizontal direction that N is each frame training image, vertical direction that M is each frame training image
ijfor first direction i in each frame training image brightness value and N, M, i and j that reaches j pixel of second direction is positive integer.
In this, the First Eigenvalue of above-mentioned each frame training image utilizes following formula to calculate:
wherein,
for the First Eigenvalue, the x of each frame training image be the pixel of first direction in each frame training image number, Y
ijfor first direction i in each frame training image brightness value, Y that reaches j pixel of second direction
(i+x) jfor the brightness value of first direction i+x in each frame training image and j pixel of second direction and i, j and x are positive integer.
In addition, the Second Eigenvalue of above-mentioned each frame training image utilizes following formula to calculate:
wherein,
for Second Eigenvalue, the y of each frame training image be the pixel of second direction in each frame training image number, Y
ijfor first direction i in each frame training image brightness value, Y that reaches j pixel of second direction
i (j+y)for the brightness value of first direction x in each frame training image and j+y pixel of second direction and i, j and y are positive integer.
Here, above-mentioned neural network algorithm is back propagation network (Back-propagation Neural Network, BNN) algorithm, radial basis function (Radial Basis Function, RBF) or autonomous feature reflection network (Self-Organizing Map, SOM) algorithm one of them.
Therefore, the present invention also provides a kind of device that produces high dynamic range images, and this device comprises: load the device of a brightness adjustment model, this brightness adjustment model is that application one neural network algorithm forms; Obtain the device of an original image; Capture the device of a pixel characteristic value, the First Eigenvalue on a first direction and the Second Eigenvalue on a second direction of this original image; And, according to this pixel characteristic value, this First Eigenvalue and this Second Eigenvalue of this original image, see through the device that this brightness adjustment model produces a high dynamic range images, wherein this pixel characteristic value of this original image utilizes following formula to calculate:
Wherein, C
1sum of all pixels, Y for the sum of all pixels of this pixel characteristic value of this original image, horizontal direction that N is this original image, vertical direction that M is this original image
ijfor this first direction i in this original image brightness value and N, M, i and j that reaches j pixel of this second direction is positive integer,
Wherein this First Eigenvalue of this original image utilizes following formula to calculate:
Wherein,
for this First Eigenvalue of this original image, number, the Y that x is the pixel of this first direction in this original image
ijfor this first direction i in this original image brightness value, Y that reaches j pixel of this second direction
(i+x) jfor the brightness value of this first direction i+x in this original image and j pixel of this second direction and i, j and x are positive integer,
Wherein this Second Eigenvalue of this original image utilizes following formula to calculate:
Wherein,
for this Second Eigenvalue of this original image, number, the Y that y is the pixel of this second direction in this original image
ijfor this first direction i in this original image brightness value, Y that reaches j pixel of this second direction
i (j+y)for the brightness value of this first direction x in this original image and j+y pixel of this second direction and i, j and y are positive integer.
In this, the direction of above-mentioned first direction and the direction of second direction are different, and the direction of first direction is horizontal direction, and the direction of second direction is vertical direction.
In this, above-mentioned brightness adjustment model produces in external device (ED), comprising: load a plurality of frame training images; And the pixel characteristic value, the First Eigenvalue on first direction and the Second Eigenvalue on second direction that capture each frame training image, and see through neural network algorithm generation brightness adjustment model.
Wherein, the direction of above-mentioned first direction and the direction of second direction are different, and the direction of first direction is horizontal direction, and the direction of second direction is vertical direction.
In this, the pixel characteristic value of each frame training image utilizes following formula to calculate:
Wherein, C
1sum of all pixels, Y for the sum of all pixels of the pixel characteristic value of each frame training image, horizontal direction that N is each frame training image, vertical direction that M is each frame training image
ijfor first direction i in each frame training image brightness value and N, M, i and j that reaches j pixel of second direction is positive integer.
In addition, the First Eigenvalue of each frame training image utilizes following formula to calculate:
wherein,
for the First Eigenvalue, the x of each frame training image be the pixel of first direction in each frame training image number, Y
ijfor first direction i in each frame training image brightness value, Y that reaches j pixel of second direction
(i+x) jfor the brightness value of first direction i+x in each frame training image and j pixel of second direction and i, j and x are positive integer.
In this, the Second Eigenvalue of each frame training image utilizes following formula to calculate:
wherein,
for Second Eigenvalue, the y of each frame training image be the pixel of second direction in each frame training image number, Y
ijfor first direction i in each frame training image brightness value, Y that reaches j pixel of second direction
i (j+y)for the brightness value of first direction x in each frame training image and j+y pixel of second direction and i, j and y are positive integer.
Wherein, above-mentioned neural network algorithm is back propagation network (Back-propagation Neural Network, BNN) algorithm, radial basis function (Radial Basis Function, RBF) or autonomous feature reflection network (Self-Organizing, SOM) algorithm one of them.
According to method and the electronic installation of generation high dynamic range images provided by the present invention, can see through the neural network Algorithm for Training and produce the brightness adjustment model, and utilize the brightness adjustment model to be processed single image and produce high dynamic range images.And then can improve time and the storage area that needs to take multiple images, reduce the processing time of the synthetic single image of multiple images.
About feature of the present invention and practical application, hereby coordinate accompanying drawing to be described in detail as follows as most preferred embodiment.
Embodiment
According to the method for generation high dynamic range images of the present invention, be applied to have the electronic installation of image acquisition function.This method can see through in software or firmware program and be built in the storage device of electronic installation, then carries out built-in software or firmware program collocation image acquisition function is realized the method according to generation high dynamic range images of the present invention by the processor of electronic installation.In this, electronic installation can be the digital camera (DIGITAL CAMERA) of tool image acquisition function, the computing machine of tool image acquisition function, the mobile phone (Mobile Phone) of tool image acquisition function or personal digital assistant (the Personal Digital Assistant of tool image acquisition function, but not only be confined to above-mentioned electronic installation PDA) etc..
Please refer to shown in Fig. 3, it is the process flow diagram according to the method for the generation high dynamic range images of one embodiment of the invention.Flow process of the present invention comprises the following steps:
S100, loading brightness adjustment model, the brightness adjustment model is that the application class neural network algorithm forms;
S110, obtain original image (original image);
Pixel characteristic value, the First Eigenvalue on first direction and the Second Eigenvalue on second direction of S120, acquisition original image; And
S130, according to pixel characteristic value, the First Eigenvalue and the Second Eigenvalue of original image, see through the brightness adjustment model and produce high dynamic range images.
Wherein, the direction of the first direction in step S120 and the direction of second direction are different, and the direction of first direction is horizontal direction, and the direction of second direction is vertical direction.Though the direction of describing first direction in this is horizontal direction, the direction of second direction is vertical direction, but application is adjusted direction according to actual demand, as X axis intersects the direction of positive 45 degree and the direction that X axis intersects positive 135 degree, the direction and the X axis that intersect positive 30 degree as X axis intersect direction of positive 150 degree etc.Only the direction of the acquisition of the eigenwert of original image must with the direction consistent (being equidirectional) of the acquisition of the eigenwert of training image.
In addition, the pixel characteristic value of the original image in step S120 utilizes following formula to calculate:
Wherein, C
1sum of all pixels, Y for the sum of all pixels of the pixel characteristic value of original image, horizontal direction that N is original image, vertical direction that M is original image
ijfor first direction i in original image brightness value and N, M, i and j that reaches j pixel of second direction is positive integer.
In addition, the First Eigenvalue of the original image in step S120 utilizes following formula to calculate:
wherein,
for the First Eigenvalue of original image, number, the Y that x is the pixel of first direction in original image
ijfor first direction i in original image brightness value, Y that reaches j pixel of second direction
(i+x) jfor the brightness value of first direction i+x in original image and j pixel of second direction and i, j and x are positive integer.
In this, the Second Eigenvalue of the original image in step S120 is to utilize following formula to calculate:
wherein,
for the Second Eigenvalue of original image, number, the Y that y is the pixel of second direction in original image
ijfor first direction i in original image brightness value, Y that reaches j pixel of second direction
i (j+y)for the brightness value of first direction x in original image and j+y pixel of second direction and i, j and y are positive integer.
In addition, the described brightness adjustment model in step S100 produces in external device (ED).External device (ED) can be but be not limited to the computer installation of manufacturer or the computer installation in laboratory etc.Please refer to shown in Fig. 4, it is the process flow diagram according to the generation brightness adjustment model of one embodiment of the invention.The flow process that produces the brightness adjustment model comprises the following steps:
S200, load a plurality of frame training images; And
S210, the pixel characteristic value that captures each frame training image, the First Eigenvalue on first direction and the Second Eigenvalue on second direction, and see through neural network algorithm generation brightness adjustment model.
Wherein, the direction of the first direction described in step S210 and the direction of second direction are different, and the direction of first direction is horizontal direction, and the direction of second direction is vertical direction.Though the direction of describing first direction in this is horizontal direction, the direction of second direction is vertical direction, but application is adjusted direction according to actual demand, as X axis intersects the direction of positive 45 degree and the direction that X axis intersects positive 135 degree, the direction and the X axis that intersect positive 30 degree as X axis intersect direction of positive 150 degree etc.Only the direction of the acquisition of the eigenwert of original image must with the direction consistent (being equidirectional) of the acquisition of the eigenwert of training image.
In addition, the pixel characteristic value of each the frame training image described in step S210 utilizes following formula to calculate:
Wherein, C
1sum of all pixels, Y for the sum of all pixels of the pixel characteristic value of each frame training image, horizontal direction that N is each frame training image, vertical direction that M is each frame training image
ijfor first direction i in each frame training image brightness value and N, M, i and j that reaches j pixel of second direction is positive integer.
In this, the First Eigenvalue of each the frame training image described in step S210 utilizes following formula to calculate:
wherein,
for the First Eigenvalue, the x of each frame training image be the pixel of first direction in each frame training image number, Y
ijfor first direction i in each frame training image brightness value, Y that reaches j pixel of second direction
(i+x) jfor the brightness value of first direction i+x in each frame training image and j pixel of second direction and i, j and x are positive integer.
In addition, the Second Eigenvalue of each the frame training image described in step S210 is to utilize following formula to calculate:
wherein,
for Second Eigenvalue, the y of each frame training image be the pixel of second direction in each frame training image number, Y
ijfor first direction i in each frame training image brightness value, Y that reaches j pixel of second direction
i (j+y)for the brightness value of first direction x in each frame training image and j+y pixel of second direction and i, j and y are positive integer.
In this, above-mentioned neural network algorithm can be back propagation network (Back-propagation Neural Network, BNN) algorithm, radial basis function (Radial Basis Function, RBF) or autonomous feature reflection network (Self-Organizing Map, SOM) algorithm one of them.
In addition, please refer to shown in Fig. 5, it is the configuration diagram according to the electronic installation of the generation high dynamic range images of another embodiment of the present invention.Electronic installation 30 comprises: storage element 32, processing unit 34 and output unit 36.Wherein, storage element 32 stores original image 322, storage element 32 can be but be not limited to random access memory (Random Access Memory, RAM), dynamic RAM (Dynamic Random Access Memory, DRAM) or wherein arbitrary of Synchronous Dynamic Random Access Memory (Synchronous Dynamic Random Access Memory, SDRAM).
Processing unit 34 connects storage element 32, and processing unit 34 can comprise: brightness adjustment model 344, eigenwert acquisition unit 342 and brightness adjustment program 346.Pixel characteristic value, the First Eigenvalue on first direction and the Second Eigenvalue on second direction of eigenwert acquisition unit 342 acquisition original images 322.Brightness adjustment model 344 is that the application class neural network algorithm forms.Brightness adjustment program 346, according to pixel characteristic value, the First Eigenvalue and the Second Eigenvalue of original image 322, sees through brightness adjustment model 344 and produces high dynamic range images.Processing unit 34 can be but be not limited to central processing unit (CPU), microprocessor (Micro Control Unit, MCU).Output unit 36 connects processing unit 34, and output unit 36 can be shown in the high dynamic range images of generation on the screen of electronic installation 30.
In this, the direction of above-mentioned first direction and the direction of second direction are different, and the direction of first direction is horizontal direction, and the direction of second direction is vertical direction.Though the direction of describing first direction in this is horizontal direction, the direction of second direction is vertical direction, but application is adjusted direction according to actual demand, as X axis intersects the direction of positive 45 degree and the direction that X axis intersects positive 135 degree, the direction and the X axis that intersect positive 30 degree as X axis intersect direction of positive 150 degree etc.Only the direction of the acquisition of the eigenwert of original image must with the direction consistent (being equidirectional) of the acquisition of the eigenwert of training image.
In addition, the pixel characteristic value of original image 322 utilizes following formula to calculate:
Wherein, C
1sum of all pixels, Y for the sum of all pixels of the pixel characteristic value of original image 322, horizontal direction that N is original image 322, vertical direction that M is original image 322
ijfor first direction i in original image 322 brightness value and N, M, i and j that reaches j pixel of second direction is positive integer.
In addition, the First Eigenvalue of original image 322 is to utilize following formula to calculate:
wherein,
for the First Eigenvalue of original image 322, number, the Y that x is the pixel of first direction in original image 322
ijfor first direction i in original image 322 brightness value, Y that reaches j pixel of second direction
(i+x) jfor the brightness value of first direction i+x in original image 322 and j pixel of second direction and i, j and x are positive integer.
Wherein, the Second Eigenvalue of original image 322 is to utilize following formula to calculate:
wherein,
for the Second Eigenvalue of original image 322, number, the Y that y is the pixel of second direction in original image 322
ijfor first direction i in original image 322 brightness value, Y that reaches j pixel of second direction
i (j+y)for the brightness value of first direction x in original image 322 and j+y pixel of second direction and i, j and y are positive integer.
In this, above-mentioned brightness adjustment model produces in external device (ED).External device (ED) can be but be not limited to the computer installation of manufacturer or the computer installation in laboratory etc.Please refer to shown in Fig. 6, it is the process flow diagram according to the generation brightness adjustment model of another embodiment of the present invention.The flow process that produces the brightness adjustment model comprises the following steps:
S300, load a plurality of frame training images; And
S310, the pixel characteristic value that captures each frame training image, the First Eigenvalue on first direction and the Second Eigenvalue on second direction, and see through neural network algorithm generation brightness adjustment model.
Wherein, the direction of the first direction described in step S310 and the direction of second direction are different, and the direction of first direction is horizontal direction, and the direction of second direction is vertical direction.Though the direction of describing first direction in this is horizontal direction, the direction of second direction is vertical direction, but application is adjusted direction according to actual demand, as X axis intersects the direction of positive 45 degree and the direction that X axis intersects positive 135 degree, the direction and the X axis that intersect positive 30 degree as X axis intersect direction of positive 150 degree etc.Only the direction of the acquisition of the eigenwert of original image must with the direction consistent (being equidirectional) of the acquisition of the eigenwert of training image.
In addition, the pixel characteristic value of each the frame training image described in step S310 utilizes following formula to calculate:
Wherein, C
1sum of all pixels, Y for the sum of all pixels of the pixel characteristic value of each frame training image, horizontal direction that N is each frame training image, vertical direction that M is each frame training image
ijfor first direction i in each frame training image brightness value and N, M, i and j that reaches j pixel of second direction is positive integer.
In this, the First Eigenvalue of each the frame training image described in step S310 is to utilize following formula to calculate:
wherein,
for the First Eigenvalue, the x of each frame training image be the pixel of first direction in each frame training image number, Y
ijfor first direction i in each frame training image brightness value, Y that reaches j pixel of second direction
(i+x) jfor the brightness value of first direction i+x in each frame training image and j pixel of second direction and i, j and x are positive integer.
In addition, the Second Eigenvalue of each the frame training image described in step S310 utilizes following formula to calculate:
wherein,
for Second Eigenvalue, the y of each frame training image be the pixel of second direction in each frame training image number, Y
ijfor first direction i in each frame training image brightness value, Y that reaches j pixel of second direction
i (j+y)for the brightness value of first direction x in each frame training image and j+y pixel of second direction and i, j and y are positive integer.
In this, above-mentioned neural network algorithm can be back propagation network (Back-propagation Neural Network, BNN) algorithm, radial basis function (Radial Basis Function, RBF) or autonomous feature reflection network (Self-Organizing Map, SOM) algorithm one of them.
In addition, please refer to shown in Fig. 7, it is the schematic diagram according to the back propagation network algorithm of embodiments of the invention.Back propagation network 40 comprises: input layer 42, hidden layer 44 and output layer 46.Wherein each frame training image has M*N pixel, and each pixel has three eigenwerts (pixel characteristic value, the First Eigenvalue and Second Eigenvalue).Input layer imports respectively the eigenwert of the pixel of training image into, so the node (X of input layer 42
1, X
2, X
3..., X
α) the number summation be α=3*M*N.Node (the P of hidden layer 44
1, P
2, P
3..., P
β) number be β, the node (Y of output layer 46
1, Y
2, Y
3..., Y
γ) number be γ, and α>=β>=γ.After back propagation network Algorithm for Training and judgement convergence, can obtain the brightness adjustment model in all training images.Wherein between the input layer 42 of brightness adjustment model and hidden layer 44, obtain first group of weighted value W
α β, obtain second group of weighted value W between hidden layer 44 and output layer 46
β γ.
Wherein, the value of each node of hidden
layer 44 is utilized following formula to calculate and is obtained:
wherein, P
jvalue, X for j node of hidden
layer 44
ivalue, W for i node of
input layer 42
ijfor weighted value, the b between j node of i node of
input layer 42 and hidden
layer 44
jfor the side-play amount of j node of hidden
layer 44 and α, i and j are positive integer.
In addition, the value of each node of
output layer 46 is utilize following formula to calculate and obtain:
wherein, Y
kvalue, P for k node of
output layer 46
jvalue, W for j node of hidden
layer 44
jkfor weighted value, the c between k node of j node of hidden
layer 44 and
output layer 46
kfor the side-play amount of k node of
output layer 46 and β, j and k are positive integer.
In addition, the judgement convergence is to utilize Mean Square Error value (Mean Squared Error, MSE) to calculate and obtain:
Wherein, the quantity summation that λ is training image, number of nodes summation, the T that γ is output layer
k sbe target output value, the Y of k output node of s training image
k sthe inference output valve and λ, γ, s and the k that are k output node of s training image are positive integer.
Although the present invention discloses as above with aforesaid preferred embodiment; so it is not in order to limit the present invention; any those skilled in the art; without departing from the spirit and scope of the present invention; therefore when doing a little change and retouching, scope of patent protection of the present invention must be looked this instructions appending claims scope person of defining and is as the criterion.