CN114359200A - Image definition evaluation method based on pulse coupling neural network and terminal equipment - Google Patents

Publication number: CN114359200A
Authority: CN (China)
Legal status: Granted
Application number: CN202111629067.8A
Original language: Chinese (zh)
Other versions: CN114359200B
Inventors: 陈韬宇, 王华伟, 刘庆, 常三三
Current and original assignee: XiAn Institute of Optics and Precision Mechanics of CAS
Application CN202111629067.8A filed by XiAn Institute of Optics and Precision Mechanics of CAS
Publication of CN114359200A; application granted; publication of CN114359200B
Legal status: Active

Classifications

    • Y02P 90/30: Computing systems specially adapted for manufacturing

Landscapes

  • Image Processing (AREA)

Abstract

The invention relates to an image processing method, in particular to an image definition evaluation method based on a pulse coupled neural network and a terminal device, and applies the pulse coupled neural network to solve the technical problem of low image definition. The relevant parameters of the feedback input domain, the coupling connection domain and the pulse generation domain of the pulse coupled neural network are set in advance; the network is iterated cyclically to calculate the image definition evaluation function value for each image in the sequence obtained during focusing, and a curve is drawn to represent the definition variation trend of the image sequence. The method can be adapted to different scene types and considers only the grey-level information of the image, so operation accuracy is improved and interference from other factors is avoided; real-time performance is good, and computational efficiency on the terminal equipment is high enough to meet engineering requirements.

Description

Image definition evaluation method based on pulse coupling neural network and terminal equipment
Technical Field
The invention relates to an image processing method, in particular to an image definition evaluation method based on a pulse coupling neural network and a terminal device.
Background
During focusing, an optical system may fail to reach the focal position for various reasons, and the focal position of a system that has completed focusing may drift, so that imaging quality deteriorates through defocus; an auto-focusing function is therefore necessary. Auto-focusing technology is widely used in many fields, and current research concentrates on auto-focusing methods based on image processing, whose main idea is that, after the defocus information of the optical system has been obtained, a drive motor performs focusing control according to that information. The image definition evaluation function characterises the degree of focus of the optical system, and computing the image definition quickly and accurately with such a function is the key to the image processing. At the same time, because of inherent defects of the mechanical structure, the lens may not reach the optimal focal position exactly, or the image quality may still be poor even when the lens is at the focal position, so an image definition evaluation method is needed to improve the final imaging quality.
Disclosure of Invention
The invention aims to solve the technical problem of low image definition by means of a pulse coupled neural network, and provides an image definition evaluation method based on a pulse coupled neural network, together with a terminal device.
In order to solve the technical problems, the technical scheme adopted by the invention is as follows:
an image definition evaluation method based on a pulse coupled neural network is characterized by comprising the following steps:
step 1: constructing a pulse coupling neural network;
the pulse coupling neural network comprises a feedback input domain, a coupling connection domain and a pulse generation domain;
the feedback input domain comprises an input matrix F, the coupling connection domain comprises a coupling connection input matrix L, and the pulse generation domain comprises a dynamic threshold matrix E, a neuron internal activity item matrix U and a pulse output matrix Y;
step 2: preprocessing an image and setting initialization parameters;
2.1) obtaining a gray matrix of the image, and carrying out normalization processing on the gray matrix to obtain a stimulation input matrix S;
the size i x j of the stimulation input matrix S is determined according to the resolution of the image, i is the ith row of the stimulation input matrix S, and j is the jth column of the stimulation input matrix S;
2.2) assigning the values of the stimulus input matrix S to an input matrix F;
2.3) initializing a coupling connection input matrix L, a neuron internal activity item matrix U, a pulse output matrix Y and a global feature matrix Q in the pulse coupling neural network into a zero matrix;
2.4) setting a feedback input domain coefficient matrix M and a coupling connection domain coefficient matrix W in the pulse coupling neural network;
2.5) calculating a dynamic threshold matrix E;
convolving the input matrix F with a Laplacian operator, which returns a matrix of the same size as the input matrix F; subtracting the two to obtain the initial dynamic threshold matrix E;
2.6) screening the maximum value T in the input matrix F;
2.7) setting an amplification factor and a decay time constant;
setting an amplification factor vF of the feedback input domain, an amplification factor vL of the coupling connection domain, an amplification factor vE of the pulse generation domain, an attenuation time constant αF of the input matrix F, an attenuation time constant αL of the coupling connection input matrix L and an attenuation time constant αE of the dynamic threshold matrix E; wherein vF, vL and vE are natural numbers greater than or equal to 1, and 0 < αF < 1, 0 < αL < 1, 0 < αE < 1;
2.8) setting a connection coefficient β of the coupling connection input matrix L to the neuron internal activity item matrix U, where β is the proportional relation between the neuron internal activity item matrix U and the coupling connection input matrix L, with 0 < β < 1;
2.9) setting the cycle index n, where 1 ≤ n ≤ N and N is the upper limit of the number of cycles; setting the initial value of the cycle index to n = 1;
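As a minimal illustration, steps 2.1 to 2.9 can be sketched in Python with NumPy and SciPy. The kernel values chosen for M and W and the concrete parameter values (vF, αF, β, N and so on) are illustrative assumptions: the patent fixes only their ranges and the 3 × 3 kernel size, and `init_pcnn` is a hypothetical helper name.

```python
import numpy as np
from scipy.ndimage import laplace

def init_pcnn(gray):
    """Steps 2.1-2.9: preprocess the image and initialise the network state."""
    gray = gray.astype(np.float64)
    S = (gray - gray.min()) / (gray.max() - gray.min() + 1e-12)  # 2.1 normalised stimulus
    F = S.copy()                                                 # 2.2 feedback input
    L = np.zeros_like(S)                                         # 2.3 coupling input
    U = np.zeros_like(S)                                         #     internal activity
    Y = np.zeros_like(S)                                         #     pulse output
    Q = np.zeros_like(S)                                         #     global feature
    M = W = np.array([[0.5, 1.0, 0.5],                           # 2.4 illustrative 3x3 kernels
                      [1.0, 0.0, 1.0],                           #     (only the size is fixed)
                      [0.5, 1.0, 0.5]])
    E = F - laplace(F)                                           # 2.5 Laplacian-based threshold
    T = F.max()                                                  # 2.6 maximum of F
    params = dict(vF=1, vL=1, vE=1,                              # 2.7 amplification (>= 1)
                  aF=0.3, aL=0.3, aE=0.2,                        #     decay constants in (0, 1)
                  beta=0.1,                                      # 2.8 linking coefficient in (0, 1)
                  N=30)                                          # 2.9 iteration cap
    return S, F, L, U, Y, Q, M, W, E, T, params
```

A kernel weighting the four edge neighbours more than the corners is a common PCNN choice; any 3 × 3 weight matrix consistent with the text would do.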
Step 3: based on the pulse coupled neural network, calculating the dynamic threshold matrix E(n), the neuron internal activity item matrix U(n) and the pulse output matrix Y(n);
Step 4: judging the result of the cycle
4.1) when U(n) ≤ E(n), Y(n) is the zero matrix; the cycle ends and step 5 is executed;
when U(n) > E(n), judge whether n equals N;
if n ≠ N, set n = n + 1 and return to step 3;
if n = N, the loop ends and step 5 is executed.
Step 5: outputting the result
5.1) computing the global feature matrix Q by accumulating the pulse outputs of all cycles:
Q = Y(1) + Y(2) + … + Y(n)
5.2) standardizing the global feature matrix Q obtained in the step 5.1 and outputting the standardized global feature matrix Q;
5.3) calculating the sum of all elements in the standardized global feature matrix Q obtained in the step 5.2, namely the gray sum of the image corresponding to the matrix, and taking the sum as an image definition evaluation function value.
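A short sketch of steps 5.2 and 5.3, assuming min-max normalisation for the "standardisation" step (the patent does not spell out the normalisation formula, so this convention and the helper name `sharpness_value` are assumptions):

```python
import numpy as np

def sharpness_value(Q):
    """Steps 5.2-5.3: normalise the accumulated global feature matrix Q
    and return the sum of its elements as the definition evaluation value."""
    q_min, q_max = Q.min(), Q.max()
    Q_norm = (Q - q_min) / (q_max - q_min + 1e-12)  # 5.2 min-max normalisation to [0, 1]
    return float(Q_norm.sum())                      # 5.3 grey sum = evaluation value
```

For Q = [[0, 2], [4, 6]] the normalised entries are 0, 1/3, 2/3 and 1, so the evaluation value is approximately 2.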
Further, in step 3, based on the pulse coupled neural network, the dynamic threshold matrix E(n), the neuron internal activity item matrix U(n) and the pulse output matrix Y(n) are calculated as follows:
3.1) calculating the convolution of the pulse output matrix Y(n−1) with the feedback input domain coefficient matrix M, and storing it as the intermediate matrix J(n); when n = 1, Y(n−1), that is, Y(0), represents the initial value of the pulse output matrix Y;
3.2) calculating the convolution of the pulse output matrix Y(n−1) with the coupling connection domain coefficient matrix W, and storing it as the intermediate matrix K(n);
3.3) calculating the input matrix F(n) = exp(−αF) × F(n−1) + vF × J(n−1)
in the formula:
F(n−1) is the value of F(n) in the previous cycle; when n = 1, F(n−1), i.e. F(0), represents the initial value of the input matrix F;
J(n−1) is the value of J(n) in the previous cycle; when n = 1, J(n−1), i.e. J(0), represents the initial value of the intermediate matrix J, which is a zero matrix;
3.4) calculating the coupling connection input matrix L(n) = exp(−αL) × L(n−1) + vL × K(n−1)
in the formula:
L(n−1) is the value of L(n) in the previous cycle; when n = 1, L(n−1), i.e. L(0), represents the initial value of the coupling connection input matrix L;
K(n−1) is the value of K(n) in the previous cycle; when n = 1, K(n−1), i.e. K(0), represents the initial value of the intermediate matrix K, which is a zero matrix;
3.5) calculating the dynamic threshold matrix E(n) = exp(−αE) × E(n−1) + vE × Y(n−1)
in the formula:
E(n−1) is the value of E(n) in the previous cycle; when n = 1, E(n−1), i.e. E(0), represents the initial value of the dynamic threshold matrix E;
Y(n−1) is the value of Y(n) in the previous cycle;
3.6) calculating the components Uij(n) of the neuron internal activity item matrix U(n) by the formula:
Uij(n) = Fij(n) × (1 + β × Lij(n))
in the formula:
Uij(n) is the element in row i, column j of U(n);
Fij(n) is the element in row i, column j of F(n);
Lij(n) is the element in row i, column j of L(n);
3.7) calculating the pulse output matrix Y(n) = (lnT − (n−1) × αE) × (U(n) − E(n)).
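One cycle of step 3 can be sketched as follows. The convolution boundary mode and the parameter container `p` are assumptions, and the decay term in 3.7 is taken as (n−1) × αE, a hedged reading of the printed formula:

```python
import numpy as np
from scipy.ndimage import convolve

def pcnn_step(F, L, E, Y, J, K, M, W, T, n, p):
    """One cycle of step 3. p holds vF, vL, vE, aF, aL, aE and beta
    (steps 2.7-2.8); T is the maximum found in step 2.6."""
    J_new = convolve(Y, M, mode='constant')                    # 3.1: J(n), used next cycle
    K_new = convolve(Y, W, mode='constant')                    # 3.2: K(n), used next cycle
    F_new = np.exp(-p['aF']) * F + p['vF'] * J                 # 3.3 with J(n-1)
    L_new = np.exp(-p['aL']) * L + p['vL'] * K                 # 3.4 with K(n-1)
    E_new = np.exp(-p['aE']) * E + p['vE'] * Y                 # 3.5 with Y(n-1)
    U_new = F_new * (1.0 + p['beta'] * L_new)                  # 3.6
    Y_new = (np.log(T) - (n - 1) * p['aE']) * (U_new - E_new)  # 3.7
    return F_new, L_new, E_new, U_new, Y_new, J_new, K_new
```

Note that 3.3 and 3.4 use J(n−1) and K(n−1) from the previous cycle, which is why the freshly computed J_new and K_new are only returned for the next call.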
Further, the method also comprises step 6: repeating steps 1 to 5.3 for each image of an image sequence to obtain the corresponding image definition evaluation function values, and drawing a curve to represent the definition variation trend of the image sequence.
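Step 6 amounts to mapping the full steps 1-5 pipeline over a focusing sequence. A sketch, where `evaluate_sharpness` is a hypothetical callable implementing steps 1 to 5.3 for a single image:

```python
import numpy as np

def sharpness_trend(frames, evaluate_sharpness):
    """Step 6: apply the steps 1-5 pipeline (passed in as the
    hypothetical `evaluate_sharpness` callable) to every frame of a
    focusing sequence; the returned list is the data for the curve."""
    return [evaluate_sharpness(frame) for frame in frames]
```

Plotting the returned list against the frame index (for example with matplotlib) gives the definition-trend curve described in step 6; its peak marks the best-focused frame.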
Further, in step 2.3), the feedback input domain coefficient matrix M and the coupling connection domain coefficient matrix W are both 3 × 3 in size.
Further, the input of the feedback input domain is the pulse output of the previous cycle, and its output is the feedback input of the current cycle; the input of the coupling connection domain is the pulse output of the peripheral neurons in the previous cycle, and its output is the coupling connection input of the current cycle; the input of the pulse generation domain is the internal activity item intensity determined by the current feedback input and coupling connection input, and its output is the current pulse output together with the dynamic threshold that determines the output intensity.
The invention also provides a terminal device, which comprises a memory, a processor and a computer program stored in the memory and capable of running on the processor, and is characterized in that: the steps of the above method are implemented when the processor executes the computer program.
Compared with the prior art, the technical scheme of the invention has the beneficial effects that:
the invention relates to an image definition evaluation method based on a pulse coupled neural network, which applies the pulse coupled neural network to the field of image definition evaluation and automatic focusing; the network parameters of the pulse coupling neural network are set in advance, multi-sample long-time training and parameter adjustment are not needed, the problem of image definition evaluation is solved, the method can be directly applied to engineering, and a large amount of engineering time is saved; compared with the traditional image processing method, the method is more diversified, and realizes multiple functions: the pulse coupling neural network can be independently used for image enhancement to improve the image quality, can also be independently used for image definition evaluation, and can also be combined with the image enhancement and the image definition evaluation; in addition, by modifying the network parameters of the pulse coupling neural network, the method can be suitable for different scene types, only the gray information of the image is concerned, the operation accuracy can be improved, the interference of other factors is avoided, the real-time performance is good, the calculation efficiency is high when the method runs on hardware, and the processing speed can meet the engineering requirements.
Drawings
Fig. 1 is a schematic diagram of a model loop iteration structure of a pulse coupled neural network in the image sharpness evaluation method based on the pulse coupled neural network.
The reference numbers in the figures are:
1-feedback input domain, 2-coupling connection domain, 3-pulse generation domain.
Detailed Description
The technical solution of the present invention will be clearly and completely described below with reference to the embodiments of the present invention and the accompanying drawings, and it is obvious that the described embodiments do not limit the present invention.
The pulse coupled neural network (PCNN) is a quantitative description of the signal-transmission characteristics of neurons in the mammalian visual cortex. Its biological characteristics match the hysteresis and exponential decay with which human eyes perceive brightness change, together with the persistence of vision, and these characteristics are often applied in image processing engineering such as image segmentation and edge extraction; by changing the network structure and parameters, functions such as image enhancement, filtering and fusion can also be realised.
The pulse coupled neural network comprises three parts: a feedback input domain 1, a coupling connection domain 2 and a pulse generation domain 3. The input of the feedback input domain 1 is the pulse output of the previous cycle, and its output is the feedback input of the current cycle; the input of the coupling connection domain 2 is the pulse output of the peripheral neurons in the previous cycle, and its output is the coupling connection input of the current cycle; the input of the pulse generation domain 3 is the internal activity item intensity determined by the current feedback input and coupling connection input, and its output is the current pulse output together with the dynamic threshold that determines the output intensity.
The image processing model of the pulse coupled neural network consists of a two-dimensional single-layer array of pulse coupled neurons. The number of neurons equals the number of pixels, each neuron corresponding to one pixel. Each neuron sits at the centre of a 3 × 3 feedback input domain coefficient matrix M and coupling connection domain coefficient matrix W (both are weight connection matrices); the adjacent pixels correspond to the neighbouring neurons within M and W, and the connection weight between a neuron and each of its neighbours varies, being embodied in the values of the weight connection matrices.
The model of the pulse coupled neural network evolved from a biological visual neuron model and is sensitive to darker areas, i.e. areas with lower grey values. Before image processing, in order to better highlight the edge features of the dark parts of an image, the grey-level difference between image pixels is often artificially enlarged and the image edges are enhanced so as to highlight the feature information of the image. Therefore the initial value of the dynamic threshold matrix E needs to be modified: the edges of the stimulation input matrix S are enhanced by filtering with a Laplacian operator, and the image is then inverted. The originally dark areas of the image become bright and their grey values become large; this result is assigned to the dynamic threshold matrix E, so the threshold of dark areas is raised, their decay time is longer, and they receive more processing during the loop iteration.
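The threshold initialisation described above might be sketched as follows; the exact inversion convention is not spelled out in the text, so subtracting the sharpened image from its maximum is an assumption, as is the helper name:

```python
import numpy as np
from scipy.ndimage import laplace

def init_threshold(F):
    """Threshold initialisation described above: sharpen F with a
    Laplacian, then invert so dark regions get a high threshold and
    therefore decay longer during the iteration. The inversion
    convention (maximum minus sharpened image) is an assumption."""
    sharpened = F - laplace(F)          # edge-enhanced image (F minus its Laplacian)
    E0 = sharpened.max() - sharpened    # phase inversion: dark areas become bright
    return E0
```

On a uniform image the Laplacian vanishes, so the initial threshold is zero everywhere; structure in F raises the threshold in the originally dark regions.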
After the stimulation input matrix S is generated, the neurons are activated: each neuron receives the signals associated with the input matrix F and the coupling connection input matrix L to generate the neuron internal activity item matrix U. Only when the neuron internal activity item matrix U is larger than the dynamic threshold matrix E is the neuron activated to generate the pulse output matrix Y, whose intensity is proportional to the difference between U and E; the running accumulation of the pulse output matrix Y is the global feature matrix Q. Each neuron is related to its peripheral neurons in the previous cycle: the pulse output matrix Y of the previous cycle acts on adjacent neurons through the feedback input domain coefficient matrix M and the coupling connection domain coefficient matrix W to generate a new input matrix F and coupling connection input matrix L, and the loop iteration behaves like a pulse function. After several pulse cycles, the neuron internal activity item matrix U is no longer larger than the dynamic threshold matrix E, the pulse output matrix Y is no longer excited, and the loop ends; the global feature matrix Q is then the enhanced image, analogous to the image seen by human eyes after processing by the pulse neural network.
When a lens focuses on the same target image (or video), the image at the in-focus position has richer dark-part detail; after inversion its grey values are larger, more cycles are needed, and the surrounding neurons are activated to generate pulses, so the accumulated global feature matrix Q obtains a larger grey value. This feature information can be extracted by computing the grey sum of the global feature matrix Q of the enhanced image as a representation of the image definition. The enhanced image and the definition evaluation function value are output for the next stage of operation, and automatic focusing is realised in combination with a control strategy. The specific method is as follows:
an image definition evaluation method based on a pulse coupled neural network comprises the following steps:
step 1: constructing a pulse coupling neural network;
the pulse coupling neural network comprises a feedback input domain 1, a coupling connection domain 2 and a pulse generation domain 3;
the feedback input domain 1 comprises an input matrix F, the coupling connection domain 2 comprises a coupling connection input matrix L, and the pulse generation domain 3 comprises a dynamic threshold matrix E, a neuron internal activity item matrix U and a pulse output matrix Y;
step 2: preprocessing an image and setting initialization parameters;
2.1) obtaining a gray matrix of the image, and carrying out normalization processing on the gray matrix to obtain a stimulation input matrix S;
the size i x j of the stimulation input matrix S is determined according to the resolution of the image, i is the ith row of the stimulation input matrix S, and j is the jth column of the stimulation input matrix S;
2.2) assigning the values of the stimulus input matrix S to an input matrix F;
2.3) initializing a coupling connection input matrix L, a neuron internal activity item matrix U, a pulse output matrix Y and a global feature matrix Q in the pulse coupling neural network into a zero matrix; wherein, the feedback input domain coefficient matrix M and the coupling connection domain coefficient matrix W are both 3 × 3;
2.4) setting a feedback input domain coefficient matrix M and a coupling connection domain coefficient matrix W in the pulse coupling neural network;
2.5) calculating a dynamic threshold matrix E;
convolving the input matrix F with a Laplacian operator, which returns a matrix of the same size as the input matrix F; subtracting the two to obtain the initial dynamic threshold matrix E;
2.6) screening the maximum value T in the input matrix F;
2.7) setting an amplification factor and a decay time constant;
setting an amplification factor vF of the feedback input domain 1, an amplification factor vL of the coupling connection domain 2, an amplification factor vE of the pulse generation domain 3, an attenuation time constant αF of the input matrix F, an attenuation time constant αL of the coupling connection input matrix L and an attenuation time constant αE of the dynamic threshold matrix E; wherein vF, vL and vE are natural numbers greater than or equal to 1, and 0 < αF < 1, 0 < αL < 1, 0 < αE < 1;
2.8) setting a connection coefficient β of the coupling connection input matrix L to the neuron internal activity item matrix U, where β is the proportional relation between the neuron internal activity item matrix U and the coupling connection input matrix L, with 0 < β < 1;
2.9) setting the cycle index n, where 1 ≤ n ≤ N and N is the upper limit of the number of cycles; setting the initial value of the cycle index to n = 1;
Step 3: based on the pulse coupled neural network, calculating the dynamic threshold matrix E(n), the neuron internal activity item matrix U(n) and the pulse output matrix Y(n);
3.1) calculating the convolution of the pulse output matrix Y(n−1) with the feedback input domain coefficient matrix M, and storing it as the intermediate matrix J(n); when n = 1, Y(n−1), that is, Y(0), represents the initial value of the pulse output matrix Y;
3.2) calculating the convolution of the pulse output matrix Y(n−1) with the coupling connection domain coefficient matrix W, and storing it as the intermediate matrix K(n);
3.3) calculating the input matrix F(n) = exp(−αF) × F(n−1) + vF × J(n−1)
in the formula:
F(n−1) is the value of F(n) in the previous cycle; when n = 1, F(n−1), i.e. F(0), represents the initial value of the input matrix F;
J(n−1) is the value of J(n) in the previous cycle; when n = 1, J(n−1), i.e. J(0), represents the initial value of the intermediate matrix J, which is a zero matrix;
3.4) calculating the coupling connection input matrix L(n) = exp(−αL) × L(n−1) + vL × K(n−1)
in the formula:
L(n−1) is the value of L(n) in the previous cycle; when n = 1, L(n−1), i.e. L(0), represents the initial value of the coupling connection input matrix L;
K(n−1) is the value of K(n) in the previous cycle; when n = 1, K(n−1), i.e. K(0), represents the initial value of the intermediate matrix K, which is a zero matrix;
3.5) calculating the dynamic threshold matrix E(n) = exp(−αE) × E(n−1) + vE × Y(n−1)
in the formula:
E(n−1) is the value of E(n) in the previous cycle; when n = 1, E(n−1), i.e. E(0), represents the initial value of the dynamic threshold matrix E;
Y(n−1) is the value of Y(n) in the previous cycle;
3.6) calculating the components Uij(n) of the neuron internal activity item matrix U(n) by the formula:
Uij(n) = Fij(n) × (1 + β × Lij(n))
in the formula:
Uij(n) is the element in row i, column j of U(n);
Fij(n) is the element in row i, column j of F(n);
Lij(n) is the element in row i, column j of L(n);
3.7) calculating the pulse output matrix Y(n) = (lnT − (n−1) × αE) × (U(n) − E(n)).
Step 4: judging the result of the cycle
4.1) when U(n) ≤ E(n), Y(n) is the zero matrix; the cycle ends and step 5 is executed;
when U(n) > E(n), judge whether n equals N;
if n ≠ N, set n = n + 1 and return to step 3;
if n = N, the loop ends and step 5 is executed.
Step 5: outputting the result
5.1) computing the global feature matrix Q by accumulating the pulse outputs of all cycles:
Q = Y(1) + Y(2) + … + Y(n)
5.2) standardizing the global feature matrix Q obtained in the step 5.1 and outputting the standardized global feature matrix Q;
5.3) calculating the sum of all elements in the standardized global feature matrix Q obtained in the step 5.2, namely the gray sum of the image corresponding to the matrix, and taking the sum as an image definition evaluation function value.
Step 6: repeating steps 1 to 5.3 for each image of an image sequence to obtain the corresponding image definition evaluation function values, and drawing a curve to represent the definition variation trend of the image sequence.
In addition, the image definition evaluation method based on the pulse coupled neural network can also be applied to terminal equipment, the terminal equipment comprises a memory, a processor and a computer program which is stored in the memory and can run on the processor, and the steps of the image definition evaluation method are realized when the processor executes the computer program. The terminal device here may be a computer, a notebook, a palm computer, and various computing devices such as a cloud server, and the processor may be a general processor, a digital signal processor, an application specific integrated circuit, or other programmable logic devices.
In this embodiment, the model loop iteration of the pulse coupled neural network is as follows:
the feedback input domain 1 comprises a feedback input domain coefficient matrix M, an intermediate matrix J, an input matrix F, an amplification coefficient vF of the feedback input domain 1 and an attenuation time constant alpha F of the input matrix F;
in the feedback input domain 1, the input matrix F is composed of two parts: a part of the pulse output matrix Y (namely Y (n-1)) from the previous cycle is convolved with the feedback input domain coefficient matrix M to obtain an intermediate matrix J, and the intermediate matrix J is multiplied by the amplification coefficient vF of the feedback input domain 1; another portion of the input matrix F from the previous cycle (i.e., F (n-1)) is multiplied by its decay time constant α F; and adding the two parts of operation results to obtain an input matrix F (namely F (n)) of the current cycle.
The coupling connection domain 2 comprises a coupling connection domain coefficient matrix W, an intermediate matrix K, a coupling connection input matrix L, an amplification coefficient vL of the coupling connection domain 2 and an attenuation time constant αL of the coupling connection input matrix L;
in the coupling domain 2, the coupling input matrix L is composed of two parts: convolving a part of pulse output matrix Y from a last cycle peripheral neuron with a coupling connection domain coefficient matrix W to obtain an intermediate matrix K, and multiplying the intermediate matrix K by an amplification coefficient vL of a coupling connection domain 2; another part of the coupled input matrix L from the previous cycle (i.e., L (n-1)) is multiplied by its decay time constant al; and adding the two operation results to obtain a coupling connection input matrix L (namely L (n)) of the current cycle.
The pulse generation domain 3 is provided with a dynamic threshold matrix E, an amplification coefficient vE of the pulse generation domain 3, an attenuation time constant alpha E of the dynamic threshold matrix E, a pulse output matrix Y, a maximum value T of image gray and a neuron internal activity item matrix U;
in the pulse generation domain 3, the dynamic threshold matrix E is composed of two parts: a part of the pulse output matrix Y from the previous cycle is multiplied by the amplification factor vE of the pulse generation field 3; another portion of the dynamic threshold matrix E from the previous cycle (i.e., E (n-1)) is multiplied by its decay time constant α E; and adding the two operation results to obtain a dynamic threshold matrix E (namely E (n)) of the current cycle.
Meanwhile, the current neuron internal activity item matrix U also consists of two parts: one part is the contribution of the central neuron, namely the input matrix F of the current cycle; the other part is the influence of the pulse output of the peripheral neurons from the previous cycle, obtained by multiplying the coupling connection input matrix L of the current cycle by the connection coefficient β and adding the base weight 1; the product of the two parts is the neuron internal activity item matrix U.
When the neuron internal activity item matrix U is larger than the dynamic threshold matrix E, pulse excitation is generated: the pulse generator produces a pulse output matrix Y whose strength is positively correlated with the maximum value T, and the next judgment or operation is carried out; the pulse output matrices Y generated in each cycle are accumulated to obtain the global feature matrix Q, which is the final output result.
The above description is only an embodiment of the present invention, and is not intended to limit the scope of the present invention, and all equivalent structural changes made by using the contents of the present specification and the drawings, or applied directly or indirectly to other related technical fields, are included in the scope of the present invention.

Claims (6)

1. An image definition evaluation method based on a pulse coupled neural network is characterized by comprising the following steps:
step 1: constructing a pulse coupling neural network;
the pulse coupling neural network comprises a feedback input domain (1), a coupling connection domain (2) and a pulse generation domain (3);
the feedback input domain (1) comprises an input matrix F, the coupling connection domain (2) comprises a coupling connection input matrix L, and the pulse generation domain (3) comprises a dynamic threshold matrix E, a neuron internal activity item matrix U and a pulse output matrix Y;
step 2: preprocessing an image and setting initialization parameters;
2.1) obtaining a gray matrix of the image, and carrying out normalization processing on the gray matrix to obtain a stimulation input matrix S;
the size i x j of the stimulation input matrix S is determined according to the resolution of the image, i is the ith row of the stimulation input matrix S, and j is the jth column of the stimulation input matrix S;
2.2) assigning the values of the stimulus input matrix S to an input matrix F;
2.3) initializing a coupling connection input matrix L, a neuron internal activity item matrix U, a pulse output matrix Y and a global feature matrix Q in the pulse coupling neural network into a zero matrix;
2.4) setting a feedback input domain coefficient matrix M and a coupling connection domain coefficient matrix W in the pulse coupling neural network;
2.5) calculating a dynamic threshold matrix E;
convolving the input matrix F with the Laplacian operator to return a matrix of the same size as the input matrix F; subtracting this convolution result from the input matrix F to obtain the initial dynamic threshold matrix E;
2.6) screening the maximum value T in the input matrix F;
2.7) setting an amplification factor and a decay time constant;
setting the amplification coefficient vF of the feedback input domain (1), the amplification coefficient vL of the coupling connection domain (2), the amplification coefficient vE of the pulse generation domain (3), the attenuation time constant αF of the input matrix F, the attenuation time constant αL of the coupling connection input matrix L, and the attenuation time constant αE of the dynamic threshold matrix E; wherein vF, vL and vE are natural numbers greater than or equal to 1, and αF, αL and αE satisfy 0 < αF < 1, 0 < αL < 1 and 0 < αE < 1;
2.8) setting the connection coefficient β of the coupling connection input matrix L to the neuron internal activity item matrix U, wherein β is the proportionality factor between the neuron internal activity item matrix U and the coupling connection input matrix L, with 0 < β < 1;
2.9) setting the number of cycles to N, wherein N ≥ n ≥ 1 and N is the upper limit of the cycle count; setting the initial value of the cycle counter n to 1;
step 3: based on the pulse coupling neural network, calculating the dynamic threshold matrix E(n), the neuron internal activity item matrix U(n) and the pulse output matrix Y(n);
step 4: judging the result of the cycle
4.1) when U(n) ≤ E(n), Y(n) is the zero matrix; the loop ends and step 5 is executed;
when U(n) > E(n), judging whether n is equal to N;
if n is not equal to N, setting n = n + 1 and returning to step 3;
if n is equal to N, ending the loop and executing step 5;
step 5: outputting the result
5.1) calculating the global feature matrix Q by accumulating the pulse output matrix Y(n) of every executed cycle:
Q = Y(1) + Y(2) + ... + Y(n)
5.2) normalizing the global feature matrix Q obtained in step 5.1 and outputting it;
5.3) calculating the sum of all elements of the normalized global feature matrix Q obtained in step 5.2, namely the gray sum of the image corresponding to the matrix, and taking this sum as the image definition evaluation function value.
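Taken together, steps 2 through 5 of claim 1 can be sketched as follows. This is an illustrative NumPy reading under stated simplifications, not the claimed method: uniform 3 × 3 coefficient matrices M and W, unit pulse amplitudes, same-cycle convolutions, and assumed parameter defaults.

```python
import numpy as np

# 3x3 Laplacian kernel used to initialize the dynamic threshold (step 2.5).
LAPLACIAN = np.array([[0, 1, 0],
                      [1, -4, 1],
                      [0, 1, 0]], dtype=float)

def conv2_same(img, kernel):
    """Naive 'same'-size 3x3 convolution with zero padding (NumPy only)."""
    pad = np.pad(img, 1)
    out = np.zeros_like(img, dtype=float)
    for di in range(3):
        for dj in range(3):
            out += kernel[di, dj] * pad[di:di + img.shape[0],
                                        dj:dj + img.shape[1]]
    return out

def pcnn_sharpness(gray, N=10, beta=0.5, vF=1, vL=1, vE=1,
                   aF=0.5, aL=0.5, aE=0.5):
    """Sketch of claim 1, steps 2-5; the parameter defaults are assumptions."""
    S = gray / gray.max()                     # 2.1 normalized stimulus matrix
    F = S.copy()                              # 2.2 assign S to F
    L = np.zeros_like(S); Y = np.zeros_like(S); Q = np.zeros_like(S)  # 2.3
    M = np.ones((3, 3)); W = np.ones((3, 3))  # 2.4 uniform weights (assumed)
    E = F - conv2_same(F, LAPLACIAN)          # 2.5 initial dynamic threshold
    for n in range(1, N + 1):                 # steps 3-4 iteration
        F = np.exp(-aF) * F + vF * conv2_same(Y, M)
        L = np.exp(-aL) * L + vL * conv2_same(Y, W)
        E = np.exp(-aE) * E + vE * Y
        U = F * (1.0 + beta * L)
        if not (U > E).any():                 # 4.1: no neuron fires, stop
            break
        Y = np.where(U > E, 1.0, 0.0)         # unit pulses (simplification)
        Q += Y                                # accumulate global features
    Qn = Q / Q.max() if Q.max() > 0 else Q    # 5.2 normalize
    return Qn.sum()                           # 5.3 sharpness function value
```

Because the threshold is initialized from the Laplacian of the input, high-frequency (in-focus) content lowers the threshold around edges and produces more pulses, which is what makes the accumulated gray sum usable as a sharpness score.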
2. The image definition evaluation method based on the pulse coupling neural network according to claim 1, wherein in step 3, calculating the dynamic threshold matrix E(n), the neuron internal activity item matrix U(n) and the pulse output matrix Y(n) based on the pulse coupling neural network comprises:
3.1) calculating the convolution of the pulse output matrix Y(n-1) with the feedback input domain coefficient matrix M, and storing it as the intermediate matrix J(n); when n is 1, Y(n-1), i.e. Y(0), represents the initial value of the pulse output matrix Y;
3.2) calculating the convolution of the pulse output matrix Y (n-1) and the coupling connection domain coefficient matrix W, and storing the convolution as an intermediate matrix K (n);
3.3) calculating the input matrix F(n) = exp(-αF) × F(n-1) + vF × J(n-1)
In the formula:
F(n-1) is the value of F(n) in the previous cycle; when n is 1, F(n-1), i.e. F(0), represents the initial value of the input matrix F;
J(n-1) is the value of J(n) in the previous cycle; when n is 1, J(n-1), i.e. J(0), represents the initial value of the intermediate matrix J, which is a zero matrix;
3.4) calculating the coupling connection input matrix L(n) = exp(-αL) × L(n-1) + vL × K(n-1)
In the formula:
L(n-1) is the value of L(n) in the previous cycle; when n is 1, L(n-1), i.e. L(0), represents the initial value of the coupling connection input matrix L;
K(n-1) is the value of K(n) in the previous cycle; when n is 1, K(n-1), i.e. K(0), represents the initial value of the intermediate matrix K, which is a zero matrix;
3.5) calculating the dynamic threshold matrix E(n) = exp(-αE) × E(n-1) + vE × Y(n-1)
In the formula:
E(n-1) is the value of E(n) in the previous cycle; when n is 1, E(n-1), i.e. E(0), represents the initial value of the dynamic threshold matrix E;
Y(n-1) is the value of Y(n) in the previous cycle;
3.6) calculating the elements Uij(n) of the neuron internal activity item matrix U(n) by the formula:
Uij(n) = Fij(n) × (1 + β × Lij(n))
in the formula:
Uij(n) is the element in row i, column j of U(n);
Fij(n) is the element in row i, column j of F(n);
Lij(n) is the element in row i, column j of L(n);
3.7) calculating the pulse output matrix Y(n) = (lnT - (n-1) × αE) × (U(n) - E(n)).
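The recurrences of steps 3.1-3.7 can be sketched as a single iteration function. Note the one-cycle delay: F(n) and L(n) use the intermediate matrices J(n-1) and K(n-1) computed in the previous cycle, while J(n) and K(n) are stored for the next one. The step 3.7 formula is garbled in the published text; reading the decaying amplitude as lnT - (n-1)·αE and zeroing non-firing neurons (per step 4.1 of claim 1) is an assumption.

```python
import numpy as np

def conv2_same(img, kernel):
    """Naive 'same'-size 3x3 convolution with zero padding (NumPy only)."""
    pad = np.pad(img, 1)
    out = np.zeros_like(img, dtype=float)
    for di in range(3):
        for dj in range(3):
            out += kernel[di, dj] * pad[di:di + img.shape[0],
                                        dj:dj + img.shape[1]]
    return out

def step_claim2(state, M, W, params):
    """One iteration of the claim-2 recurrences (steps 3.1-3.7)."""
    F, L, E, Y, J, K = state
    vF, vL, vE, aF, aL, aE, beta, T, n = params
    J_new = conv2_same(Y, M)            # 3.1: J(n) from Y(n-1) and M
    K_new = conv2_same(Y, W)            # 3.2: K(n) from Y(n-1) and W
    F_new = np.exp(-aF) * F + vF * J    # 3.3: uses J(n-1), not J(n)
    L_new = np.exp(-aL) * L + vL * K    # 3.4: uses K(n-1), not K(n)
    E_new = np.exp(-aE) * E + vE * Y    # 3.5: threshold decay + feedback
    U = F_new * (1.0 + beta * L_new)    # 3.6: elementwise activity
    # 3.7 (reconstructed): decaying amplitude applied where U exceeds E;
    # non-firing neurons output zero, per step 4.1 of claim 1.
    amp = np.log(T) - (n - 1) * aE
    Y_new = amp * np.where(U > E_new, U - E_new, 0.0)
    return (F_new, L_new, E_new, Y_new, J_new, K_new), U
```

On a cold start where only Y is nonzero, the first cycle charges the threshold E and the intermediate matrices J and K, while F, L and U remain zero until the following cycle, illustrating the one-cycle delay.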
3. The image definition evaluation method based on the pulse coupling neural network according to claim 2, further comprising step 6: repeating steps 1 to 5.3 to obtain the image definition evaluation function value of each image in an image sequence, and plotting a curve to represent the definition variation trend of the image sequence.
4. The image definition evaluation method based on the pulse coupling neural network according to claim 1 or 2, wherein in step 2.4), the coefficient matrix M of the feedback input domain (1) and the coefficient matrix W of the coupling connection domain (2) are both of size 3 × 3.
5. The image definition evaluation method based on the pulse coupling neural network according to claim 1 or 2, wherein: the input of the feedback input domain (1) is the pulse output of the previous cycle, and its output is the feedback input of the current cycle; the input of the coupling connection domain (2) is the pulse output of the surrounding neurons in the previous cycle, and its output is the coupling input of the current cycle; the input of the pulse generation domain (3) is the internal activity strength determined by the current feedback input and coupling input, and its outputs are the current pulse output and the dynamic threshold that determines the output strength.
6. A terminal device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the method according to any one of claims 1 to 5.
CN202111629067.8A 2021-12-28 2021-12-28 Image definition evaluation method based on pulse coupling neural network and terminal equipment Active CN114359200B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111629067.8A CN114359200B (en) 2021-12-28 2021-12-28 Image definition evaluation method based on pulse coupling neural network and terminal equipment


Publications (2)

Publication Number Publication Date
CN114359200A true CN114359200A (en) 2022-04-15
CN114359200B CN114359200B (en) 2023-04-18

Family

ID=81102416


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104008537A (en) * 2013-11-04 2014-08-27 无锡金帆钻凿设备股份有限公司 Novel noise image fusion method based on CS-CT-CHMM
CN108985252A (en) * 2018-07-27 2018-12-11 陕西师范大学 The image classification method of improved pulse deep neural network
WO2021012752A1 (en) * 2019-07-23 2021-01-28 中建三局智能技术有限公司 Spiking neural network-based short-range tracking method and system
CN112785539A (en) * 2021-01-30 2021-05-11 西安电子科技大学 Multi-focus image fusion method based on image adaptive decomposition and parameter adaptive


Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
D. Agrawal et al.: "Multifocus image fusion using modified pulse coupled neural network for improved image quality", IET Image Processing *
Kangjian He et al.: "Multi-focus image fusion combining focus-region-level partition and pulse-coupled neural network", Methodologies and Application *
Taoyu Chen et al.: "Research on Auto-focusing Method Based on Pulse Coupled Neural Network", International Conference on Advanced Algorithms and Control Engineering (ICAACE 2021) *
Wang Aiwen et al.: "Image segmentation based on pulse coupled neural network", Computer Science *
Chen Guangqiu et al.: "FDST-domain image fusion based on image quality evaluation parameters", Journal of Optoelectronics·Laser *


Similar Documents

Publication Publication Date Title
Li et al. Fast multi-scale structural patch decomposition for multi-exposure image fusion
Li et al. Luminance-aware pyramid network for low-light image enhancement
CN111507993B (en) Image segmentation method, device and storage medium based on generation countermeasure network
Schmidt et al. Cascades of regression tree fields for image restoration
CN106875352A (en) A kind of enhancement method of low-illumination image
Guo et al. A pipeline neural network for low-light image enhancement
CN113011567B (en) Training method and device of convolutional neural network model
İlk et al. The effect of Laplacian filter in adaptive unsharp masking for infrared image enhancement
Salih et al. Adaptive fuzzy exposure local contrast enhancement
CN111275625B (en) Image deblurring method and device and electronic equipment
Prabhakar et al. Self-gated memory recurrent network for efficient scalable HDR deghosting
CN109345497B (en) Image fusion processing method and system based on fuzzy operator and computer program
CN114359200B (en) Image definition evaluation method based on pulse coupling neural network and terminal equipment
Goel et al. Gray level enhancement to emphasize less dynamic region within image using genetic algorithm
Park et al. Contrast enhancement using sensitivity model-based sigmoid function
CN116309168A (en) Low-illumination image enhancement method and system for parallel hybrid attention progressive fusion
CN107341501B (en) A kind of image interfusion method and device based on PCNN and classification focusing technology
CN116110033A (en) License plate generation method and device, nonvolatile storage medium and computer equipment
Cui et al. Pyramid ensemble structure for high resolution image shadow removal
CN113822809A (en) Dim light enhancement method and system
Lee et al. Disentangled feature-guided multi-exposure high dynamic range imaging
Venkatesh et al. Image Enhancement and Implementation of CLAHE Algorithm and Bilinear Interpolation
CN106530213B (en) A kind of diameter radar image automatic visual method
KR102617391B1 (en) Method for controlling image signal processor and control device for performing the same
CN110111286A (en) The determination method and apparatus of image optimization mode

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant