CN109493319B - Fusion image effect quantification method and device, computer equipment and storage medium - Google Patents

Fusion image effect quantification method and device, computer equipment and storage medium

Info

Publication number
CN109493319B
CN109493319B
Authority
CN
China
Prior art keywords
tested
training sample
sample set
fused image
fusion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811178810.0A
Other languages
Chinese (zh)
Other versions
CN109493319A (en)
Inventor
沈强
曲杰
王莹珑
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan United Imaging Healthcare Co Ltd
Original Assignee
Wuhan United Imaging Healthcare Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan United Imaging Healthcare Co Ltd filed Critical Wuhan United Imaging Healthcare Co Ltd
Priority to CN201811178810.0A priority Critical patent/CN109493319B/en
Publication of CN109493319A publication Critical patent/CN109493319A/en
Application granted granted Critical
Publication of CN109493319B publication Critical patent/CN109493319B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging

Abstract

The application relates to a fusion image effect quantification method, an apparatus, a computer device and a storage medium. A computer device obtains feature vectors of a plurality of fused images, constructs a quantification model according to a preset self-organizing map neural network algorithm and the feature vectors, and then quantifies the effect of a fused image to be tested according to the constructed quantification model. Because the input of the model is the feature vector of the fused image (four items of feature information: signal-to-noise ratio, mean value, information entropy and sharpness), the method comprehensively considers the multiple purposes of the fused image (evaluating the multiple items of feature information contained in the image) and avoids evaluating the fusion effect from only a single purpose of the fused image, thereby providing a comprehensive evaluation result. In addition, the quantification result is a degree value that directly reflects how good the fusion effect is, so the evaluation result is very intuitive.

Description

Fusion image effect quantification method and device, computer equipment and storage medium
Technical Field
The present application relates to the field of fused image technologies, and in particular, to a method and an apparatus for quantizing a fused image effect, a computer device, and a storage medium.
Background
Medical image fusion can gather the respective information of the source images and present richer information. For example, when a Positron Emission Tomography (PET) image and a Computed Tomography (CT) image are fused, clear anatomical structure information and the corresponding organ function and metabolic information can be obtained in one image at the same time.
Due to the sensitivity of the medical field, a medical fused image must not lose any useful information; otherwise, incomplete image information may lead to medical accidents. The evaluation of the fused image must therefore be accurate, intuitive and effective. The effect of a fused image is affected not only by the noise of the source images, the parameters of the fusion algorithm and the observer's region of interest; its evaluation also depends on the purpose of the image. Currently, the general principles for evaluating medical fused images are to determine whether the information content is increased, whether noise is suppressed, whether suppression of noise in uniform regions is enhanced, whether edge information is retained, whether the image mean value is increased, and so on. Based on these principles, the current objective evaluation methods mainly include evaluations based on statistical properties (e.g., mean, standard deviation), on information content (e.g., entropy), on signal-to-noise ratio, and on gradient values (e.g., sharpness), as well as evaluations based on fuzzy integrals, rough sets, evidence theory, convolutional neural networks, and the like.
However, the above conventional objective evaluation methods each evaluate only one application of the fused image; they cannot provide a comprehensive evaluation result, and the evaluation result is not intuitive.
Disclosure of Invention
Therefore, in view of the technical problems that conventional fused-image evaluation methods evaluate only a certain application, cannot provide a comprehensive evaluation result and give insufficiently intuitive results, it is necessary to provide a method, an apparatus, a computer device and a storage medium for quantifying the effect of a fused image.
In a first aspect, an embodiment of the present invention provides a method for quantizing a fused image effect, where the method includes:
acquiring feature vectors of a plurality of fusion images; the feature vector is used for representing feature information of the fused image;
constructing a quantitative model according to a preset self-organizing mapping neural network algorithm and the feature vector;
and quantifying the effect of the fusion image to be tested according to the quantification model.
In one embodiment, the constructing a quantization model according to a preset self-organizing map neural network algorithm and the feature vector includes:
generating a training sample set according to the feature vector; the training sample set is a data set obtained according to the feature vectors of the multiple fused images;
initializing an initial weight matrix of a neural network according to the training sample set;
updating the initial weight matrix according to the learning rate of the neural network, the neighborhood radius and a preset self-adaptive stopping criterion to obtain a feature space of the training sample set;
and determining the quantization model according to the feature space.
In one embodiment, the initializing an initial weight matrix of a neural network according to the training sample set includes:
obtaining the value range of each feature vector in the training sample set;
and uniformly and randomly distributing the values in the value range to each neuron of the neural network competition layer to obtain the initial weight matrix.
In one embodiment, the updating the initial weight matrix according to the learning rate of the neural network, the neighborhood radius, and a preset adaptive stop criterion to obtain the feature space of the training sample set includes:
calculating Euclidean distances between the training sample set and each neuron in the neural network according to the initial weight matrix;
determining the neuron with the minimum Euclidean distance as a winning neuron;
determining a winning neighborhood according to the winning neuron and the neighborhood radius;
and updating the weights of the neurons in the winning neighborhood according to the adaptive stopping criterion to obtain the feature space of the training sample set.
In one embodiment, the adaptive stopping criterion includes that the maximum variation of the class-center weight vectors in the updated weight matrix is smaller than a preset threshold.
In one embodiment, the quantifying the effect of the fused image to be tested according to the quantification model includes:
calculating Euclidean distances between a sample to be tested and each weight vector of the quantitative model; the sample to be tested is obtained according to the feature vector of the fused image to be tested;
and determining the minimum Euclidean distance as a first quantization value of the fused image effect to be tested.
In one embodiment, the method further comprises:
calculating a confidence level of the first quantized value of the sample to be tested relative to the average level of the first quantized values of the training sample set;
and determining the confidence coefficient as a second quantitative value of the fused image effect to be tested.
In a second aspect, an embodiment of the present invention provides a fused image effect quantization apparatus, including:
the acquisition module is used for acquiring the feature vectors of a plurality of fusion images; the feature vector is used for representing feature information of the fused image;
the construction module is used for constructing a quantitative model according to a preset self-organizing mapping neural network algorithm and the characteristic vector;
and the quantification module is used for quantifying the effect of the fusion image to be tested according to the quantification model.
In a third aspect, an embodiment of the present invention provides a computer device, including a memory and a processor, where the memory stores a computer program, and the processor implements the following steps when executing the computer program:
acquiring feature vectors of a plurality of fusion images; the feature vector is used for representing feature information of the fused image;
constructing a quantitative model according to a preset self-organizing mapping neural network algorithm and the feature vector;
and quantifying the effect of the fusion image to be tested according to the quantification model.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium on which a computer program is stored, and the computer program, when executed by a processor, implements the following steps:
acquiring feature vectors of a plurality of fusion images; the feature vector is used for representing feature information of the fused image;
constructing a quantitative model according to a preset self-organizing mapping neural network algorithm and the feature vector;
and quantifying the effect of the fusion image to be tested according to the quantification model.
The application provides a method, an apparatus, a computer device and a storage medium for quantifying fused image effect. The computer device obtains the feature vectors of a plurality of fused images, constructs a quantification model according to a preset self-organizing map neural network algorithm and the feature vectors, and then quantifies the effect of the fused image to be tested according to the constructed quantification model. Since the input of the model is the feature vector of the fused image (four items of feature information: signal-to-noise ratio, mean value, information entropy and sharpness), the method comprehensively considers the multiple purposes of the fused image (evaluating the multiple items of feature information contained in the image), avoids evaluating the fusion effect from only a single purpose of the fused image, and provides a comprehensive evaluation result. Moreover, the quantification result is a degree value that directly reflects how good or bad the fusion effect is, so the evaluation result is very intuitive.
Drawings
Fig. 1 is an application environment diagram of a fusion image effect quantification method according to an embodiment;
fig. 2 is a schematic flowchart of a method for quantifying the effect of a fusion image according to an embodiment;
fig. 3 is a schematic flowchart of a method for quantifying the effect of a fusion image according to an embodiment;
fig. 4 is a flowchart illustrating a method for quantifying the effect of a fusion image according to an embodiment;
fig. 5 is a flowchart illustrating a method for quantifying the effect of a fusion image according to an embodiment;
fig. 5.1 is a network hit diagram of the fusion image effect quantization method according to an embodiment;
FIG. 6 is a flowchart illustrating a method for quantifying the effect of a fused image according to an embodiment;
FIG. 6.1 is a schematic view of a CV curve corresponding to a scaling parameter C0 according to an embodiment;
fig. 7 is a block diagram illustrating a structure of a device for quantizing a fused image effect according to an embodiment;
fig. 8 is a block diagram illustrating a structure of a device for quantizing a fused image effect according to an embodiment;
fig. 9 is a block diagram illustrating a structure of a device for quantizing a fused image effect according to an embodiment;
fig. 10 is a block diagram illustrating a structure of a fused image effect quantization apparatus according to an embodiment;
FIG. 11 is a diagram illustrating an internal structure of a computer device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The method for quantifying the effect of a fused image provided by the present application can be applied to the computer device shown in fig. 1. The computer device may be a server, and its internal structure may be as shown in fig. 1. The computer device includes a processor, a memory, a network interface and a database connected by a system bus. The processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program and a database. The internal memory provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium. The database of the computer device is used for storing data of the fused image effect quantification method.
The embodiments of the application provide a fused image effect quantification method, a fused image effect quantification apparatus, a computer device and a storage medium, aiming to solve the technical problems that existing fused-image evaluation methods select only a certain application for evaluation, cannot provide a comprehensive evaluation result, and give insufficiently intuitive evaluation results. The following describes in detail, through embodiments and with reference to the drawings, the technical solutions of the present application and how they solve the above technical problems. The following specific embodiments may be combined with each other, and the same or similar concepts or processes may not be repeated in some embodiments. Note that the execution subject in the following embodiments is a computer device.
In an embodiment, a method for quantifying the effect of a fused image is provided as shown in fig. 2, and the embodiment relates to a specific process in which a computer device determines a quantification model according to a preset self-organizing map neural network algorithm and a feature vector of the fused image, so as to quantify the effect of the fused image. As shown in fig. 2, the method includes:
s101, obtaining feature vectors of a plurality of fusion images; the feature vector is used for characterizing feature information of the fused image.
The fused image may come from fields such as remote sensing, computer vision, weather forecasting, military target recognition and medicine; for example, a Positron Emission Tomography (PET) image and a Computed Tomography (CT) image in the medical field may be fused. In terms of level, the fusion may also be data-level fusion, feature-level fusion, decision-level fusion and the like. The feature vector characterizes feature information of the fused image, such as the mean value, information entropy, standard deviation, signal-to-noise ratio and average gradient (sharpness), which can be used to evaluate the effect of the fused image. For example, the information entropy is an important index for measuring the richness of image information, and the detail-expression capability of fused images can be compared through their information entropies; if the information entropy of the fused image is larger, the information content of the fused image has increased. This embodiment does not limit the specific content of the feature information of the fused image. The feature vector may contain one, two or more of the feature-information items exemplified above, which is not limited in this embodiment. The fused image may be calculated and rendered according to a registration algorithm and a rendering-engine fusion algorithm; this embodiment does not limit the algorithm used. The multiple fused images may be obtained from a local disk of the computer device, from a Picture Archiving and Communication System (PACS), or from a cloud server, which is not limited in this embodiment.
For example, taking the fused image as a PET/CT image in the medical field and taking the four feature-information items of signal-to-noise ratio, mean value, information entropy and sharpness as an example, in practical application the computer device acquires a plurality of PET/CT fused images and calculates the feature vector of each PET/CT fused image, that is, the four feature-information items of signal-to-noise ratio, mean value, information entropy and sharpness. The calculation methods may be as follows:
(1) signal-to-noise ratio (SNR): the signal-to-noise ratio describes the ratio of useful information to noise in an image, and generally, the higher the signal-to-noise ratio is, the better the effect of fusing images is. The calculation formula is as follows:
PSNR = 10*log10( MAX^2 / ( (1/(M*N)) * Σi Σj [R(i,j) - F(i,j)]^2 ) ), where MAX is the peak gray value and M, N are the width and height of the image
in the above formula, R (i, j) is an image pixel value at the ith row and jth column position of the CT or PET original image, and F (i, j) is an image pixel value at the ith row and jth column position of the PET/CT fusion image. The two parameters of R (i, j) and F (i, j) can be stored in the image file in advance, and can be directly read when in use. During calculation, R (i, j) in the CT and PET original images and F (i, j) in the PET/CT fusion images are substituted into the formula respectively to obtain PSNR (peak signal-to-noise ratio) values of the two original images and the fusion image respectively, and then an average value is calculated for the two PSNR values, wherein the average value is the signal-to-noise ratio (SNR) of the PET/CT fusion image.
(2) Information entropy (H): the information entropy of the image is an important index for measuring the richness of the image information, and the detail expression capability of the image can be compared by comparing the information entropy of the image. Generally, if the information entropy of the fused image is larger, it is indicated that the information amount of the fused image increases. The calculation formula is as follows:
H = -Σ_{i=0}^{L-1} pi*log2(pi)
where L is the total number of gray levels of the PET/CT fused image and pi is the probability that a pixel with gray level i appears in the fused image. Substituting L and pi into the above formula gives the information entropy (H) of the PET/CT fused image. L is generally taken as 16, 32 or 64; the higher the gray level, the richer the color.
(3) Sharpness (definition)
In some research fields the average gradient refers to the sharpness of an image as measured by a gradient method; generally, the larger the average gradient value, the sharper the image. The sharpness reflects the improvement in image quality and also reflects the contrast of fine details and texture-change features in the image. The calculation formula is as follows:
g = (1/((n-1)*(n-1))) * Σi Σj sqrt( (ΔIx(i,j)^2 + ΔIy(i,j)^2) / 2 )
where ΔIx and ΔIy are the first-order differences in the X and Y directions, i.e. the differences between adjacent pixels, with the origin taken at the upper-left corner of the PET/CT fused image, and n is the size of the PET/CT fused image.
(4) Mean value
The mean value represents the average value of the gray levels of the pixels of the fused image, which is reflected as the average brightness to human eyes, and if the mean value is moderate (the gray value of the pixels is around 128), the visual effect is good. The calculation formula is as follows:
Mean = (1/(M*N)) * Σ_{i=1}^{M} Σ_{j=1}^{N} F(i,j)
wherein M and N are the width and height (pixel unit) of the PET/CT fused image respectively, and F (i, j) is the image pixel value at the ith row and jth column position of the PET/CT fused image.
It should be noted that the methods for calculating the signal-to-noise ratio, mean value, information entropy and sharpness are not limited to the above formulas; calculations that include the above formulas or are obtained by transforming them all fall within the scope of the embodiments of the present application.
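As an illustrative sketch only (not the reference implementation of this application), the four feature-information items could be computed for grayscale image arrays roughly as follows; the averaging of the two PSNR values, the number of histogram bins used for the entropy, and the cropping of the difference images are assumptions made for the example:

import numpy as np

def psnr(reference: np.ndarray, fused: np.ndarray) -> float:
    # Peak signal-to-noise ratio of the fused image against one source image.
    mse = np.mean((reference.astype(np.float64) - fused.astype(np.float64)) ** 2)
    peak = float(reference.max())
    return 10.0 * np.log10(peak ** 2 / mse)

def snr_feature(ct: np.ndarray, pet: np.ndarray, fused: np.ndarray) -> float:
    # Average of the PSNR values computed against the CT and PET source images.
    return 0.5 * (psnr(ct, fused) + psnr(pet, fused))

def entropy_feature(fused: np.ndarray, levels: int = 64) -> float:
    # Information entropy of the fused image quantized to `levels` gray levels.
    hist, _ = np.histogram(fused, bins=levels)
    p = hist / hist.sum()
    p = p[p > 0]                      # skip empty bins so log2 is defined
    return float(-np.sum(p * np.log2(p)))

def sharpness_feature(fused: np.ndarray) -> float:
    # Average gradient from first-order differences in the X and Y directions.
    f = fused.astype(np.float64)
    dx = np.diff(f, axis=1)[:-1, :]   # crop so both difference images share a shape
    dy = np.diff(f, axis=0)[:, :-1]
    return float(np.mean(np.sqrt((dx ** 2 + dy ** 2) / 2.0)))

def mean_feature(fused: np.ndarray) -> float:
    # Mean gray value of the fused image.
    return float(fused.mean())

def feature_vector(ct: np.ndarray, pet: np.ndarray, fused: np.ndarray) -> np.ndarray:
    return np.array([snr_feature(ct, pet, fused), mean_feature(fused),
                     entropy_feature(fused), sharpness_feature(fused)])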
And S102, constructing a quantitative model according to a preset self-organizing mapping neural network algorithm and the feature vector.
The preset self-organizing mapping neural network algorithm may be a self-organizing mapping neural network algorithm that is set based on a self-organizing mapping neural network and combined with the feature vector of the fusion image (the four feature information of the signal-to-noise ratio, the mean value, the information entropy, and the definition described above), for example: the initial weight matrix may be preset, or the preset iteration number of updating the weight, or other parameters, which is not limited in this embodiment.
Generally, the evaluation of the effect of fused images by using the self-organizing map neural network is essentially a difference measure, which must select a reference, and the effect of the fused images is usually determined for a certain target, such as obtaining the fused images with low noise, clear images, large information amount, high resolution and the like. Therefore, the differences between all other fused images to be tested and the quantization model can be measured by taking the self-organizing mapping neural network model trained by the fused images with good fusion effect, namely, the quantization model as a reference.
Specifically, based on the feature vectors (signal-to-noise ratio, mean value, entropy, and sharpness) of the multiple fused images calculated by the computer device in the step S101, according to the preset self-organizing map neural network algorithm, the computer device may start to construct a quantization model using the feature vectors of the fused images as the input of the neural network, where the quantization model may be a self-organizing map neural network model constructed by the feature vectors of the fused images selected from the multiple fused images with a better effect.
S103, quantifying the effect of the fusion image to be tested according to the quantification model.
It should be noted that, the quantifying the effect of the fused image described in the embodiments of the present application may be understood as evaluating the effect of the fused image by quantifying each parameter that can characterize the effect, and the quantified result is a degree value, for example: and the confidence coefficient can visually represent the quality of the fused image effect according to the confidence coefficient.
In this embodiment, the feature vector of the fused image to be tested is input into the quantization model, and a quantization value of the fused image to be tested is obtained through multiple iterations, where the quantization value may be a confidence level, an euclidean distance between the fused image to be tested and the quantization model, and the like.
In the method for quantifying the effect of the fused image, the characteristic vectors of a plurality of fused images are obtained through computer equipment, a quantification model is constructed according to a preset self-organizing mapping neural network algorithm and the characteristic vectors, and then the effect of the fused image to be tested is quantified according to the constructed quantification model.
In an embodiment, as shown in fig. 3, a method for quantifying the effect of a fusion image is provided, and this embodiment relates to a specific process in which a computer device constructs a quantification model according to a preset self-organizing map neural network algorithm and the feature vectors. As shown in fig. 3, one implementation manner of S102 described above includes:
s201, generating a training sample set according to the feature vector; the training sample set is a data set obtained according to the feature vectors of the multiple fused images.
In this embodiment, the training sample set is a sample set composed of fused images with a good fusion effect, where each sample is the feature vector of one fused image, that is, the four feature-information items (signal-to-noise ratio, mean value, information entropy and sharpness) of that fused image. Specifically, according to the fused images obtained in S101 and the calculated feature vectors of all the fused images, the fused images with better feature vectors are selected as samples, and the selected better samples form the training sample set. The selection criterion may be determined according to the purpose, for example images with clear structural information, an accurate fusion region and sufficient metabolic information, or it may be determined by an expert or an experienced clinician from subjective experience; this embodiment does not limit the criterion for selecting the better samples.
S202, initializing an initial weight matrix of the self-organizing map neural network according to the training sample set.
It should be noted that neural-network learning continuously approximates the distribution of the training samples, starting from the randomly assigned initial weights of the network. The setting of the initial weights has a great influence on network convergence: if the initial weight distribution is far from the target distribution, learning takes longer and the network may even fail to converge; conversely, if the initial weights are close to the target distribution, the network converges quickly. Two methods are currently in common use for setting the initial weights of a neural network: the given-weight method and the random-weight method. The given-weight method directly sets all weight vectors to the same fixed value determined by n, the dimension of the input vector, which may cause the network to fail to converge. The random-weight method sets the initial weights to random fractions close to 0 in order to fully disperse the weight vectors in the sample space; for a relatively concentrated sample set, however, this arrangement adjusts only the weight vectors close to the samples, while the weights of the more distant parts of the network are never adjusted, which may group the results into one class.
In order to ensure that the setting of the initial weight can make the network converge and all weight vectors can be adjusted, the similarity between the initial weight and the input space needs to be ensured, and the discreteness of the initial weight needs to be ensured. Specifically, in this embodiment, the computer device first obtains a value range of a feature vector (i.e., four feature information, i.e., the signal-to-noise ratio, the mean value, the information entropy, and the definition, described in the foregoing embodiment) of each fused image in the training sample set, and then uniformly and randomly allocates values in the feature vector range to each neuron in the competition layer, so that not only can the similarity between the value of the initial weight and the training sample be ensured, but also the discreteness of the weight vector in the sample space can be ensured through random allocation.
Optionally, one implementation manner of S202 is: obtaining the value range of each feature vector in the training sample set, and uniformly and randomly distributing the values in the value range to each neuron of the neural-network competition layer to obtain the initial weight matrix. The feature vector here is the feature information of the fused image described in step S101. The random and uniform distribution may mean that the feature information of each fused image is randomly and uniformly distributed attribute by attribute within the feature vector. In this way the initial weights are set entirely according to the values of the training samples, which ensures the similarity between the initial weights and the training samples, allows the algorithm to converge quickly, and greatly improves the running speed of the network.
By way of example, assume that the training sample set is:
X=[x1,x2,...,xn]T
xi=[xi1,xi2,...,xim]T
where n is the number of samples in the training sample set and m is the number of feature-information items in the fused-image feature vector. The number of input neurons In is taken as m, and the number of output neurons K is generally chosen according to the number of training samples, where K must be an integer.
The specific steps of obtaining the initial weight matrix are as follows:
The first step: let the maximum value of the jth attribute in the feature vectors of the training sample set be MAXj and the minimum value be MINj, namely:
MAXj = max{ xij : i = 1, 2, ..., n }, (j = 1, 2, ..., m)
MINj = min{ xij : i = 1, 2, ..., n }, (j = 1, 2, ..., m)
the second step is that: at MAXjAnd MINjK values were collected uniformly between, and are expressed as follows:
Zj=[z1j,z2j,...,zKj],(j=1,2,...,m)
Figure BDA0001824474900000133
wherein Z isjAnd representing the uniformly sampled vector in the value range of the jth attribute sample in the feature vector.
The third step: shuffle the values in Zj and randomly assign them to the jth component of the weight vectors of the K neurons; all attributes are assigned in turn in this way, giving:
wi = [zi1, zi2, ..., zij, ..., zim], (i = 1, 2, ..., K; j = 1, 2, ..., m)
where wi denotes the weight vector of the ith neuron and zij denotes the value assigned to the ith neuron for the jth attribute.
The matrix formed by the wi obtained in this way is the set initial weight matrix. Because the initial weight matrix is set according to the values in the training sample set, its similarity to the training sample set is ensured, the neural-network algorithm can converge quickly, and the training speed of the self-organizing map neural network is greatly increased.
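A minimal sketch of this initialization, assuming the training set X is an n x m array (n samples, m feature items) and the competition layer has K neurons; the use of numpy's linspace for the uniform sampling is an assumption consistent with the description above:

import numpy as np

def init_weight_matrix(X: np.ndarray, K: int, rng=None) -> np.ndarray:
    # Build a K x m initial weight matrix: sample each attribute's value range
    # uniformly at K points and assign the points to the neurons in random order.
    rng = np.random.default_rng() if rng is None else rng
    n, m = X.shape
    W = np.empty((K, m))
    for j in range(m):
        lo, hi = X[:, j].min(), X[:, j].max()   # MINj and MAXj
        z = np.linspace(lo, hi, K)              # K values collected uniformly
        W[:, j] = rng.permutation(z)            # shuffled across the K neurons
    return W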
S203, updating the initial weight matrix according to the learning rate of the neural network, the neighborhood radius and a preset self-adaptive stopping criterion to obtain a feature space of the training sample set.
Based on the initial weight matrix determined in step S202, in this embodiment the computer device trains on the training sample set according to the learning rate of the neural network, the neighborhood radius and the preset adaptive stopping criterion; training stops when the adaptive stopping criterion is satisfied, and the feature space of the training sample set is finally formed. The learning rate and neighborhood radius of the neural network are monotonic functions of the number of iterations and change along with it.
It should be noted that, in the self-organizing map neural network algorithm, usually, the maximum iteration number T of network training is artificially specified according to experience, and a variation formula of the network learning rate and the winning domain radius is set according to the maximum iteration number, such as the following formula:
η(i) = ηmax - (i/T)*(ηmax - ηmin)
r(i) = rmax - (i/T)*(rmax - rmin)
where η(i) and r(i) denote the learning rate and winning-domain radius at the ith iteration; ηmax and rmax are the maximum learning rate and maximum winning-domain radius; ηmin and rmin are the corresponding minimum values; and T is the manually set maximum number of iterations. Typically, training of the neural network stops after the maximum number of iterations. If the number of iterations is too small, the training effect cannot be achieved; if it is too large, the redundant amount of computation increases greatly. A relatively large number of iterations is therefore generally set in order to obtain the best training result, which greatly affects the computational efficiency of the algorithm.
However, since no maximum number of iterations is set in this embodiment, the learning rate and the winning-domain radius change at every iteration: both are expressed as functions of Δw (which represents the change of all cluster centers), increasing monotonically with Δw and bounded by their maximum values.
during the first iteration, a larger value of Δ w, for example 10000, can be given, so that the learning rate and the radius of the winner domain of the first training are both larger, and as learning progresses, Δ w becomes smaller and smaller, and the learning rate and the radius of the winner domain also become smaller and smaller, that is, the requirement that the weight is adjusted to be coarse adjustment first and then fine adjustment is met. Therefore, whether the network stops or not is automatically judged by the fact that the maximum value of the weight variable quantity is smaller than a certain smaller value, the learning rate and the winning domain radius of each learning are automatically adjusted according to the requirement of network training, the minimum iteration times of the network training effect can be guaranteed, and the calculated quantity is reasonably reduced.
In this embodiment, the preset adaptive stopping criterion takes the place of a manually set maximum number of iterations: it automatically determines the number of iterations according to the learning effect of the network itself, so that the learning effect is guaranteed without blindly increasing the amount of computation. Specifically, during neural-network learning each neuron of the competition layer is equivalent to a cluster center; as the feature vectors of the training sample set are continuously fed to the input layer, the cluster centers move ever closer to the centers of each class of data. When the competition-layer weights no longer change much as the number of iterations increases, the adaptive stopping criterion is satisfied and the neural-network training is essentially complete.
Optionally, the adaptive stopping criterion includes that the maximum change of the class-center weight vectors in the updated weight matrix is smaller than a preset threshold. Under this criterion, if the change of the competition-layer weights remains smaller than the preset threshold, the weights of all cluster centers no longer change much as the number of iterations increases, so training of the neural network can be stopped.
For example, since the representation form of the cluster center is the weight vector of each competition layer neuron, the change of the cluster center can be represented according to the change of the weight vector, i.e. it can be represented as the following formula:
ΔW=Wn-Wn-1
where ΔW denotes the change of all cluster-center weight vectors and is an M × N matrix, Wn is the weight matrix after the current round of training, and Wn-1 is the weight matrix after the previous round of training.
If the maximum weight-change value in ΔW is smaller than a preset threshold ε, then all the weight changes are smaller than the small value ε, which also indicates that the weights of all cluster centers no longer change much as the number of iterations increases. The change of the cluster centers can therefore be described by the infinite norm of ΔW, namely:
Δw = ||ΔW||∞ = max |ΔWij|, (i = 1, 2, ..., M; j = 1, 2, ..., N)
where Δw is the element of ΔW with the largest absolute value, M is the number of input neurons, and N is the number of competition-layer neurons.
Therefore, an adaptive stopping criterion based on a sufficiently small amount of change can be adopted: given a small value ε, network learning ends when Δw < ε. In this way the neural network stops automatically through the adaptive stopping criterion, which greatly speeds up the training of the self-organizing map neural network.
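The stopping check itself reduces to comparing the largest absolute weight change with ε; a brief sketch (the default value of ε is an assumption):

import numpy as np

def max_weight_change(W_new: np.ndarray, W_old: np.ndarray) -> float:
    # Delta-w: the element of the weight-change matrix with the largest absolute value.
    return float(np.max(np.abs(W_new - W_old)))

def should_stop(W_new: np.ndarray, W_old: np.ndarray, eps: float = 1e-4) -> bool:
    return max_weight_change(W_new, W_old) < eps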
And S204, determining the quantization model according to the feature space.
Based on the feature space of the training sample set determined in step S203, the computer device takes the feature space of the training sample set as the corresponding quantization model. The specific algorithm the computer device uses to determine the feature space as the quantization model is not limited in this embodiment.
The method for quantizing the effect of a fused image provided by this embodiment initializes an initial weight matrix of the neural network according to the training sample set, updates the initial weight matrix according to the learning rate of the neural network, the neighborhood radius and a preset adaptive stopping criterion to obtain the feature space of the training sample set, and determines the quantization model from the feature space. Since the training sample set is composed of the feature vectors of a plurality of fused images with good fusion effect, and the quantization model is then built from this training sample set, the feature vectors of the fused images serve as the input when the quantization model is used to evaluate the effect of a fused image; the multiple purposes of the fused image are considered comprehensively (the image is evaluated on multiple items of feature information), evaluating the fusion effect from only a single purpose of the fused image is avoided, and a comprehensive evaluation result is given.
In an embodiment, as shown in fig. 4, a method for quantifying fusion image effects is provided, and this embodiment relates to a specific process in which a computer device updates the initial weight matrix according to a learning rate of the neural network, a neighborhood radius, and a preset adaptive stop criterion, so as to obtain a feature space of the training sample set. As shown in fig. 4, one implementation manner of S203 described above includes:
s301, according to the initial weight matrix, calculating Euclidean distances between the training sample set and each neuron in the neural network.
In this embodiment, based on the initial weight matrix determined in step S202, the computer device calculates the Euclidean distance between each sample in the training sample set and each neuron in the neural network, where the training sample set is the set of fused images with good fusion effect described in the above embodiments. For example, let the initial weight matrix be:
W=[w1,w2,...,wK]
wj=[wj1,wj2,...,wjm],(j=1,2,...,K)
Suppose a sample in the training sample set is xi; then the Euclidean distance dij between the sample xi and neuron j is calculated as follows:
dij = sqrt( Σ_{k=1}^{m} (xik - wjk)^2 )
s302, determining the neuron with the minimum Euclidean distance as a winning neuron.
Based on the euclidean distances between the training sample sets and the neurons in the neural network calculated by the computer device in step S301, the computer device determines the neuron with the smallest euclidean distance as the winning neuron, where the method for determining the smallest euclidean distance by the computer device may be to sort all the euclidean distances from small to large, and determine the euclidean distance arranged at the first position as the smallest euclidean distance, or may be in other manners, and this embodiment does not limit this.
S303, determining a winning neighborhood according to the winning neuron and the neighborhood radius.
In this embodiment, based on the neuron with the minimum euclidean distance determined in step S302 as a winning neuron, and by combining a preset neighborhood radius, the computer device determines a region with the winning neuron as a center of a circle and the neighborhood radius as a radius, and determines the region as a winning neighborhood.
S304, updating the weights of the neurons in the winning neighborhood according to the adaptive stopping criterion to obtain the feature space of the training sample set.
In this embodiment, the adaptive stopping criterion may be that network training stops when the maximum change of the class-center weight vectors in the updated weight matrix, as described in step S203, is smaller than the preset threshold. As long as the adaptive stopping criterion is not met, the weights of the neurons in the winning neighborhood are updated in each round of training, until the criterion is met and training stops. The feature space formed after the neural network stops training is the feature space of the fused images in the training sample set that were input during training. An illustrative single update step is sketched below.
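For illustration, one update covering S301 to S304 for a single sample might look as follows; the competition-layer neurons are assumed to sit on a rectangular grid, and the Gaussian form of the neighborhood falloff is an assumption (the exact neighborhood distance function appears only as a formula image in the original):

import numpy as np

def grid_coords(rows: int, cols: int) -> np.ndarray:
    # 2-D coordinates of the competition-layer neurons on a rectangular grid.
    return np.array([(r, c) for r in range(rows) for c in range(cols)], dtype=float)

def train_step(x: np.ndarray, W: np.ndarray, coords: np.ndarray,
               eta: float, radius: float) -> np.ndarray:
    # S301: Euclidean distance from the sample to every neuron's weight vector.
    d = np.linalg.norm(W - x, axis=1)
    # S302: the neuron with the smallest distance is the winning neuron.
    bmu = int(np.argmin(d))
    # S303: the winning neighborhood is the set of neurons within `radius` of the winner.
    grid_dist = np.linalg.norm(coords - coords[bmu], axis=1)
    hood = grid_dist <= radius
    # S304: update the weights inside the neighborhood (assumed Gaussian falloff).
    h = np.exp(-(grid_dist[hood] ** 2) / (2.0 * radius ** 2))
    W = W.copy()
    W[hood] += eta * h[:, None] * (x - W[hood])
    return W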
The specific process of updating the initial weight matrix by the computer device according to the learning rate of the neural network, the neighborhood radius and the preset adaptive stop criterion to obtain the feature space of the training sample set can be described as follows: also, assume that the training sample set is:
X=[x1,x2,...,xn]T
xi=[xi1,xi2,...,xim]T
where N is the number of samples in the training sample set and M is the number of attributes in the feature vector. The number of input neurons In is set to M, and the number of output neurons is set to K = m × n, with K generally chosen as an integer determined by the number of training samples and the values of m and n as close to each other as possible; the topology of the network competition layer may be a quadrilateral grid.
As previously mentioned, the model parameters of the neural network include a weight matrix W, a learning rate η, and a domain radius r, where η and r are both monotonically decreasing functions as the number of iterations increases. Then, the initialization weight matrix is set as:
W=[w1,w2,...,wK]
wj=[wj1,wj2,...,wjm],(j=1,2,...,K)
Assume the input sample is xi; then calculate the Euclidean distance dij between the input sample xi and neuron j:
dij = sqrt( Σ_{k=1}^{m} (xik - wjk)^2 )
Then for the winning neuron c:
dic = min{ dij }, (j = 1, 2, ..., K)
Because the number of iterations is unknown, the learning rate and neighborhood radius should be large at the start of training for coarse adjustment, become small in the later stage for fine adjustment, and finally decrease to values close to 0. Accordingly, the learning rate η(t), the neighborhood distance function hci(t) and the neighborhood radius r(t) are expressed as functions of the training state: η(t) and r(t) are monotonically increasing functions of Δw bounded by ηmax and rmax respectively, and hci(t) is a neighborhood distance function of the distance δci and the radius r(t), where ηmax is a given maximum learning rate, rmax is a given maximum neighborhood radius, Δw is the maximum change of the class-center weight vectors, and δci denotes the distance between neurons c and i.
According to the winning neuron determined above, the learning rate η(t), the neighborhood distance function hci(t) and the neighborhood radius r(t), a winning neighborhood is determined on the competition layer with the winning neuron c as the center and r as the radius, and the weights of the neurons in this neighborhood are updated to different degrees according to the following formula:
wi(t+1) = wi(t) + η(t)*hci(t)*[xi - wi(t)]
where t denotes the iteration index, η(t) denotes the learning rate (a function that decreases as training proceeds), and hci(t) denotes the neighborhood distance function. Given a threshold ε for the maximum change of the class-center weight vectors, training ends when Δw < ε; the competition-layer weight matrix formed when the neural-network training ends constitutes the feature space of the training samples.
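Putting the pieces together, a sketch of the whole training loop with the adaptive stopping criterion could look like this; the specific schedules for η and r as functions of Δw, the Gaussian neighborhood falloff, and the safety cap on the number of rounds are all assumptions, since the corresponding formulas appear only as images in the original:

import numpy as np

def train_som(X: np.ndarray, W0: np.ndarray, coords: np.ndarray,
              eta_max: float = 0.5, r_max: float = 3.0,
              eps: float = 1e-4, max_rounds: int = 10000, rng=None) -> np.ndarray:
    rng = np.random.default_rng() if rng is None else rng
    W = W0.copy()
    dw = 10000.0                                    # large initial value, as in the text
    for _ in range(max_rounds):                     # safety cap only; stopping is adaptive
        eta = eta_max * (1.0 - np.exp(-dw))         # assumed schedule, increasing in dw
        radius = max(r_max * (1.0 - np.exp(-dw)), 1e-3)
        W_prev = W.copy()
        for x in X[rng.permutation(len(X))]:        # present the samples in random order
            d = np.linalg.norm(W - x, axis=1)
            bmu = int(np.argmin(d))
            gdist = np.linalg.norm(coords - coords[bmu], axis=1)
            hood = gdist <= radius
            h = np.exp(-(gdist[hood] ** 2) / (2.0 * radius ** 2))
            W[hood] += eta * h[:, None] * (x - W[hood])
        dw = float(np.max(np.abs(W - W_prev)))      # max change of the class-center weights
        if dw < eps:                                # adaptive stopping criterion
            break
    return W

Once trained, the competition-layer weight matrix W plays the role of the feature space of the training samples, i.e. the quantization model.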
In the method for quantizing the effect of a fused image provided by this embodiment, the computer device calculates the Euclidean distances between the training samples and each neuron in the neural network according to the initial weight matrix, determines the neuron with the smallest Euclidean distance as the winning neuron, determines the winning neighborhood according to the winning neuron and the neighborhood radius, and updates the weights of the neurons in the winning neighborhood according to the preset adaptive stopping criterion to obtain the feature space of the training sample set. Since the training sample set is composed of the feature vectors of a plurality of fused images with good fusion effect, and the quantization model is then built from this training sample set, the feature vector of each fused image serves as the input when the quantization model is used to evaluate the fusion effect; the multiple purposes of the fused image are considered comprehensively (the image is evaluated on multiple items of feature information), evaluating the fusion effect from only a single purpose of the fused image is avoided, and a comprehensive evaluation result is provided.
In an embodiment, a method for quantifying the effect of the fused image is provided as shown in fig. 5, and the embodiment relates to a specific process of quantifying the effect of the fused image to be tested by a computer device according to the quantification model. As shown in fig. 5, one implementation manner of S103 includes:
s401, calculating Euclidean distances between a sample to be tested and each weight vector of the quantitative model; the sample to be tested is obtained according to the feature vector of the fused image to be tested.
In this embodiment, the sample to be tested may be a sample composed of the feature vector of a randomly selected fused image to be tested, the fused image to be tested being the fused image to be evaluated. The feature vector of the fused image corresponding to the sample to be tested is taken as the input of the quantization model; the distribution of the sample to be tested is obtained by computation in the quantization model constructed in step S102, and the Euclidean distances between the sample to be tested and each weight vector of the training sample set (the quantization model is formed by training on the training sample set) are then calculated according to this distribution.
S402, determining the minimum Euclidean distance as a first quantization value of the fused image effect to be tested.
Based on the Euclidean distances between the sample to be tested and the weight vectors of the quantization model determined in S401, the computer device determines the minimum Euclidean distance; the fused image to be tested can then be evaluated, that is, the larger the Euclidean distance MQE, the worse the fusion effect of the sample, and conversely the smaller it is, the better the fusion effect. The computer device may determine the minimum Euclidean distance by sorting all the Euclidean distances in ascending order and taking the first one as the minimum; this embodiment does not limit the way in which the minimum Euclidean distance is determined.
Illustratively, assume again that the sample to be tested is x = [x1, x2, ..., xm] and that the weight matrix of the trained neural network is W = [w1, w2, ..., wK]. According to MQE = ||x - wBMU||, the Euclidean distances between the sample x to be tested and all weight vectors in the weight matrix W are calculated, and the minimum Euclidean distance is taken as the index for evaluating the effect of the fused image; it is defined as the minimum quantization error, i.e. the first quantization value. Here MQE is the minimum quantization error and wBMU is the weight vector of the winning neuron (best matching unit). The larger the MQE, the worse the fusion effect of the sample; conversely, the smaller it is, the better the fusion effect.
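The first quantization value then follows directly from the trained weight matrix; a brief sketch:

import numpy as np

def minimum_quantization_error(x: np.ndarray, W: np.ndarray) -> float:
    # MQE: the smallest Euclidean distance from the test sample to any weight vector,
    # i.e. its distance to the winning neuron (best matching unit).
    return float(np.min(np.linalg.norm(W - x, axis=1)))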
Optionally, another implementation of quantifying the effect of the fused image to be tested according to the quantization model in step S103 may further include: mapping the training sample set and the sample to be tested onto a two-dimensional plane to form a network hit map, and quantifying the effect of the fused image according to the network hit map. In this embodiment, the distribution of the samples to be tested in the quantization model and the distribution of the training sample set in the quantization model are mapped onto a two-dimensional plane, that is, onto the neurons of the competition layer, to form a network hit map, as shown in fig. 5.1, where region 1 indicates the concentrated region onto which the training sample set is mapped; this embodiment does not limit the mapping method. From the distribution of the samples mapped onto the two-dimensional plane it can be seen visually that the training samples are concentrated together, and the samples to be tested can be evaluated according to fig. 5.1: the farther a sample to be tested is from the training sample set, the worse its fusion effect, and the closer it is, the better the effect.
In the quantization method for fused-image effect provided by this embodiment, the Euclidean distances between the sample to be tested and each weight vector of the quantization model are calculated, and the minimum Euclidean distance is determined as the first quantization value of the effect of the fused image to be tested. In addition, the training sample set and the sample to be tested can be mapped onto a two-dimensional plane to form a network hit map, and the effect of the fused image can be quantified according to the network hit map. The effect of the fused image can thus be expressed both through data and, visually, through a graphic representation, which makes the ways of evaluating the fusion effect more diverse.
Considering that the MQE just describes the distance of each sample to be tested relative to the feature space of the training sample set, it is only a distance value, and the degree of the fusion effect of the current sample cannot be reflected very intuitively. Optionally, as shown in fig. 6, the manner in which the computer device quantizes the fused image effect to be tested according to the quantization model may further include:
s501, calculating the confidence of the average level of the first quantized value of the sample to be tested relative to the first quantized value of the training sample set.
Considering that the distance between a sample with a good fusion effect and the trained sample space should be very small or close to 0, the MQE of the training samples should likewise be close to 0. Therefore, the confidence of the MQE of the sample to be tested relative to the MQE of the training sample set can be determined on the basis of the MQE values of the training sample set.
In this embodiment, the samples to be tested may be a set composed of a plurality of randomly selected fused images, these fused images being the ones to be evaluated. The confidence of the first quantization value of the sample to be tested relative to the average level of the first quantization values of the training sample set is calculated on the basis of the MQE determined in step S402. Likewise, the computer device first calculates the feature vector (the four feature-information items of signal-to-noise ratio, mean value, information entropy and sharpness) of each fused image among the samples to be tested, calculates the Euclidean distances between the sample to be tested and each weight vector of the training sample set (the quantization model is formed by training on the training sample set) according to the feature vector, and determines the minimum Euclidean distance MQE. The computer device then determines, from the MQE, a confidence between the quantization value of the sample to be tested and the average quantization value of the training sample set. The way in which the computer device determines the confidence is not limited in this embodiment; for example, it may be determined as follows.
The confidence CV is calculated from the MQE and a proportional parameter c0, where CV represents the confidence of the fusion effect relative to the baseline, MQE is the minimum quantization error of the sample to be tested or of the training sample set, and c0 is a proportional parameter chosen so that the CV value of the reference samples is close to 1 (FIG. 6.1 shows the CV curves obtained with and without setting the scale parameter c0). It should be noted that, since the training sample set contains more than one sample, a plurality of MQE values are obtained when the MQE of the training sample set is calculated; the reference MQE is therefore the average of these MQE values.
S502, determining the confidence coefficient as a second quantization value of the fused image effect to be tested.
In this step, the confidence determined in S501 above is determined as the second quantization value of the fused image effect to be tested; that is, the confidence is used to evaluate the fusion effect of the sample to be tested. The confidence value ranges from 0 to 1: a larger value indicates that the sample to be tested is closer to the reference (the training sample set), i.e., the fusion effect is better, and conversely a smaller value indicates a worse fusion effect.
In the method for quantifying the fused image effect provided by this embodiment, the confidence between the quantized values of the training sample set and the quantized value of the sample to be tested is calculated and determined as the second quantization value of the fused image effect to be tested. Using this confidence to express the final fusion evaluation result makes the result a degree value, so that the fused image effect can be evaluated more comprehensively and intuitively.
It should be understood that although the steps in the flowcharts of figs. 2-6 are shown in an order indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated otherwise herein, the execution of these steps is not strictly limited in order, and they may be performed in other orders. Moreover, at least some of the steps in figs. 2-6 may include multiple sub-steps or stages that are not necessarily performed at the same time but may be performed at different times, and these sub-steps or stages are not necessarily performed sequentially, but may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
In an embodiment, fig. 7 is a schematic structural diagram of an apparatus for quantifying a fused image effect provided in an embodiment. As shown in fig. 7, the apparatus includes: an acquisition module 10, a construction module 11 and a quantification module 12.
An obtaining module 10, configured to obtain feature vectors of a plurality of fused images; the feature vector is used for representing feature information of the fused image;
the construction module 11 is configured to construct a quantization model according to a preset self-organizing mapping neural network algorithm and the feature vector;
and the quantification module 12 is used for quantifying the fused image effect to be tested according to the quantification model.
The implementation principle and technical effect of the quantization apparatus for fusing image effects provided by the above embodiment are similar to those of the above embodiment, and are not described herein again.
In an embodiment, fig. 8 is a schematic structural diagram of an apparatus for quantifying a fused image effect provided in an embodiment. As shown in fig. 8, the construction module 11 includes: a generating unit 111, an initializing unit 112, an updating unit 113 and a construction unit 114.
A generating unit 111, configured to generate a training sample set according to the feature vector; the training sample set is a data set obtained according to the feature vectors of the multiple fused images;
an initializing unit 112, configured to initialize an initial weight matrix of the neural network according to the training sample set;
an updating unit 113, configured to update the initial weight matrix according to a learning rate of the neural network, a neighborhood radius, and a preset adaptive stop criterion, to obtain a feature space of the training sample set;
a construction unit 114 configured to determine the quantization model according to the feature space.
The implementation principle and technical effect of the quantization apparatus for fusing image effects provided by the above embodiment are similar to those of the above embodiment, and are not described herein again.
In an embodiment, the initialization unit 112 is specifically configured to obtain a value range of each feature vector in the training sample set; and uniformly and randomly distributing the values in the value range to each neuron of the neural network competition layer to obtain the initial weight matrix.
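A minimal sketch of this initialization, assuming a NumPy training matrix and an arbitrary number of competition-layer neurons, could look as follows; the names and map size are illustrative only.

```python
import numpy as np

def init_weight_matrix(train, n_neurons, seed=0):
    """Initialize the competition-layer weights uniformly at random
    within the per-feature value range of the training sample set.

    train: (n_samples, n_features) training feature vectors.
    Returns an (n_neurons, n_features) initial weight matrix.
    """
    rng = np.random.default_rng(seed)
    lo, hi = train.min(axis=0), train.max(axis=0)   # value range of each feature
    return rng.uniform(lo, hi, size=(n_neurons, train.shape[1]))
```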
In an embodiment, the construction unit 114 is specifically configured to calculate, according to the initial weight matrix, the Euclidean distances between the training sample set and each neuron in the neural network; determine the neuron with the minimum Euclidean distance as the winning neuron; determine a winning neighborhood according to the winning neuron and the neighborhood radius; and update the weights of the neurons in the winning neighborhood according to the adaptive stopping criterion to obtain the feature space of the training sample set.
In one embodiment, the adaptive stopping criterion includes that the maximum variation of the class-center weight vectors in the updated weight matrix is smaller than a preset threshold.
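Putting the winning neuron, winning neighborhood, weight update and adaptive stop together, a compact training-loop sketch might look as follows. The grid layout, the learning-rate and radius schedules, and the use of all weight vectors (rather than only the class-center weight vectors) in the stopping test are assumptions made for brevity, not details taken from the patent.

```python
import numpy as np

def train_som(train, grid=(3, 3), lr=0.5, radius=1.0, eps=1e-4, max_epochs=200, seed=0):
    """Sketch of self-organizing-map training with an adaptive stop:
    iterate until the maximum weight change drops below `eps`."""
    rng = np.random.default_rng(seed)
    n_feat = train.shape[1]
    coords = np.array([(r, c) for r in range(grid[0]) for c in range(grid[1])])
    lo, hi = train.min(axis=0), train.max(axis=0)
    w = rng.uniform(lo, hi, size=(len(coords), n_feat))       # initial weight matrix

    for _ in range(max_epochs):
        w_prev = w.copy()
        for x in train:
            d = np.linalg.norm(w - x, axis=1)                  # distance to every neuron
            bmu = d.argmin()                                   # winning neuron
            in_hood = np.linalg.norm(coords - coords[bmu], axis=1) <= radius
            w[in_hood] += lr * (x - w[in_hood])                # update winning neighborhood
        if np.abs(w - w_prev).max() < eps:                     # adaptive stopping criterion
            break
        lr *= 0.95                                             # simple decay schedules (assumed)
        radius = max(radius * 0.95, 0.5)
    return w
```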
In an embodiment, fig. 9 is a schematic structural diagram of an apparatus for quantifying a fused image effect provided in an embodiment. As shown in fig. 9, the quantification module 12 includes: a calculation unit 121 and a determination unit 122.
A calculating unit 121, configured to calculate euclidean distances between the sample to be tested and each weight vector of the quantization model; the sample to be tested is obtained according to the feature vector of the fused image to be tested;
a determining unit 122, configured to determine the minimum euclidean distance as the first quantization value of the fused image effect to be tested.
The implementation principle and technical effect of the quantization apparatus for fusing image effects provided by the above embodiment are similar to those of the above embodiment, and are not described herein again.
In an embodiment, fig. 10 is a schematic structural diagram of an apparatus for quantifying a fused image effect provided in an embodiment. As shown in fig. 10, the apparatus further includes: a calculation module 13 and a determination module 14.
A calculating module 13, configured to calculate a confidence level of the first quantized value of the sample to be tested with respect to the average level of the first quantized values of the training sample set;
a determining module 14, configured to determine the confidence as a second quantization value of the fused image effect to be tested.
The implementation principle and technical effect of the quantization apparatus for fusing image effects provided by the above embodiment are similar to those of the above embodiment, and are not described herein again.
For the specific limitations of the apparatus for quantifying a fused image effect, reference may be made to the limitations on the method for quantifying a fused image effect above, and details are not repeated here. Each module in the above apparatus may be implemented wholly or partially by software, hardware, or a combination thereof. The modules may be embedded in or independent of a processor in the computer device in the form of hardware, or may be stored in a memory of the computer device in the form of software, so that the processor can call and execute the operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a terminal, and its internal structure diagram may be as shown in fig. 11. The computer device includes a processor, a memory, a network interface, a display screen, and an input device connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a method of fused image effect quantification. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, a key, a track ball or a touch pad arranged on the shell of the computer equipment, an external keyboard, a touch pad or a mouse and the like.
Those skilled in the art will appreciate that the structure shown in fig. 11 is merely a block diagram of part of the structure related to the solution of the present application and does not limit the computer device to which the solution is applied; a particular computer device may include more or fewer components than those shown, may combine certain components, or may have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory storing a computer program, and the processor implementing the following steps when executing the computer program:
acquiring feature vectors of a plurality of fusion images; the feature vector is used for representing feature information of the fused image;
constructing a quantitative model according to a preset self-organizing mapping neural network algorithm and the characteristic vector;
and quantifying the effect of the fusion image to be tested according to the quantification model.
The implementation principle and technical effect of the computer device provided by the above embodiment are similar to those of the above method embodiment, and are not described herein again.
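As an illustration of the four pieces of feature information named in these steps, the sketch below computes a signal-to-noise ratio, mean value, information entropy and sharpness from a grayscale fused image using common textbook definitions; the patent's own definitions appear in the earlier embodiments and may differ, so the formulas here are assumptions for illustration only.

```python
import numpy as np

def feature_vector(img):
    """Illustrative 4-dimensional feature vector of a grayscale fused image.
    These are common definitions, not necessarily those used by the patent."""
    img = img.astype(np.float64)
    mean = img.mean()
    snr = mean / (img.std() + 1e-12)                      # simple mean/std signal-to-noise ratio
    hist, _ = np.histogram(img, bins=256, range=(0, 256)) # assumes 8-bit gray levels
    p = hist / hist.sum()
    entropy = -(p[p > 0] * np.log2(p[p > 0])).sum()       # information entropy of gray levels
    gy, gx = np.gradient(img)
    sharpness = np.sqrt(gx**2 + gy**2).mean()             # average gradient as sharpness
    return np.array([snr, mean, entropy, sharpness])
```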
In one embodiment, a computer-readable storage medium is provided, having a computer program stored thereon, which when executed by a processor, performs the steps of:
acquiring feature vectors of a plurality of fusion images; the feature vector is used for representing feature information of the fused image;
constructing a quantitative model according to a preset self-organizing mapping neural network algorithm and the characteristic vector;
and quantifying the effect of the fusion image to be tested according to the quantification model.
The implementation principle and technical effect of the computer-readable storage medium provided by the above embodiments are similar to those of the above method embodiments, and are not described herein again.
It will be understood by those skilled in the art that all or part of the processes in the methods of the above embodiments may be implemented by a computer program instructing the relevant hardware; the computer program may be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the embodiments of the methods described above. Any reference to memory, storage, a database, or another medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of the technical features of the above embodiments are described; however, as long as there is no contradiction in a combination of these technical features, it should be considered to be within the scope of this specification.
The above embodiments merely express several implementations of the present application, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the invention patent. It should be noted that, for those of ordinary skill in the art, several variations and improvements can be made without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. A method for quantifying fused image effects, the method comprising:
acquiring feature vectors of a plurality of fusion images; the feature vector is used for representing feature information of the fused image;
constructing a quantitative model according to a preset self-organizing mapping neural network algorithm and the characteristic vector;
quantifying the effect of the fusion image to be tested according to the quantification model;
wherein, the quantifying the effect of the fused image to be tested according to the quantification model comprises:
calculating Euclidean distances between a sample to be tested and each weight vector of the quantitative model; determining the minimum Euclidean distance as a first quantization value of the fused image effect to be tested; by the formula
[Formula image FDA0002879112320000011: CV expressed in terms of MQE and the scale parameter c0]
calculating the confidence of the first quantized value of the sample to be tested relative to the average level of the first quantized values of the training sample set, and determining the confidence as a second quantization value of the fused image effect to be tested; the sample to be tested is obtained according to the feature vector of the fused image to be tested; wherein CV in the formula represents the confidence of the first quantized value of the sample to be tested relative to the average level of the first quantized values of the training sample set, MQE is the first quantized value of the sample to be tested, and c0 is a scale parameter that makes the CV calculated from the first quantized value of the training sample set closest to 1.
2. The method according to claim 1, wherein the constructing a quantitative model according to a preset self-organizing map neural network algorithm and the feature vector comprises:
generating a training sample set according to the feature vector; the training sample set is a data set obtained according to the feature vectors of a plurality of fusion images with good fusion effects;
initializing an initial weight matrix of the self-organizing mapping neural network according to the training sample set;
updating the initial weight matrix according to the learning rate of the self-organizing mapping neural network, the neighborhood radius and a preset self-adaptive stopping criterion to obtain a feature space of the training sample set;
and determining the quantization model according to the feature space.
3. The method of claim 2, wherein initializing an initial weight matrix of a neural network according to the training sample set comprises:
obtaining the value range of each feature vector in the training sample set;
and uniformly and randomly distributing the values in the value range to each neuron of the neural network competition layer to obtain the initial weight matrix.
4. The method according to claim 2 or 3, wherein the updating the initial weight matrix according to the learning rate of the neural network, the neighborhood radius and a preset adaptive stop criterion to obtain the feature space of the training sample set comprises:
calculating Euclidean distances between the training sample set and each neuron in the neural network according to the initial weight matrix;
determining the neuron with the minimum Euclidean distance as a winning neuron;
determining a winning neighborhood according to the winning neuron and the neighborhood radius;
and updating the weights of the neurons in the winning neighborhood according to the adaptive stopping criterion to obtain the feature space of the training sample set.
5. The method according to claim 2 or 3, wherein the adaptive stopping criterion comprises that the maximum variation of the class-center weight vector in the updated weight matrix is smaller than a preset threshold.
6. The method of any of claims 1-3, wherein quantifying fused image effects to be tested according to the quantification model further comprises:
mapping the training sample set and the sample to be tested to a two-dimensional plane to form a network hit map; and quantifying the effect of the fused image according to the network hit map.
7. The method according to any one of claims 1-3, wherein the feature vector comprises a signal-to-noise ratio, a mean value, an information entropy, and a sharpness.
8. A fused image effect quantization apparatus, characterized in that the apparatus comprises:
the acquisition module is used for acquiring the feature vectors of a plurality of fusion images; the feature vector is used for representing feature information of the fused image;
the construction module is used for constructing a quantitative model according to a preset self-organizing mapping neural network algorithm and the characteristic vector;
the quantification module is used for quantifying the effect of the fusion image to be tested according to the quantification model;
the quantization module is specifically used for calculating Euclidean distances between a sample to be tested and each weight vector of the quantization model; determining the minimum Euclidean distance as a first quantization value of the fused image effect to be tested; by the formula
[Formula image FDA0002879112320000031: CV expressed in terms of MQE and the scale parameter c0]
calculating the confidence of the first quantized value of the sample to be tested relative to the average level of the first quantized values of the training sample set, and determining the confidence as a second quantization value of the fused image effect to be tested; the sample to be tested is obtained according to the feature vector of the fused image to be tested; wherein CV in the formula represents the confidence of the first quantized value of the sample to be tested relative to the average level of the first quantized values of the training sample set, MQE is the first quantized value of the sample to be tested, and c0 is a scale parameter that makes the CV calculated from the first quantized value of the training sample set closest to 1.
9. A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the steps of the method of any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 7.
CN201811178810.0A 2018-10-10 2018-10-10 Fusion image effect quantification method and device, computer equipment and storage medium Active CN109493319B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811178810.0A CN109493319B (en) 2018-10-10 2018-10-10 Fusion image effect quantification method and device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811178810.0A CN109493319B (en) 2018-10-10 2018-10-10 Fusion image effect quantification method and device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN109493319A CN109493319A (en) 2019-03-19
CN109493319B true CN109493319B (en) 2021-06-22

Family

ID=65690249

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811178810.0A Active CN109493319B (en) 2018-10-10 2018-10-10 Fusion image effect quantification method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN109493319B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112449175B (en) * 2019-08-29 2022-05-17 浙江宇视科技有限公司 Image splicing test method, device, equipment and storage medium
CN111475532B (en) * 2020-03-05 2023-11-03 拉扎斯网络科技(上海)有限公司 Data processing optimization method and device, storage medium and terminal
CN111798414A (en) * 2020-06-12 2020-10-20 北京阅视智能技术有限责任公司 Method, device and equipment for determining definition of microscopic image and storage medium
CN113674157B (en) * 2021-10-21 2022-02-22 广东唯仁医疗科技有限公司 Fundus image stitching method, computer device and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101334893A (en) * 2008-08-01 2008-12-31 天津大学 Fused image quality integrated evaluating method based on fuzzy neural network
CN106910192A (en) * 2017-03-06 2017-06-30 长沙全度影像科技有限公司 A kind of image syncretizing effect appraisal procedure based on convolutional neural networks

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9710714B2 (en) * 2015-08-03 2017-07-18 Nokia Technologies Oy Fusion of RGB images and LiDAR data for lane classification

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101334893A (en) * 2008-08-01 2008-12-31 天津大学 Fused image quality integrated evaluating method based on fuzzy neural network
CN106910192A (en) * 2017-03-06 2017-06-30 长沙全度影像科技有限公司 A kind of image syncretizing effect appraisal procedure based on convolutional neural networks

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on Clustering Algorithms Based on Self-Organizing Feature Map Networks; Wu Hongyan; China Excellent Master's and Doctoral Dissertations Full-text Database, Information Science and Technology Series; 2006-12-15; I138-918, pp. 39-60 *

Also Published As

Publication number Publication date
CN109493319A (en) 2019-03-19

Similar Documents

Publication Publication Date Title
CN109493319B (en) Fusion image effect quantification method and device, computer equipment and storage medium
CN111507343B (en) Training of semantic segmentation network and image processing method and device thereof
CN111780763B (en) Visual positioning method and device based on visual map
WO2020108474A1 (en) Picture classification method, classification identification model generation method and apparatus, device, and medium
US20210182613A1 (en) Image aesthetic processing method and electronic device
CN107423551B (en) Imaging method and imaging system for performing medical examinations
JP2022502751A (en) Face keypoint detection method, device, computer equipment and computer program
CN108334733B (en) Medical image display method, display system and computer-readable storage medium
DE102017006563A1 (en) Image patch matching using probability based sampling based on prediction
JP2020098587A (en) Object Shape Regression Using Wasserstein Distance
CN111723780A (en) Directional migration method and system of cross-domain data based on high-resolution remote sensing image
CN110866909A (en) Training method of image generation network, image prediction method and computer equipment
CN111144234A (en) Video SAR target detection method based on deep learning
CN113139462A (en) Unsupervised face image quality evaluation method, electronic device and storage medium
CN112257603A (en) Hyperspectral image classification method and related equipment
WO2022206729A1 (en) Method and apparatus for selecting cover of video, computer device, and storage medium
CN112101513A (en) Machine learning device
Chen et al. A novel face super resolution approach for noisy images using contour feature and standard deviation prior
CN111275059B (en) Image processing method and device and computer readable storage medium
CN107369138B (en) Image optimization display method based on high-order statistical model
US11699108B2 (en) Techniques for deriving and/or leveraging application-centric model metric
CN113221645A (en) Target model training method, face image generation method and related device
CN110275895B (en) Filling equipment, device and method for missing traffic data
Sun et al. An improved cuckoo search algorithm for multi-level gray-scale image thresholding
CN115859765B (en) Urban expansion prediction method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CP02 Change in the address of a patent holder

Address after: 430206 Lianying medical headquarters base, No. 99, gaokeyuan Road, Donghu high tech Development Zone, Wuhan, Hubei Province

Patentee after: WUHAN UNITED IMAGING HEALTHCARE Co.,Ltd.

Address before: B1-7, 818 Gaoxin Avenue, Donghu hi tech Development Zone, Hongshan District, Wuhan City, Hubei Province 430206

Patentee before: WUHAN UNITED IMAGING HEALTHCARE Co.,Ltd.

CP02 Change in the address of a patent holder