CN109493319A - Blending image effect quantization method, device, computer equipment and storage medium - Google Patents


Info

Publication number
CN109493319A
CN109493319A (application CN201811178810.0A)
Authority
CN
China
Prior art keywords
blending image
blending
tested
training sample
neural network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811178810.0A
Other languages
Chinese (zh)
Other versions
CN109493319B (en)
Inventor
沈强
曲杰
王莹珑
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan United Imaging Healthcare Co Ltd
Original Assignee
Wuhan United Imaging Healthcare Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan United Imaging Healthcare Co Ltd filed Critical Wuhan United Imaging Healthcare Co Ltd
Priority to CN201811178810.0A priority Critical patent/CN109493319B/en
Publication of CN109493319A publication Critical patent/CN109493319A/en
Application granted granted Critical
Publication of CN109493319B publication Critical patent/CN109493319B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/0002 - Inspection of images, e.g. flaw detection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/50 - Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20081 - Training; Learning
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20212 - Image combination
    • G06T2207/20221 - Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

This application relates to a blending image effect quantization method, a device, computer equipment and a storage medium. Computer equipment obtains the feature vectors of multiple blending images and constructs a quantitative model from these feature vectors using a preset self-organizing map neural network algorithm; the effect of a blending image to be tested is then quantified with the constructed model. Because the model's input is the feature vector of a blending image (four characteristic items: signal-to-noise ratio, mean value, information entropy and clarity), the multiple uses of the blending image are considered comprehensively (the evaluation draws on several characteristic items contained in the image) rather than judging the fusion effect from a single special purpose, so a comprehensive evaluation result is given. In addition, the quantized result is a degree value that directly reflects how good or poor the blending image effect is, which makes the evaluation result very intuitive.

Description

Blending image effect quantization method, device, computer equipment and storage medium
Technical field
This application relates to the technical field of blending images, and in particular to a blending image effect quantization method, a device, computer equipment and a storage medium.
Background technique
Medical image fusion can collect the respective information of several source images and show richer information. For example, when a positron emission tomography (PET) image and a computed tomography (CT) image are fused, a single image can clearly show both anatomical structure information and the metabolic information of the corresponding organ.
Because the medical field is sensitive to error, a medical blending image must not lose any useful information; otherwise incomplete image information may lead to misdiagnosis, so the evaluation of a blending image must be accurate, intuitive and effective. The effect of a blending image is influenced not only by the noise of the source images, the parameters of the fusion algorithm and the observer's region of interest, but also by the purpose for which the image is used. At present, the evaluation rules for medical blending images judge whether the amount of information is increased, whether noise is suppressed, whether suppression of noise in homogeneous areas is strengthened, whether edge information is preserved, and whether the image mean is improved. Based on these principles, current objective evaluation methods mainly include evaluation based on statistical properties (e.g. mean, standard deviation), evaluation based on information content (e.g. entropy), evaluation based on signal-to-noise ratio, evaluation based on gradient values (e.g. clarity), and evaluations based on fuzzy integrals, rough sets, evidence theory, convolutional neural networks, and so on.
However, each of the above objective evaluation methods evaluates only one selected purpose of the blending image; none of them can give a comprehensive evaluation result, and their evaluation results are not intuitive enough.
Summary of the invention
In view of the above, it is necessary to provide a blending image effect quantization method, a device, computer equipment and a storage medium, in order to solve the technical problem that the existing image fusion evaluation methods evaluate only one selected purpose of a blending image, cannot give a comprehensive evaluation result, and produce evaluation results that are not intuitive enough.
In a first aspect, an embodiment of the present invention provides a blending image effect quantization method. The method comprises:
obtaining the feature vectors of multiple blending images, where a feature vector characterizes the characteristic information of a blending image;
constructing a quantitative model according to a preset self-organizing map neural network algorithm and the feature vectors; and
quantifying the effect of a blending image to be tested according to the quantitative model.
In one of the embodiments, constructing the quantitative model according to the preset self-organizing map neural network algorithm and the feature vectors comprises:
generating a training sample set from the feature vectors, the training sample set being a data set obtained from the feature vectors of the multiple blending images;
initializing the initial weight matrix of the neural network according to the training sample set;
updating the initial weight matrix according to the learning rate of the neural network, the neighbourhood radius and a preset adaptive stopping criterion, to obtain the feature space of the training sample set; and
determining the quantitative model according to the feature space.
In one of the embodiments, initializing the initial weight matrix of the neural network according to the training sample set comprises:
obtaining the value range of each feature-vector attribute in the training sample set; and
distributing values within the value range uniformly at random to each neuron of the competition layer of the neural network, to obtain the initial weight matrix.
In one of the embodiments, updating the initial weight matrix according to the learning rate of the neural network, the neighbourhood radius and the preset adaptive stopping criterion, to obtain the feature space of the training sample set, comprises:
calculating, according to the initial weight matrix, the Euclidean distance between each sample in the training sample set and each neuron in the neural network;
determining the neuron with the smallest Euclidean distance as the winning neuron;
determining the winning neighbourhood according to the winning neuron and the neighbourhood radius; and
updating the weights of the neurons in the winning neighbourhood according to the adaptive stopping criterion, to obtain the feature space of the training sample set.
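The four steps above (distance computation, winner selection, neighbourhood determination, weight update) can be sketched as one training iteration of a self-organizing map. This is an illustrative sketch only, not the patent's implementation: the Gaussian neighbourhood function and the in-place update rule `w += lr * h * (x - w)` are common SOM choices assumed here, and `grid` holds each competition-layer neuron's position on the map.

```python
import numpy as np

def som_update_step(x, W, grid, lr=0.5, radius=1.0):
    """One SOM training iteration: compute Euclidean distances from
    sample x to every weight vector, pick the winning neuron, then
    pull every neuron in the winning neighbourhood toward x."""
    # Euclidean distance between sample x and every weight vector
    dists = np.linalg.norm(W - x, axis=1)
    winner = int(np.argmin(dists))               # smallest distance wins
    # neurons whose map position lies within `radius` of the winner
    grid_d = np.linalg.norm(grid - grid[winner], axis=1)
    mask = grid_d <= radius
    # Gaussian neighbourhood function: updates shrink away from the winner
    h = np.exp(-grid_d[mask] ** 2 / (2.0 * radius ** 2))
    W[mask] += lr * h[:, None] * (x - W[mask])
    return winner, W
```

In a full training loop the learning rate and radius would decay over iterations, and the loop would terminate when the adaptive stopping criterion described below is met.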
In one of the embodiments, the adaptive stopping criterion is that the maximum change of the class-centre weight vectors in the updated weight matrix is less than a preset threshold.
In one of the embodiments, quantifying the effect of the blending image to be tested according to the quantitative model comprises:
calculating the Euclidean distance between a sample to be tested and each weight vector of the quantitative model, the sample to be tested being obtained from the feature vector of the blending image to be tested; and
determining the smallest Euclidean distance as the first quantized value of the effect of the blending image to be tested.
In one of the embodiments, the method further comprises:
calculating the confidence of the first quantized value of the sample to be tested relative to the average level of the first quantized values of the training sample set; and
determining the confidence as the second quantized value of the effect of the blending image to be tested.
In a second aspect, an embodiment of the present invention provides a blending image effect quantization device. The device comprises:
an obtaining module, configured to obtain the feature vectors of multiple blending images, where a feature vector characterizes the characteristic information of a blending image;
a construction module, configured to construct a quantitative model according to a preset self-organizing map neural network algorithm and the feature vectors; and
a quantization module, configured to quantify the effect of a blending image to be tested according to the quantitative model.
In a third aspect, an embodiment of the present invention provides computer equipment comprising a memory and a processor, the memory storing a computer program. When executing the computer program, the processor performs the following steps:
obtaining the feature vectors of multiple blending images, where a feature vector characterizes the characteristic information of a blending image;
constructing a quantitative model according to a preset self-organizing map neural network algorithm and the feature vectors; and
quantifying the effect of a blending image to be tested according to the quantitative model.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium on which a computer program is stored. When the computer program is executed by a processor, the following steps are performed:
obtaining the feature vectors of multiple blending images, where a feature vector characterizes the characteristic information of a blending image;
constructing a quantitative model according to a preset self-organizing map neural network algorithm and the feature vectors; and
quantifying the effect of a blending image to be tested according to the quantitative model.
With the blending image effect quantization method, device, computer equipment and storage medium provided by the present application, computer equipment obtains the feature vectors of multiple blending images, constructs a quantitative model from them using a preset self-organizing map neural network algorithm, and then quantifies the effect of a blending image to be tested with the constructed model. Because the model's input is the feature vector of a blending image (four characteristic items: signal-to-noise ratio, mean value, information entropy and clarity), the method considers the multiple uses of the blending image comprehensively (evaluating from several characteristic items contained in the image) instead of judging the fusion effect from a single special purpose, and therefore gives a comprehensive evaluation result. In addition, the quantized result is a degree value that directly reflects how good or poor the blending image effect is, which makes the evaluation result very intuitive.
Brief description of the drawings
Fig. 1 is an application environment diagram of the blending image effect quantization method provided by one embodiment;
Fig. 2 is a schematic flow diagram of the blending image effect quantization method provided by one embodiment;
Fig. 3 is a schematic flow diagram of the blending image effect quantization method provided by one embodiment;
Fig. 4 is a schematic flow diagram of the blending image effect quantization method provided by one embodiment;
Fig. 5 is a schematic flow diagram of the blending image effect quantization method provided by one embodiment;
Fig. 5.1 is a schematic diagram of network hits for the blending image effect quantization method provided by one embodiment;
Fig. 6 is a schematic flow diagram of the blending image effect quantization method provided by one embodiment;
Fig. 6.1 is a schematic diagram of the CV curve corresponding to the scale parameter C0 provided by one embodiment;
Fig. 7 is a structural block diagram of the blending image effect quantization device provided by one embodiment;
Fig. 8 is a structural block diagram of the blending image effect quantization device provided by one embodiment;
Fig. 9 is a structural block diagram of the blending image effect quantization device provided by one embodiment;
Fig. 10 is a structural block diagram of the blending image effect quantization device provided by one embodiment;
Fig. 11 is an internal structure diagram of the computer equipment in one embodiment.
Detailed description of the embodiments
In order to make the objects, technical solutions and advantages of the present application clearer, the application is further elaborated below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here serve only to explain the application, not to limit it.
The blending image effect quantization method provided by the present application can be applied to the computer equipment shown in Fig. 1. The computer equipment may be a server, and its internal structure may be as shown in Fig. 1. The computer equipment includes a processor, a memory, a network interface and a database connected by a system bus. The processor provides computing and control capability. The memory includes a non-volatile storage medium and an internal memory; the non-volatile storage medium stores an operating system, a computer program and a database, while the internal memory provides an environment for running the operating system and the computer program. The database of the computer equipment stores the data of the blending image effect quantization method.
The embodiments of the present application provide a blending image effect quantization method, a device, computer equipment and a storage medium, aiming to solve the technical problem that the existing image fusion evaluation methods evaluate only one selected purpose, cannot give a comprehensive evaluation result, and produce results that are not intuitive enough. The technical solution of the present application, and how it solves the above technical problem, are described in detail below through embodiments with reference to the accompanying drawings. The following specific embodiments may be combined with each other, and the same or similar concepts or processes may not be repeated in some of them. It should be noted that the executing subject in the following embodiments is the computer equipment.
In one embodiment, Fig. 2 provides a blending image effect quantization method. This embodiment concerns the detailed process in which the computer equipment determines a quantitative model according to a preset self-organizing map neural network algorithm and the feature vectors of blending images, and thereby quantifies the blending image effect. As shown in Fig. 2, the method comprises:
S101: obtain the feature vectors of multiple blending images; a feature vector characterizes the characteristic information of a blending image.
The blending images may come from fields such as remote sensing, computer vision, weather forecasting, military target identification and medicine; for example, in the medical field, a positron emission tomography (PET) image may be fused with a computed tomography (CT) image. In terms of fusion level, a blending image may be produced by pixel-level fusion, feature-level fusion, decision-level fusion and so on; this embodiment places no limitation on the type, field or number of the blending images, which depend on actual demand. A feature vector characterizes the characteristic information of a blending image, such as the mean value, entropy, standard deviation, signal-to-noise ratio, average gradient (clarity) and other information by which the effect of the blending image can be evaluated. For example, information entropy is an important indicator of how rich the information in an image is; by comparing the information entropy of blending images, their detail expressiveness can be contrasted, and a larger information entropy indicates that the information content of the blending image has increased. This embodiment places no limitation on the specific content of the characteristic information: the feature vector may contain one or two of the items listed above, or more than two. The blending images may be calculated and rendered by a registration algorithm and a rendering-engine fusion algorithm; this embodiment places no limitation on the acquisition algorithm. The multiple blending images may be obtained from the local disk of the computer equipment, or from a Picture Archiving and Communication System (PACS) or a cloud server; this embodiment does not limit this.
Illustratively, take the PET/CT images of the medical field as the blending images, and let the feature vector consist of the four characteristic items signal-to-noise ratio, mean value, information entropy and clarity. In practical applications, the computer equipment obtains multiple PET/CT blending images and calculates the feature vector of each, that is, the four items above. The calculation methods can be as follows:
(1) Signal-to-noise ratio (SNR): the signal-to-noise ratio describes the ratio of useful information to noise in an image; in general, the higher the signal-to-noise ratio, the better the blending image effect. It is computed from the peak signal-to-noise ratio, which for a source image R and the fused image F can be written as

PSNR = 10 * lg( MAX^2 / ( (1/(M*N)) * sum_i sum_j (R(i, j) - F(i, j))^2 ) )

where MAX is the maximum grey value and M, N are the image width and height. In the formula, R(i, j) is the image pixel value at row i, column j of the CT or PET original image, and F(i, j) is the image pixel value at row i, column j of the PET/CT blending image. The two parameters R(i, j) and F(i, j) can be stored in advance in the image files and read directly when needed. In the calculation, the R(i, j) of the CT original image and of the PET original image are each substituted into the formula together with the F(i, j) of the PET/CT blending image, which yields the PSNR (peak signal-to-noise ratio) of the blending image against each of the two original images; the average of the two PSNR values is then taken as the signal-to-noise ratio (SNR) of the PET/CT blending image.
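As a rough illustration of this SNR feature, the following Python sketch averages the PSNR of the fused image against each of the two source images. The standard PSNR definition with a fixed peak value is assumed, since the patent's own formula image is not reproduced in the text.

```python
import numpy as np

def psnr(reference, fused, peak=255.0):
    """Peak signal-to-noise ratio (dB) between a source image R(i, j)
    and the fused image F(i, j); standard definition assumed."""
    mse = np.mean((reference.astype(float) - fused.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def fusion_snr(ct, pet, fused):
    """SNR feature of a PET/CT blending image: the average of the
    PSNR values computed against each of the two source images."""
    return 0.5 * (psnr(ct, fused) + psnr(pet, fused))
```

A fused image close to both sources scores higher than one that discards them, which matches the rule that a higher SNR indicates a better fusion effect.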
(2) Information entropy (H): the information entropy of an image is an important indicator of how rich its information is, and comparing information entropy contrasts the detail expressiveness of images. In general, the larger the information entropy of a blending image, the more its information content has increased. Its calculation formula is

H = -sum_{i=0}^{L-1} p_i * log2(p_i)

where L is the total number of grey levels of the PET/CT blending image and p_i is the probability of occurrence of pixels with grey level i in the blending image. Substituting L and p_i into the formula gives the information entropy (H) of the PET/CT blending image. L generally takes 16, 32 or 64; the higher the grey level, the richer the colour.
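The entropy computation can be sketched directly from a grey-level histogram. Here L = 64 bins over an assumed 8-bit grey range are used, one of the values the text suggests.

```python
import numpy as np

def image_entropy(img, levels=64):
    """Information entropy H = -sum(p_i * log2(p_i)), where p_i is
    the probability of grey level i over `levels` grey bins."""
    hist, _ = np.histogram(img, bins=levels, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                     # convention: 0 * log2(0) = 0
    return float(-np.sum(p * np.log2(p)))
```

A constant image has entropy 0, while an image whose pixels spread evenly over all 64 bins reaches the maximum log2(64) = 6 bits.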
(3) Clarity (G): also called the average gradient in some research fields, it indicates that the clarity of the image is measured with a gradient method. In general, a larger average gradient value means higher image clarity. Clarity reflects the improvement of image quality, and also reflects the contrast of minor details and the texture-transformation features in the image. Its calculation formula is

G = (1/n) * sum sqrt( (ΔIx^2 + ΔIy^2) / 2 )

where ΔIx and ΔIy are the first-order differences of the PET/CT blending image in the X and Y directions, taking the top-left corner of the image as the origin, i.e. the difference between adjacent pixels, Δy_x = y_{x+1} - y_x (x = 0, 1, 2, ...), and n is the number of pixels of the PET/CT blending image.
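The average-gradient computation can be sketched with first-order differences. A common form of the metric, sqrt((ΔIx^2 + ΔIy^2) / 2) averaged over pixels, is assumed here, as the patent's formula image is not reproduced in the text.

```python
import numpy as np

def clarity(img):
    """Average gradient: first-order differences along X and Y,
    combined per pixel as sqrt((dIx^2 + dIy^2) / 2) and averaged."""
    f = img.astype(float)
    dx = np.diff(f, axis=1)[:-1, :]  # ΔIx, cropped to a common shape
    dy = np.diff(f, axis=0)[:, :-1]  # ΔIy
    return float(np.mean(np.sqrt((dx ** 2 + dy ** 2) / 2.0)))
```

A uniform image scores 0, while an image of alternating 0/255 columns scores 255/sqrt(2), consistent with sharper detail producing a higher value.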
(4) Mean value (μ): the mean value is the average grey level of the pixels of the blending image and is perceived by the human eye as average brightness. If the mean value is moderate (pixel grey values near 128), the visual effect is good. Its calculation formula is

μ = (1/(M*N)) * sum_{i=1}^{M} sum_{j=1}^{N} F(i, j)

where M and N are the width and height (in pixels) of the PET/CT blending image, and F(i, j) is the image pixel value at row i, column j of the PET/CT blending image.
It should be noted that the calculation methods of the above signal-to-noise ratio, mean value, information entropy and clarity are not limited to the above formulas; signal-to-noise ratio, mean value, information entropy and clarity obtained from the above formulas or from transformations of them all fall within the scope of the embodiments of the present application.
S102: construct a quantitative model according to the preset self-organizing map neural network algorithm and the feature vectors.
The preset self-organizing map neural network algorithm can be an algorithm based on a self-organizing map neural network and configured according to the feature vectors of the blending images (the four characteristic items described above: signal-to-noise ratio, mean value, information entropy and clarity). For example, the configuration can be a preset initial weight matrix, a preset number of iterations for updating the weights, or other parameters; this embodiment does not limit this.
In general, assessing the blending image effect with a self-organizing map neural network is essentially a difference measurement, and such a measurement must first select a benchmark. The effect of a blending image is typically decided with respect to certain targets, such as obtaining a blending image with little noise, a clear image, a large amount of information, high resolution, and so on. Therefore a topology-preserving map, i.e. the quantitative model, can be trained with blending images whose fusion effect is good and used as the benchmark; the difference between every other blending image to be tested and the quantitative model is then measured against it.
Specifically, based on the feature vectors (signal-to-noise ratio, mean value, information entropy, clarity) of the multiple blending images calculated in step S101, and according to the preset self-organizing map neural network algorithm, the computer equipment can take the feature vectors of the blending images as the input of the neural network and start constructing the quantitative model. The quantitative model can be a self-organizing map neural network model built from the feature vectors of the blending images with good effect selected from the multiple blending images above.
S103: quantify the effect of the blending image to be tested according to the quantitative model.
It should be noted that quantifying the blending image effect in the embodiments of the present application can be understood as evaluating the blending image effect: the evaluation method quantifies each parameter that characterizes the effect, and the quantized result is a degree value, such as a confidence, from which the quality of the blending image effect can be seen intuitively.
In this embodiment, the feature vector of the blending image to be tested is input into the quantitative model, and the quantized value of the blending image to be tested is obtained through multiple iterations. The quantized value can be a confidence, the Euclidean distance between the blending image to be tested and the quantitative model, and so on. The blending image effect can be judged intuitively from the quantized value, and the quantized value can also be mapped into a two-dimensional or three-dimensional image format for display, so that the blending image effect can be observed intuitively.
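A minimal sketch of the distance-based quantized value described in the embodiments above: the smallest Euclidean distance between the test sample's feature vector and the weight vectors of the trained model. A small value means the image lies close to the "good fusion" benchmark.

```python
import numpy as np

def first_quantized_value(sample, W):
    """First quantized value of a blending image under test: the
    smallest Euclidean distance between its feature vector `sample`
    and the weight matrix W of the trained quantitative model."""
    return float(np.linalg.norm(W - sample, axis=1).min())
```

The further mapping of this degree value to a confidence (the second quantized value) is not spelled out in this part of the text and is therefore not sketched.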
With the blending image effect quantization method provided by this embodiment, the computer equipment obtains the feature vectors of multiple blending images, constructs a quantitative model according to the preset self-organizing map neural network algorithm and the feature vectors, and then quantifies the effect of the blending image to be tested according to the constructed quantitative model. Because the input of the model is the feature vector of a blending image, the method can consider the multiple uses of the blending image comprehensively, avoids evaluating the fusion effect from a single special purpose of the blending image, and gives a comprehensive evaluation result. In addition, the quantized result is a degree value that can directly reflect how good or poor the blending image effect is, so the evaluation result is very intuitive.
In one embodiment, Fig. 3 provides a blending image effect quantization method. This embodiment concerns the specific process in which the computer equipment constructs the quantitative model according to the preset self-organizing map neural network algorithm and the feature vectors. As shown in Fig. 3, one implementable mode of the above S102 includes:
S201: generate a training sample set from the feature vectors; the training sample set is a data set obtained from the feature vectors of the multiple blending images.
In this embodiment, the training sample set is a sample set composed of blending images with good fusion effect, in which each sample is the feature vector of one blending image, i.e. the four characteristic items signal-to-noise ratio, mean value, information entropy and clarity of that image. Specifically, from the blending images obtained in S101 and the calculated feature vectors of all blending images, the blending images with good feature vectors are selected as samples, and the selected good samples form the training sample set. The selection criterion can be judged according to the purpose of the images, for example images with clear structural information, accurate fusion position, abundant metabolic information and so on; it can also be determined by experts or experienced clinicians according to subjective experience. This embodiment does not limit the selection standard of the good samples.
S202 initializes the initial weight matrix of the self-organizing map neural network according to the training sample set.
It should be noted that the study of neural network is constantly to go to force from the network initial weight being randomly assigned The distribution of nearly training sample.The setting of initial weight has significant impact to network convergence, if initial weight distribution and target Differing distribution farther out, then the time learnt can be longer, in some instances it may even be possible to can not restrain, conversely, if initial weight relatively mesh Mark distribution, then, it will be able to rapidly restrain.And there are mainly two types of methods for currently used neural network initial weight setting: Given weight method and random weight method.Wherein, giving weight method is to set a fixed number directly to all weight vectors, Such as(dimension that n is input vector), setting, which may result in network, in this way to restrain;Random weight rule is will be initial Weight be set as it is random close to 0 decimal, it is this to set the purpose is to be well dispersed in weight vector in sample space It sets the sample set more concentrated relative to distribution and only adjusts the close weight vector of distance sample, and apart from farther away network weight It cannot adjust, result may be made to be polymerized to one kind in this way.
In order to guarantee that the setting of initial weight can make network convergence and all weight vectors can be adjusted, need to guarantee The similitude of initial weight and the input space guarantees the discreteness of initial weight again.Specifically, computer is set in the present embodiment It is standby first obtain training sample concentrate each blending image feature vector (as signal-to-noise ratio, mean value described in previous embodiment, Comentropy, clarity this four characteristic informations) value range, then by the uniformly random distribution of value within the scope of each feature vector To each neuron of competition layer, in this way, can both guarantee the value of initial weight and the similitude of training sample, random point With again weight vector can be guaranteed in the discreteness of sample space.
Optionally, one implementation of S202 is as follows: obtain the value range of each feature vector in the training sample set; uniformly and randomly distribute values within each value range to the neurons of the competition layer of the neural network to obtain the initial weight matrix. Here, the feature vector consists of the pieces of characteristic information of each fused image described in step S301. The uniform random distribution may allocate the characteristic information of each fused image randomly and uniformly according to each individual attribute of the feature vector. Because the initial weights are set entirely from the values of the training samples, the similarity between the two is guaranteed, the algorithm can converge quickly, and the operation speed of the network is greatly improved.
Illustratively, assume the training sample set is:

X = [x1, x2, ..., xn]^T

xi = [xi1, xi2, ..., xim]^T

where n is the number of samples in the training sample set and m is the number of pieces of characteristic information in the feature vector of a fused image. Let the number of input neurons be In, with In = m, and let the number of output neurons be K; K is generally taken according to an empirical formula based on n, and K must be an integer.
The specific steps for obtaining the initial weight matrix are then:

Step 1: let the maximum value of the j-th attribute over the feature vectors of the training sample set be MAXj and the minimum value be MINj, that is:

MAXj = max{xij | i = 1, 2, ..., n}, MINj = min{xij | i = 1, 2, ..., n}

Step 2: uniformly sample K values between MINj and MAXj, expressed as:

Zj = [z1j, z2j, ..., zKj], (j = 1, 2, ..., m)

where Zj is the vector obtained by uniform sampling over the value range of the j-th attribute of the feature vector.

Step 3: shuffle the order of the values in Zj and randomly assign them to the j-th components of the weight vectors of the K neurons. Carrying out this procedure for every attribute in turn yields:

wi = [zi1, zi2, ..., zij, ..., zim], (i = 1, 2, ..., K; j = 1, 2, ..., m)

where wi is the weight vector of the i-th neuron and zij is the value assigned to the j-th attribute of the i-th neuron.

The matrix formed by the wi obtained in this way is the initial weight matrix. Because the initial weight matrix is set according to the values in the training sample set, the similarity between the initial weight matrix and the training sample set is guaranteed while the neural network algorithm can still converge quickly, which greatly accelerates the training of the self-organizing map neural network.
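The sampling-and-shuffle initialization above can be sketched in a few lines of NumPy. This is an illustrative sketch rather than the patent's implementation: the function name, the toy feature values, and the choice of K are all assumptions.

```python
import numpy as np

def init_weights(X, K, seed=None):
    """Initial weight matrix: uniformly sample K values over each
    attribute's observed range, shuffle them, and assign them to the
    K competition-layer neurons (the sampling-and-shuffle scheme)."""
    rng = np.random.default_rng(seed)
    n, m = X.shape                  # n samples, m attributes
    W = np.empty((K, m))
    for j in range(m):
        lo, hi = X[:, j].min(), X[:, j].max()   # MINj, MAXj
        z = np.linspace(lo, hi, K)  # K values uniformly over [MINj, MAXj]
        rng.shuffle(z)              # random assignment to neurons
        W[:, j] = z
    return W

# toy training set: 6 fused images, 4 features
# (signal-to-noise ratio, mean value, information entropy, clarity)
X = np.array([[20.0, 0.5, 6.1, 0.8],
              [22.0, 0.6, 6.3, 0.7],
              [19.5, 0.4, 5.9, 0.9],
              [21.0, 0.5, 6.0, 0.8],
              [23.0, 0.7, 6.5, 0.6],
              [18.0, 0.3, 5.8, 0.9]])
W0 = init_weights(X, K=4, seed=0)
```

Each column of the resulting matrix spans exactly the observed range of the corresponding attribute, which is what ties the initial weights to the training distribution while keeping them dispersed.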
S203: update the initial weight matrix according to the learning rate of the neural network, the neighborhood radius, and a preset adaptive stopping criterion, to obtain the feature space of the training sample set.
In this embodiment, based on the initial weight matrix determined in step S202, the computer device trains on the training sample set according to the learning rate of the neural network, the neighborhood radius, and the preset adaptive stopping criterion, and stops training when the adaptive stopping criterion is satisfied; the result ultimately formed is the feature space of the training sample set. The learning rate and neighborhood radius of the neural network are monotonic functions of the iteration number and change as the iterations proceed.
It should be noted that, in the conventional self-organizing map neural network algorithm, a maximum number of training iterations T is usually specified manually from experience, and the learning rate and winning-neighborhood radius of the network vary with T according to formulas such as:

η(i) = ηmax − (i/T) * (ηmax − ηmin)

r(i) = rmax − (i/T) * (rmax − rmin)

where η(i) and r(i) are the learning rate and winning-neighborhood radius of the i-th iteration, ηmax and rmax are the maximum values of the learning rate and the winning-neighborhood radius, ηmin and rmin are their minimum values, and T is the manually set maximum number of iterations. Training stops once the network reaches the maximum number of iterations. If the number of iterations is too small, the training effect is not achieved; if it is too large, the amount of redundant computation increases greatly. To obtain a good training result, a large number of iterations is therefore usually set, which severely affects the computational efficiency of the algorithm.
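The conventional linear decay schedules η(i) and r(i) above can be written as a single helper; the numeric values of T, ηmax, ηmin, rmax, and rmin below are illustrative assumptions.

```python
def linear_schedule(i, T, v_max, v_min):
    """Conventional linear decay: v(i) = v_max - (i/T)*(v_max - v_min),
    falling from v_max at iteration 0 to v_min at iteration T."""
    return v_max - (i / T) * (v_max - v_min)

# illustrative parameter values for the learning rate and radius
eta = [linear_schedule(i, T=10, v_max=0.5, v_min=0.01) for i in range(11)]
r = [linear_schedule(i, T=10, v_max=3.0, v_min=1.0) for i in range(11)]
```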
In this embodiment, however, no maximum number of iterations is set, and the learning rate and winning-neighborhood radius of each iteration change accordingly: both are expressed as functions of Δw, the overall change of all cluster centres. At the first iteration Δw can be given a large value, such as 10000, so that the initial learning rate and winning-neighborhood radius are both large; as learning proceeds, Δw becomes smaller and smaller, and the learning rate and winning-neighborhood radius shrink with it, satisfying the requirement of coarse adjustment of the weights first and fine adjustment afterwards. The network is therefore judged to have finished automatically once the maximum weight change falls below some small value, and the learning rate and winning-neighborhood radius are automatically adjusted at each iteration according to the needs of the training. This guarantees the training effect with the minimum number of iterations and reasonably reduces the amount of computation.
In this embodiment, the preset adaptive stopping criterion is not a fixed maximum number of iterations; instead, it automatically determines the number of iterations according to the learning effect of the network itself, which both guarantees the learning effect and avoids blindly increasing the amount of computation. Specifically, during learning, each neuron of the competition layer is equivalent to a cluster centre; as the feature vectors of the training samples are continually fed into the input layer, the cluster centres continually approach the centre of each class of data. When the weights of the cluster centres change little as the iterations increase, the condition of the preset adaptive stopping criterion has been reached and the training of the neural network is essentially complete.
Optionally, the adaptive stopping criterion includes: the maximum variation of the cluster-centre weight vectors in the updated weight matrix is less than a preset threshold. Under this criterion, if the weight variations in the competition layer are all below the preset threshold, the weights of all cluster centres are no longer changing appreciably as the iterations increase, and the training of the neural network can be stopped.
Illustratively, because a cluster centre is represented by the weight vector of a competition-layer neuron, the change of the cluster centres can be expressed through the change of the weight vectors as:

ΔW = Wn − Wn−1

where ΔW is an M×N matrix describing the change of all cluster-centre weight vectors, Wn is the weight matrix after the current round of training, and Wn−1 is the weight matrix before it.

If the maximum weight variation in ΔW is less than some preset threshold ε, then every weight variation is less than the small value ε, which shows that all cluster centres change little as the iterations increase. The change of the cluster centres can therefore be described by the infinity norm of ΔW, that is:

Δw = ||ΔW||∞ = max|ΔWij|

where Δw is the element of maximum absolute value in ΔW, M is the number of input neurons, and N is the number of competition-layer neurons.
Therefore, an adaptive stopping criterion of a sufficiently small decrease can be used: given a small value ε, if Δw < ε, network learning is finished. Stopping the neural network automatically through this adaptive stopping criterion greatly accelerates the training of the self-organizing map neural network.
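A minimal sketch of this stopping test, assuming the weight matrices are held as NumPy arrays; the threshold and toy matrices are illustrative.

```python
import numpy as np

def should_stop(W_new, W_old, eps):
    """Adaptive stopping test: training stops when the element of
    maximum absolute value in dW = W_new - W_old falls below eps."""
    dw = np.abs(W_new - W_old).max()
    return bool(dw < eps)

# toy weight matrices before and after one round of training
W_old = np.array([[0.10, 0.20],
                  [0.30, 0.40]])
W_new = np.array([[0.1001, 0.2002],
                  [0.2999, 0.4001]])
```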
S204 determines the quantitative model according to the feature space.
Based on the feature space of the training sample set determined in step S203, the computer device determines the feature space of the training sample set as the corresponding quantitative model. The manner in which the computer device does so, and the algorithm used, are not limited by this embodiment.
In the blending image effect quantization method provided in this embodiment, the initial weight matrix of the neural network is initialized according to the training sample set; the initial weight matrix is updated according to the learning rate of the neural network, the neighborhood radius, and the preset adaptive stopping criterion to obtain the feature space of the training sample set; and the quantitative model is then determined from the feature space. Because the training sample set is composed of the feature vectors of multiple fused images with good fusion effect, and the quantitative model is built from this training sample set, the input when evaluating a fused image with the quantitative model is the feature vector of that image, which contains multiple pieces of characteristic information. The evaluation therefore considers the multiple uses of the fused image comprehensively, avoids evaluating the fusion effect from a single special purpose, and gives a comprehensive evaluation result.
In one embodiment, Fig. 4 provides a blending image effect quantization method. This embodiment concerns the specific process by which the computer device updates the initial weight matrix according to the learning rate of the neural network, the neighborhood radius, and the preset adaptive stopping criterion to obtain the feature space of the training sample set. As shown in Fig. 4, one implementation of S203 includes:
S301: according to the initial weight matrix, calculate the Euclidean distance between the training sample set and each neuron of the neural network.

In this embodiment, based on the initial weight matrix determined in step S202, the computer device calculates the Euclidean distance between the training sample set and each neuron of the neural network, where the training sample set is the sample set composed of fused images with good fusion effect described in the above embodiment. The computer device calculates the Euclidean distance between each training sample and each neuron separately. Illustratively, let the initialized weight matrix be:

W = [w1, w2, ..., wK]

wj = [wj1, wj2, ..., wjm], (j = 1, 2, ..., K)

Assuming some sample in the training sample set is xi, the Euclidean distance dij between sample xi and neuron j is calculated as:

dij = ||xi − wj|| = sqrt((xi1 − wj1)² + (xi2 − wj2)² + ... + (xim − wjm)²)
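The distance computation of S301 can be vectorized with NumPy broadcasting; the toy samples below are chosen so the distances are easy to verify by hand.

```python
import numpy as np

def euclidean_distances(X, W):
    """d[i, j] = ||xi - wj||: distance from sample i to neuron j,
    computed for all pairs at once via broadcasting."""
    return np.sqrt(((X[:, None, :] - W[None, :, :]) ** 2).sum(axis=2))

# toy data: the four pairwise distances are easy to check by hand
X = np.array([[0.0, 0.0],
              [3.0, 4.0]])
W = np.array([[0.0, 0.0],
              [3.0, 0.0]])
D = euclidean_distances(X, W)
```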
S302: determine the neuron with the smallest Euclidean distance as the winning neuron.

Based on the Euclidean distances between each training sample and the neurons of the neural network calculated by the computer device in step S301, the computer device determines the neuron with the smallest Euclidean distance as the winning neuron. To find the minimum, the computer device may first sort all the Euclidean distances from small to large and take the first-ranked distance as the minimum Euclidean distance, or it may use other methods; this embodiment does not limit this.

S303: determine the winning neighborhood according to the winning neuron and the neighborhood radius.

In this embodiment, based on the winning neuron (the neuron with the minimum Euclidean distance) determined in step S302 and the preset neighborhood radius, the computer device determines the region centred on the winning neuron with the neighborhood radius as its radius, and determines that region as the winning neighborhood.

S304: update the weights of the neurons in the winning neighborhood according to the adaptive stopping criterion, to obtain the feature space of the training sample set.

In this embodiment, the adaptive stopping criterion may be the one described in step S203: network training may be stopped when the maximum variation of the cluster-centre weight vectors in the updated weight matrix is less than the preset threshold. As long as the adaptive stopping criterion is not met, every round of training updates the weights of the neurons in the winning neighborhood, until the criterion is satisfied and training stops. After the neural network stops training, the feature space that has been formed is the feature space of the fused images in the training sample set fed in during training.
The specific process by which the computer device updates the initial weight matrix according to the learning rate of the neural network, the neighborhood radius, and the preset adaptive stopping criterion to obtain the feature space of the training sample set can be described with an example. As before, assume the training sample set is:

X = [x1, x2, ..., xn]^T

xi = [xi1, xi2, ..., xim]^T

where n is the number of samples in the training sample set and m is the number of attributes in the feature vector. Let the number of input neurons be In, with In = m, and the number of output neurons be K = M*N (an integer), where M and N are generally taken as close to each other as possible; the topology of the competition layer of the network can be a quadrilateral structure.
As stated above, the model parameters of the neural network include the weight matrix W, the learning rate η, and the neighborhood radius r, where η and r are both functions that decrease monotonically as the number of iterations increases. Let the initialized weight matrix be:

W = [w1, w2, ..., wK]

wj = [wj1, wj2, ..., wjm], (j = 1, 2, ..., K)

Assuming the input sample is xi, calculate the Euclidean distance dij between the input sample xi and neuron j. For the winning neuron c:

dic = min{dij}, (j = 1, 2, ..., K)
Because the number of iterations is unknown, the learning rate and neighborhood radius must still shrink as the iterations proceed, so that they are large in the early stage of training (coarse adjustment) and small in the later stage (fine adjustment), eventually falling to a small value that need not reach 0. The learning rate η(t), the neighborhood distance function hci(t), and the neighborhood radius r(t) are therefore expressed as functions of Δw, where ηmax is the given maximum learning rate, rmax is the given maximum neighborhood radius, Δw is the maximum variation of the cluster-centre weight vectors, and δci is the distance between neurons c and i.
According to the winning neuron determined above and the learning rate η(t), neighborhood distance function hci(t), and neighborhood radius r(t), the winning neighborhood is determined in the competition layer with the winning neuron c as the centre and r as the radius, and the weights of the neurons in the neighborhood are updated to different degrees according to:

wi(t+1) = wi(t) + η(t) * hci(t) * [x − wi(t)]

where t is the iteration number, η(t) is the learning rate (a function that decreases monotonically with t), hci(t) is the neighborhood distance function, and x is the current input sample. Given the threshold ε for the maximum variation of the cluster-centre weight vectors, training ends when Δw < ε; the weight matrix of the competition layer formed after the neural network finishes training constitutes the feature space of the training samples.
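The whole training procedure of S301-S304 can be sketched as follows. This is a hedged reconstruction: the source omits the exact formulas for η(t), hci(t), and r(t), so a Gaussian neighborhood and a simple Δw-proportional shrinking of the learning rate and radius are assumed, along with per-epoch shuffling of the sample order; the grid size, parameter values, and synthetic two-cluster data are likewise illustrative.

```python
import numpy as np

def train_som(X, grid=(3, 3), eta_max=0.5, r_max=2.0, eps=1e-4,
              max_epochs=200, seed=0):
    """Sketch of the adaptive SOM training loop of S301-S304: find the
    winning neuron for each sample, update the weights in its winning
    neighborhood, and stop once the largest weight change dw < eps.
    The Gaussian neighborhood and the dw-driven shrinking of eta and r
    are assumed forms (the source omits the exact formulas)."""
    rng = np.random.default_rng(seed)
    M, N = grid
    K = M * N
    n, m = X.shape
    # grid coordinates of the K competition-layer neurons
    coords = np.array([(i, j) for i in range(M) for j in range(N)], dtype=float)
    # sampling-and-shuffle initialization over each attribute's range
    W = np.empty((K, m))
    for j in range(m):
        z = np.linspace(X[:, j].min(), X[:, j].max(), K)
        rng.shuffle(z)
        W[:, j] = z
    dw = np.inf
    for _ in range(max_epochs):
        W_prev = W.copy()
        scale = min(1.0, dw)                 # coarse adjustment first, fine later
        eta = eta_max * scale
        r = max(r_max * scale, 0.5)
        for x in X[rng.permutation(n)]:      # shuffle sample order each epoch
            c = int(np.argmin(np.linalg.norm(W - x, axis=1)))   # winner
            d_grid = np.linalg.norm(coords - coords[c], axis=1)
            h = np.exp(-d_grid ** 2 / (2 * r ** 2)) * (d_grid <= r)
            W += eta * h[:, None] * (x - W)  # w(t+1) = w(t) + eta*h*(x - w)
        dw = np.abs(W - W_prev).max()        # maximum cluster-centre change
        if dw < eps:                         # adaptive stopping criterion
            break
    return W

rng = np.random.default_rng(1)
# synthetic training set: two tight clusters of 4-feature samples
X = np.vstack([rng.normal(0.0, 0.05, size=(20, 4)),
               rng.normal(1.0, 0.05, size=(20, 4))])
W = train_som(X)
```

After training, the competition-layer weight matrix W plays the role of the feature space of the training samples.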
In the blending image effect quantization method provided in this embodiment, the computer device first calculates, according to the initial weight matrix, the Euclidean distance between the training sample set and each neuron of the neural network; then determines the neuron with the smallest Euclidean distance as the winning neuron and determines the winning neighborhood according to the winning neuron and the neighborhood radius; and then updates the weights of the neurons in the winning neighborhood according to the preset adaptive stopping criterion to obtain the feature space of the training sample set. Because the training sample set is composed of the feature vectors of multiple fused images with good fusion effect, and the quantitative model is built from this training sample set, the feature vector of each fused image serves as the input when evaluating fusion effect with the quantitative model. The evaluation considers the multiple uses of the fused image comprehensively (the image contains multiple pieces of characteristic information), avoids evaluating the fusion effect from a single special purpose, and gives a comprehensive evaluation result.
In one embodiment, Fig. 5 provides a blending image effect quantization method. This embodiment concerns the specific process by which the computer device quantifies the effect of a fused image to be tested according to the quantitative model. As shown in Fig. 5, one implementation of S103 includes:

S401: calculate the Euclidean distance between the sample to be tested and each weight vector of the quantitative model; the sample to be tested is obtained from the feature vector of the fused image to be tested.

In this embodiment, the sample to be tested may be a sample composed of the feature vector of a randomly selected fused image to be tested, i.e., the fused image to be evaluated. The feature vector of the fused image corresponding to the sample to be tested is taken as the input of the quantitative model built in step S102, the distribution of the sample to be tested is calculated in the quantitative model, and the Euclidean distance between the sample to be tested and each weight vector of the training sample set (the quantitative model is formed by training on the training sample set) is then calculated according to that distribution.
S402: determine the smallest Euclidean distance as the first quantized value of the effect of the fused image to be tested.

Based on the Euclidean distances between the sample to be tested and each weight vector of the quantitative model determined in S401, the computer device determines the smallest Euclidean distance; the fused image to be tested can then be evaluated from it: the larger the MQE, the worse the fusion effect of the sample, and conversely, the better. To determine the minimum, the computer device may sort all the Euclidean distances from small to large and take the first-ranked distance as the smallest; this embodiment does not limit the way the smallest Euclidean distance is determined.

Illustratively, assume as before that the sample to be tested is x = [x1, x2, ..., xm] and the trained neural network weight matrix is W = [w1, w2, ..., wK]. According to MQE = ||x − wBMU||, the Euclidean distances between the sample x to be tested and all weight vectors in the weight matrix W are calculated, and the smallest Euclidean distance is taken as an index of the fused-image effect evaluation; it can be defined as the minimum quantization error, i.e., the first quantized value. Here MQE is the minimum quantization error and wBMU is the weight vector of the winning neuron. The larger the MQE, the worse the fusion effect of the sample; conversely, the better.
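The MQE index can be computed directly from the trained weight matrix; the toy weights and test samples below are illustrative.

```python
import numpy as np

def mqe(x, W):
    """Minimum quantization error: MQE = ||x - w_BMU||, the distance
    from sample x to its best-matching unit (nearest weight vector)."""
    return np.linalg.norm(W - x, axis=1).min()

# toy trained weight matrix and two test samples
W = np.array([[0.0, 0.0],
              [1.0, 1.0],
              [2.0, 0.0]])
x_good = np.array([1.02, 0.98])   # close to the feature space: low MQE
x_bad = np.array([5.0, 5.0])      # far from it: high MQE, poor fusion
```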
Optionally, another implementation of S103 (quantifying the effect of the fused image to be tested according to the quantitative model) may also include: mapping the training sample set and the sample to be tested onto a two-dimensional plane to form a network hit map, and quantifying the fusion effect according to the network hit map. In this embodiment, the distribution of the sample to be tested in the quantitative model and that of the training sample set are mapped onto a two-dimensional plane, i.e., onto the neurons of the competition layer, forming a network hit map, as shown in Fig. 5.1, where region 1 is the concentrated area onto which the training sample set is mapped (this embodiment does not limit the mapping method). From the distribution of the sample to be tested on the two-dimensional plane it can be seen intuitively that the training samples are concentrated together, and the sample to be tested can be evaluated according to Fig. 5.1: if the test sample and the training sample set are far apart, the fusion effect is poor; the closer the two are, the better the effect.
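A network hit map can be built by counting, for each competition-layer neuron, how many samples choose it as best-matching unit; the grid layout and toy data below are assumptions for illustration.

```python
import numpy as np

def hit_map(X, W, grid=(3, 3)):
    """Network hit map: count how many samples choose each
    competition-layer neuron as their best-matching unit."""
    M, N = grid
    hits = np.zeros((M, N), dtype=int)
    for x in X:
        c = int(np.argmin(np.linalg.norm(W - x, axis=1)))
        hits[c // N, c % N] += 1   # row-major grid position of neuron c
    return hits

# 9 neurons laid out on a 3x3 grid, toy 2-feature weights
W = np.array([[float(k), 0.0] for k in range(9)])
X = np.array([[0.1, 0.0], [0.2, 0.0], [8.1, 0.0]])
H = hit_map(X, W)
```

Plotting the training-set hits and the test-sample hits on the same grid gives the intuitive comparison described above: test samples landing inside the training set's concentrated region indicate good fusion.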
In the blending image effect quantization method provided in this embodiment, the Euclidean distance between the sample to be tested and each weight vector of the quantitative model is calculated, and the smallest Euclidean distance is then determined as the first quantized value of the effect of the fused image to be tested. Moreover, the training sample set and the sample to be tested can be mapped onto a two-dimensional plane to form a network hit map, and the fusion effect can be quantified according to the hit map. In this way, the fusion effect can be expressed both numerically and intuitively through the map, making the evaluation of the fusion effect more diversified.
Considering that the MQE above only describes the distance of each sample to be tested relative to the feature space of the training sample set, it is merely a distance value and cannot intuitively reflect how good or bad the current sample's fusion effect is. Optionally, as shown in Fig. 6, the way the computer device quantifies the effect of a fused image to be tested according to the quantitative model may also include:

S501: calculate the confidence of the first quantized value of the sample to be tested relative to the average level of the first quantized values of the training sample set.

Considering that the distance of a well-fused sample from the trained sample space should be very small, or close to 0, it is known that the MQE of the training samples should be close to 0. Therefore, taking the MQE values of the training sample set as the benchmark, the confidence of the MQE of the sample to be tested relative to the MQE of the training sample set can be sought.
In this embodiment, the sample to be tested may be a set composed of multiple randomly selected fused images, which are the fused images to be evaluated. To calculate the confidence of the first quantized value of the sample to be tested relative to the average level of the first quantized values of the training sample set, based on the MQE determined in step S402, the computer device first calculates the feature vector of each fused image among the samples to be tested (the four pieces of characteristic information: signal-to-noise ratio, mean value, information entropy, and clarity), calculates from this feature vector the Euclidean distances between the sample to be tested and each weight vector of the training sample set (the quantitative model is formed by training on the training sample set), and determines the minimum Euclidean distance as the MQE. The computer device then determines, according to the MQE, the confidence between the quantized value of the sample to be tested and the average level of the quantized values of the training sample set; this embodiment does not limit how the computer device determines the confidence. Illustratively, it can be determined as a function in which CV denotes the confidence relative to the benchmark fusion effect, MQE is the MQE of the sample to be tested or of the training sample set, and c0 is a scale parameter; a suitable c0 is chosen so that the CV value of the benchmark samples is closer to 1 (Fig. 6.1 compares the CV curves with and without the scale parameter c0). It should be noted that, because the training sample set contains more than one sample, multiple MQE values are obtained when calculating the MQE of the training sample set, so the benchmark MQE is the average of these MQE values.
S502: determine the confidence as the second quantized value of the effect of the fused image to be tested.

In this step, based on the confidence determined in S501, the confidence is determined as the second quantized value of the effect of the fused image to be tested, and the fusion effect of the sample to be tested is evaluated using this confidence. The value of the confidence ranges between 0 and 1; the larger the confidence, the closer the sample to be tested is to the benchmark (training sample set) state, i.e., the better the fusion effect, and conversely, the worse.
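The source does not give the exact CV formula, so the sketch below assumes an exponential mapping CV = exp(−MQE/c0), which matches the stated behavior (CV close to 1 for benchmark samples with small MQE, approaching 0 as MQE grows); the choice of c0 is likewise an illustrative assumption.

```python
import math

def confidence(mqe_value, c0):
    """Assumed confidence mapping CV = exp(-MQE / c0): CV -> 1 as
    MQE -> 0 (benchmark-like fusion) and CV -> 0 as the sample drifts
    away from the training feature space."""
    return math.exp(-mqe_value / c0)

# pick c0 so that the benchmark (mean training-set MQE) maps close to 1
baseline_mqe = 0.05
c0 = 10 * baseline_mqe
cv_base = confidence(baseline_mqe, c0)   # near 1: good fusion effect
cv_far = confidence(2.0, c0)             # near 0: poor fusion effect
```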
In the blending image effect quantization method provided in this embodiment, the confidence between the quantized values of the training sample set and the quantized value of the sample to be tested is calculated, and the confidence is determined as the second quantized value of the effect of the fused image to be tested. Using the confidence to characterize the final fusion evaluation result makes the evaluation result a degree value, so that the fusion effect can be evaluated more comprehensively and intuitively.
It should be understood that although the steps in the flow charts of Figs. 2-6 are shown in sequence as indicated by the arrows, these steps are not necessarily executed in that order. Unless expressly stated otherwise herein, there is no strict order restriction on the execution of these steps, and they may be executed in other orders. Moreover, at least some of the steps in Figs. 2-6 may include multiple sub-steps or stages, which are not necessarily completed at the same moment but may be executed at different times; their execution order is also not necessarily sequential, and they may be executed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
In one embodiment, Fig. 7 is a schematic structural diagram of a blending image effect quantization apparatus provided by an embodiment. As shown in Fig. 7, the apparatus includes: an obtaining module 10, a building module 11, and a quantization module 12.
The obtaining module 10 is configured to obtain the feature vectors of multiple fused images, the feature vectors being used to characterize the characteristic information of the fused images.

The building module 11 is configured to build a quantitative model according to a preset self-organizing map neural network algorithm and the feature vectors.

The quantization module 12 is configured to quantify the effect of a fused image to be tested according to the quantitative model.
The implementation principle and technical effect of the blending image effect quantization apparatus provided by the above embodiment are similar to those of the above method embodiments and are not repeated here.
In one embodiment, Fig. 8 is a schematic structural diagram of a blending image effect quantization apparatus provided by an embodiment. As shown in Fig. 8, the above building module 11 includes: a generation unit 111, an initialization unit 112, an updating unit 113, and a construction unit 114.
The generation unit 111 is configured to generate a training sample set according to the feature vectors; the training sample set is a data set obtained from the feature vectors of the multiple fused images.

The initialization unit 112 is configured to initialize the initial weight matrix of the neural network according to the training sample set.

The updating unit 113 is configured to update the initial weight matrix according to the learning rate of the neural network, the neighborhood radius, and a preset adaptive stopping criterion, to obtain the feature space of the training sample set.

The construction unit 114 is configured to determine the quantitative model according to the feature space.
The implementation principle and technical effect of the blending image effect quantization apparatus provided by the above embodiment are similar to those of the above method embodiments and are not repeated here.
In one embodiment, the above initialization unit 112 is specifically configured to obtain the value range of each feature vector in the training sample set, and to uniformly and randomly distribute values within the value range to each neuron of the competition layer of the neural network, to obtain the initial weight matrix.
In one embodiment, the above construction unit 114 is specifically configured to: calculate, according to the initial weight matrix, the Euclidean distance between the training sample set and each neuron of the neural network; determine the neuron with the smallest Euclidean distance as the winning neuron; determine the winning neighborhood according to the winning neuron and the neighborhood radius; and update the weights of the neurons in the winning neighborhood according to the adaptive stopping criterion, to obtain the feature space of the training sample set.
In one of the embodiments, the adaptive stopping criterion includes: the maximum variation of the cluster-centre weight vectors in the updated weight matrix is less than a preset threshold.
In one embodiment, Fig. 9 is a schematic structural diagram of a blending image effect quantization apparatus provided by an embodiment. As shown in Fig. 9, the above quantization module 12 includes: a computing unit 121 and a determination unit 122.
The computing unit 121 is configured to calculate the Euclidean distance between the sample to be tested and each weight vector of the quantitative model; the sample to be tested is obtained from the feature vector of the fused image to be tested.

The determination unit 122 is configured to determine the smallest Euclidean distance as the first quantized value of the effect of the fused image to be tested.
The implementation principle and technical effect of the blending image effect quantization apparatus provided by the above embodiment are similar to those of the above method embodiments and are not repeated here.
In one embodiment, Fig. 10 is a schematic structural diagram of a blending image effect quantization apparatus provided by an embodiment. As shown in Fig. 10, the apparatus further includes: a computing module 13 and a determining module 14.
The computing module 13 is configured to calculate the confidence of the first quantized value of the sample to be tested relative to the average level of the first quantized values of the training sample set.
The determining module 14 is configured to determine the confidence as the second quantized value of the effect of the blending image to be tested.
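The excerpt does not spell out the confidence formula used by modules 13 and 14. One plausible reading, stated here purely as an assumption, compares the test sample's first quantized value against the distribution of first quantized values obtained by scoring each training sample against the model:

```python
import numpy as np

def second_quantized_value(q_test, q_train):
    """Confidence of the test sample's first quantized value relative to
    the average level of the training set's first quantized values.

    ASSUMPTION: the excerpt does not fix the formula. Here confidence is
    modeled as the fraction of training first quantized values that are
    at least as large as the test value, so a test distance no worse
    than typical training distances yields a confidence near 1.
    """
    q_train = np.asarray(q_train, dtype=float)
    return float(np.mean(q_train >= q_test))
```

Under this reading, the second quantized value lies in [0, 1] and expresses how typical the tested image's distance is compared with the training population.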
The blending image effect quantization device provided by the above embodiment is similar in implementation principle and technical effect to the above method embodiments, and details are not repeated here.
For specific limitations of the blending image effect quantization device, reference may be made to the limitations of the blending image effect quantization method above; details are not repeated here. Each module in the above blending image effect quantization device may be implemented in whole or in part by software, hardware, or a combination thereof. The above modules may be embedded in hardware form in, or be independent of, the processor of the computer device, or may be stored in software form in the memory of the computer device, so that the processor can invoke and execute the operations corresponding to each of the above modules.
In one embodiment, a computer device is provided. The computer device may be a terminal, and its internal structure diagram may be as shown in Fig. 11. The computer device includes a processor, a memory, a network interface, a display screen, and an input device connected through a system bus. The processor of the computer device is configured to provide computing and control capability. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium. The network interface of the computer device is used to communicate with an external terminal through a network connection. When executed by the processor, the computer program implements a blending image effect quantization method. The display screen of the computer device may be a liquid crystal display or an electronic ink display; the input device of the computer device may be a touch layer covering the display screen, a key, trackball, or trackpad provided on the housing of the computer device, or an external keyboard, trackpad, or mouse.
Those skilled in the art will understand that the structure shown in Fig. 11 is only a block diagram of the part of the structure relevant to the solution of the present application, and does not constitute a limitation on the computer device to which the solution of the present application is applied; a specific computer device may include more or fewer components than shown in the figure, combine certain components, or have a different component arrangement.
In one embodiment, a computer device is provided, including a memory and a processor. The memory stores a computer program, and the processor implements the following steps when executing the computer program:
obtaining the feature vectors of multiple blending images, the feature vectors being used to characterize the feature information of the blending images;
constructing a quantization model according to a preset self-organizing map neural network algorithm and the feature vectors; and
quantifying the effect of a blending image to be tested according to the quantization model.
The computer device provided by the above embodiment is similar in implementation principle and technical effect to the above method embodiments, and details are not repeated here.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored. When executed by a processor, the computer program implements the following steps:
obtaining the feature vectors of multiple blending images, the feature vectors being used to characterize the feature information of the blending images;
constructing a quantization model according to a preset self-organizing map neural network algorithm and the feature vectors; and
quantifying the effect of a blending image to be tested according to the quantization model.
The computer-readable storage medium provided by the above embodiment is similar in implementation principle and technical effect to the above method embodiments, and details are not repeated here.
Those of ordinary skill in the art will understand that all or part of the processes in the methods of the above embodiments can be completed by instructing relevant hardware through a computer program. The computer program may be stored in a non-volatile computer-readable storage medium, and when executed, may include the processes of the embodiments of each of the above methods. Any reference to memory, storage, database, or other media used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM), etc.
The technical features of the above embodiments can be combined arbitrarily. For brevity of description, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction in a combination of these technical features, the combination should be considered within the scope of this specification.
The above embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the patent. It should be noted that those of ordinary skill in the art can make various modifications and improvements without departing from the concept of the present application, all of which fall within the protection scope of the present application. Therefore, the protection scope of the present application patent shall be subject to the appended claims.

Claims (10)

1. A blending image effect quantization method, characterized in that the method comprises:
obtaining the feature vectors of multiple blending images, the feature vectors being used to characterize the feature information of the blending images;
constructing a quantization model according to a preset self-organizing map neural network algorithm and the feature vectors; and
quantifying the effect of a blending image to be tested according to the quantization model.
2. The method according to claim 1, characterized in that constructing the quantization model according to the preset self-organizing map neural network algorithm and the feature vectors comprises:
generating a training sample set according to the feature vectors, the training sample set being a data set obtained from the feature vectors of multiple blending images with preferred fusion effects;
initializing the initial weight matrix of the self-organizing map neural network according to the training sample set;
updating the initial weight matrix according to the learning rate and neighborhood radius of the self-organizing map neural network and a preset adaptive stopping criterion, to obtain the feature space of the training sample set; and
determining the quantization model according to the feature space.
3. The method according to claim 2, characterized in that initializing the initial weight matrix of the neural network according to the training sample set comprises:
obtaining the value range of each feature vector in the training sample set; and
distributing values uniformly at random within the value range to each neuron of the competition layer of the neural network, to obtain the initial weight matrix.
4. The method according to claim 2 or 3, characterized in that updating the initial weight matrix according to the learning rate and neighborhood radius of the neural network and the preset adaptive stopping criterion, to obtain the feature space of the training sample set, comprises:
calculating, according to the initial weight matrix, the Euclidean distance between the training sample set and each neuron of the neural network;
determining the neuron with the smallest Euclidean distance as the winning neuron;
determining a winning neighborhood according to the winning neuron and the neighborhood radius; and
updating the weights of the neurons in the winning neighborhood according to the adaptive stopping criterion, to obtain the feature space of the training sample set.
5. The method according to claim 2 or 3, characterized in that the adaptive stopping criterion includes that the maximum change of the class-center weight vectors in the updated weight matrix is less than a preset threshold.
6. The method according to claim 2 or 3, characterized in that quantifying the effect of the blending image to be tested according to the quantization model comprises:
calculating the Euclidean distance between a sample to be tested and each weight vector of the quantization model, the sample to be tested being obtained from the feature vector of the blending image to be tested; and
determining the smallest Euclidean distance as the first quantized value of the effect of the blending image to be tested.
7. The method according to claim 6, characterized in that the method further comprises:
calculating the confidence of the first quantized value of the sample to be tested relative to the average level of the first quantized values of the training sample set; and
determining the confidence as the second quantized value of the effect of the blending image to be tested.
8. A blending image effect quantization device, characterized in that the device comprises:
an obtaining module, configured to obtain the feature vectors of multiple blending images, the feature vectors being used to characterize the feature information of the blending images;
a construction module, configured to construct a quantization model according to a preset self-organizing map neural network algorithm and the feature vectors; and
a quantization module, configured to quantify the effect of a blending image to be tested according to the quantization model.
9. A computer device, comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, on which a computer program is stored, characterized in that the computer program implements the steps of the method of any one of claims 1 to 7 when executed by a processor.
CN201811178810.0A 2018-10-10 2018-10-10 Fusion image effect quantification method and device, computer equipment and storage medium Active CN109493319B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811178810.0A CN109493319B (en) 2018-10-10 2018-10-10 Fusion image effect quantification method and device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811178810.0A CN109493319B (en) 2018-10-10 2018-10-10 Fusion image effect quantification method and device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN109493319A true CN109493319A (en) 2019-03-19
CN109493319B CN109493319B (en) 2021-06-22

Family

ID=65690249

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811178810.0A Active CN109493319B (en) 2018-10-10 2018-10-10 Fusion image effect quantification method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN109493319B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111475532A (en) * 2020-03-05 2020-07-31 拉扎斯网络科技(上海)有限公司 Data processing optimization method and device, storage medium and terminal
CN111798414A (en) * 2020-06-12 2020-10-20 北京阅视智能技术有限责任公司 Method, device and equipment for determining definition of microscopic image and storage medium
CN112449175A (en) * 2019-08-29 2021-03-05 浙江宇视科技有限公司 Image splicing test method, device, equipment and storage medium
CN113674157A (en) * 2021-10-21 2021-11-19 广东唯仁医疗科技有限公司 Fundus image stitching method, computer device and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101334893A (en) * 2008-08-01 2008-12-31 天津大学 Fused image quality integrated evaluating method based on fuzzy neural network
US20170039436A1 (en) * 2015-08-03 2017-02-09 Nokia Technologies Oy Fusion of RGB Images and Lidar Data for Lane Classification
CN106910192A (en) * 2017-03-06 2017-06-30 长沙全度影像科技有限公司 A kind of image syncretizing effect appraisal procedure based on convolutional neural networks

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101334893A (en) * 2008-08-01 2008-12-31 天津大学 Fused image quality integrated evaluating method based on fuzzy neural network
US20170039436A1 (en) * 2015-08-03 2017-02-09 Nokia Technologies Oy Fusion of RGB Images and Lidar Data for Lane Classification
CN106910192A (en) * 2017-03-06 2017-06-30 长沙全度影像科技有限公司 A kind of image syncretizing effect appraisal procedure based on convolutional neural networks

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
WU Hongyan: "Research on Clustering Algorithms Based on Self-Organizing Feature Map Networks", China Master's and Doctoral Dissertations Full-text Database, Information Science and Technology Series *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112449175A (en) * 2019-08-29 2021-03-05 浙江宇视科技有限公司 Image splicing test method, device, equipment and storage medium
CN112449175B (en) * 2019-08-29 2022-05-17 浙江宇视科技有限公司 Image splicing test method, device, equipment and storage medium
CN111475532A (en) * 2020-03-05 2020-07-31 拉扎斯网络科技(上海)有限公司 Data processing optimization method and device, storage medium and terminal
CN111475532B (en) * 2020-03-05 2023-11-03 拉扎斯网络科技(上海)有限公司 Data processing optimization method and device, storage medium and terminal
CN111798414A (en) * 2020-06-12 2020-10-20 北京阅视智能技术有限责任公司 Method, device and equipment for determining definition of microscopic image and storage medium
CN113674157A (en) * 2021-10-21 2021-11-19 广东唯仁医疗科技有限公司 Fundus image stitching method, computer device and storage medium

Also Published As

Publication number Publication date
CN109493319B (en) 2021-06-22

Similar Documents

Publication Publication Date Title
CN109493319A (en) Blending image effect quantization method, device, computer equipment and storage medium
CN107423551B (en) Imaging method and imaging system for performing medical examinations
US10096107B2 (en) Intelligent medical image landmark detection
CN109872333A (en) Medical image dividing method, device, computer equipment and storage medium
CN106454108B (en) Track up method, apparatus and electronic equipment based on artificial intelligence
BR112021015232A2 (en) SHADOW AND CLOUD MASKING FOR REMOTE SENSING IMAGES IN AGRICULTURE APPLICATIONS USING MULTI-LAYER PERCEPTRON
CN109643383A (en) Domain separates neural network
WO2019187372A1 (en) Prediction system, model generation system, method, and program
CN109493417A (en) Three-dimension object method for reconstructing, device, equipment and storage medium
CN110378423A (en) Feature extracting method, device, computer equipment and storage medium
CN114556413A (en) Interactive training of machine learning models for tissue segmentation
CN112215129A (en) Crowd counting method and system based on sequencing loss and double-branch network
CN102216940A (en) Systems and methods for computing and validating a variogram model
JP2019128904A (en) Prediction system, simulation system, method and program
Chen et al. A subpixel mapping algorithm combining pixel-level and subpixel-level spatial dependences with binary integer programming
Montgomery et al. Calibrating ensemble forecasting models with sparse data in the social sciences
CN112801208B (en) Depth measurement learning method and device based on structured agent
CN112101438B (en) Left-right eye classification method, device, server and storage medium
CN106210710A (en) A kind of stereo image vision comfort level evaluation methodology based on multi-scale dictionary
CN110825903A (en) Visual question-answering method for improving Hash fusion mechanism
CN114792349B (en) Remote sensing image conversion map migration method based on semi-supervised generation countermeasure network
CN115953330A (en) Texture optimization method, device, equipment and storage medium for virtual scene image
Riley et al. A study of early stopping, ensembling, and patchworking for cascade correlation neural networks
CN117083632A (en) Method and system for visualizing information on a gigapixel full slice image
CN107832805A (en) It is a kind of that technology of the volumetric position error on the evaluation influence of remote sensing soft nicety of grading is eliminated based on probability positions model

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CP02 Change in the address of a patent holder
CP02 Change in the address of a patent holder

Address after: 430206 Lianying medical headquarters base, No. 99, gaokeyuan Road, Donghu high tech Development Zone, Wuhan, Hubei Province

Patentee after: WUHAN UNITED IMAGING HEALTHCARE Co.,Ltd.

Address before: B1-7, 818 Gaoxin Avenue, Donghu hi tech Development Zone, Hongshan District, Wuhan City, Hubei Province 430206

Patentee before: WUHAN UNITED IMAGING HEALTHCARE Co.,Ltd.