CN110852978A - Outlier removing method used before saliency map fusion - Google Patents


Info

Publication number
CN110852978A
CN110852978A
Authority
CN
China
Prior art keywords
saliency map
samples
sample set
sample
outlier removal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910803396.6A
Other languages
Chinese (zh)
Inventor
梁晔
马楠
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Union University
Original Assignee
Beijing Union University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Union University filed Critical Beijing Union University
Priority to CN201910803396.6A priority Critical patent/CN110852978A/en
Publication of CN110852978A publication Critical patent/CN110852978A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging

Abstract

The invention provides a method and a system for removing outliers before saliency map fusion. The method takes a sample set as input and further comprises the following steps: removing outliers from the input sample set using the RANSAC method; and generating and outputting the processed, optimized sample set. The method removes the outliers from the set before saliency fusion, thereby improving the fusion effect and overcoming the defect that outliers in the sample set strongly distort the final saliency detection result.

Description

Outlier removing method used before saliency map fusion
Technical Field
The invention relates to the field of computer vision and the field of image processing, in particular to a method for removing outliers before saliency map fusion.
Background
Image saliency detection aims at finding the most important part of an image. It is an important preprocessing step for reducing computational complexity in computer vision and has wide application in image compression, target recognition, image segmentation and other fields. At the same time, it remains a challenging problem: existing saliency detection methods each have their own advantages and disadvantages, and even the same method yields very different results on different pictures. It is therefore particularly important to fuse the results of multiple saliency detection methods to obtain a better saliency map. Traditional saliency map fusion methods treat the various saliency maps equally, simply adding or multiplying them and averaging, i.e. setting the weights of all saliency detection methods to the same value. This is unreasonable in practice, because the detection effect of each method differs from picture to picture and even from pixel to pixel, so the weights of the various saliency detection methods should be set differently. Some methods for fusing multiple saliency maps do exist; for example, Mai et al. use Conditional Random Fields (CRF) to fuse multiple saliency maps with good effect, but the recall rate is not satisfactory.
Research [L. Mai, Y. Niu, and F. Liu. Saliency Aggregation: A Data-Driven Approach. IEEE Computer Society, CVPR 2013, pages 1131-1138] shows that different saliency detection methods perform differently, and even the same method performs differently on different images. However, without a reference binary (ground-truth) label it is very difficult to judge the quality of a saliency map, that is, to select from multiple saliency maps the ones with good detection results for fusion, and research on this problem is very rare.
In the absence of a reference binary label, the document [Long M, Liu F. Comparing Salient Object Detection Results without Ground Truth [C]. European Conference on Computer Vision. Springer International Publishing, 2014: 76-91] performed fusion of multiple saliency maps. This work defined six criteria for evaluating a saliency map: the coverage of the salient region, the compactness of the saliency map, the histogram of the saliency map, the color separability of the salient region, the segmentation quality, and the boundary quality of the saliency map. The candidate saliency maps are ranked according to these six criteria, and the fused saliency map is finally obtained. The method is computationally expensive and its processing is complicated.
The invention patent application with application number CN106570851A discloses a saliency map fusion method based on the weighted-distribution D-S evidence theory, solving the problem of effectively fusing saliency maps obtained by multiple saliency detection methods. First, a saliency map is generated by each saliency detection method to be fused. Second, the obtained saliency maps are taken as evidences, and the identification frameworks and mass functions corresponding to the saliency detection methods are defined from them. Then, the similarity coefficients and the similarity matrix between the evidences are calculated, from which the support degree and credibility of each evidence are obtained. Next, the mass function values are weighted and averaged with the credibility as weights, yielding one saliency map. The weighted-average evidence is then synthesized using the D-S synthesis rule to obtain another saliency map. Finally, the two saliency maps are weighted and summed again to obtain the final saliency map. In this method a mass function is used for the weighted average, but in the D-S synthesis rule changes in the degree of conflict between mass functions may affect the synthesis, leaving the final saliency map unclear.
The invention application with application number CN106780422A discloses a saliency map fusion method based on the Choquet integral, solving the problem of effectively fusing saliency maps obtained by multiple saliency detection methods. First, a saliency map is generated by each saliency detection method to be fused. Second, the similarity coefficients and similarity matrix among the saliency maps are calculated, from which the support degree and credibility of each saliency map are obtained. The credibility of each saliency map is then taken as the fuzzy measure in the Choquet integral. At the same time, the saliency maps to be fused are sorted at the pixel level, and the sorted discrete saliency values are taken as the non-negative measurable function in the Choquet integral. Finally, the Choquet integral value is calculated to obtain the final saliency map. This method uses the Choquet integral for saliency map fusion, which entails a large workload, requires much computation and is inconvenient to use.
In the various fusion methods surveyed above, researchers consider how to fuse, but do not study how to remove outliers from the saliency maps before fusion. The invention aims to solve the problem of removing outliers from the saliency maps before saliency map fusion, thereby improving the fusion effect.
Disclosure of Invention
In order to solve the above technical problems, the present invention provides a method and a system for removing outliers before saliency map fusion, which remove the outliers from the set before saliency fusion, so as to improve the fusion effect and overcome the defect that outliers in the sample set strongly distort the final saliency detection result.
The first purpose of the present invention is to provide a method for outlier removal before saliency map fusion, which includes inputting a sample set, and is characterized by further including the following steps:
step 1: removing outliers from the input sample set using the RANSAC method;
step 2: and generating and outputting the processed optimized sample set.
Preferably, the RANSAC method comprises the following steps for each iteration:
step 11: carrying out random sampling;
step 12: and (5) carrying out consistency analysis.
In any of the above schemes, preferably, step 11 comprises randomly extracting from the sample set Ω = {i_1, i_2, ..., i_M} a subset of samples (j_1, j_2, ..., j_n, ..., j_N), j_n ∈ Ω, 1 ≤ n ≤ N, where Ω denotes the sample space, M the total number of samples, j_n the extracted samples, n the sample index (1 ≤ n ≤ N), and N the number of extracted samples.
In any of the above schemes, preferably, the extracted samples are fitted to a model Θ_P(j_1, j_2, ..., j_N).
In any of the above schemes, preferably, step 12 includes determining whether each sample point in the sample set Ω = {i_1, i_2, ..., i_M} satisfies the model Θ_P(j_1, j_2, ..., j_N).
In any of the above schemes, preferably, step 12 further comprises counting the number of samples N(Θ_P) satisfying the model.
In any of the above schemes, preferably, step 12 further includes iterating the random sampling and consistency analysis steps in turn and finding the optimal model Θ*.
In any of the above schemes, preferably, the optimal model Θ* satisfies:

Θ* = argmax{N(Θ_1), N(Θ_2), ..., N(Θ_P)}.
In any of the above schemes, preferably, the RANSAC method requires the number of iterations n to satisfy:

n ≥ log(1 − p) / log(1 − (1 − ε)^m)

wherein ε denotes the proportion of outliers in the sample set, m the number of samples required by the sampling step, and p the probability of finding the optimal model.
It is a second object of the present invention to provide a outlier removal system before saliency map fusion, comprising a sample acquisition module for inputting a sample set, comprising the following modules:
an outlier removal module: for removing outliers from the input sample set using the RANSAC method;
a generation output module: for generating and outputting the processed optimized sample set.
Preferably, the RANSAC method comprises the following steps for each iteration:
step 11: carrying out random sampling;
step 12: and (5) carrying out consistency analysis.
In any of the above schemes, preferably, step 11 comprises randomly extracting from the sample set Ω = {i_1, i_2, ..., i_M} a subset of samples (j_1, j_2, ..., j_n, ..., j_N), j_n ∈ Ω, 1 ≤ n ≤ N, where Ω denotes the sample space, M the total number of samples, j_n the extracted samples, n the sample index (1 ≤ n ≤ N), and N the number of extracted samples.
In any of the above schemes, preferably, the extracted samples are fitted to a model Θ_P(j_1, j_2, ..., j_N).
In any of the above schemes, preferably, step 12 includes determining whether each sample point in the sample set Ω = {i_1, i_2, ..., i_M} satisfies the model Θ_P(j_1, j_2, ..., j_N).
In any of the above schemes, preferably, step 12 further comprises counting the number of samples N(Θ_P) satisfying the model.
In any of the above schemes, preferably, step 12 further includes iterating the random sampling and consistency analysis steps in turn and finding the optimal model Θ*.
In any of the above schemes, preferably, the optimal model Θ* satisfies:

Θ* = argmax{N(Θ_1), N(Θ_2), ..., N(Θ_P)}.
In any of the above schemes, preferably, the RANSAC method requires the number of iterations n to satisfy:

n ≥ log(1 − p) / log(1 − (1 − ε)^m)

wherein ε denotes the proportion of outliers in the sample set, m the number of samples required by the sampling step, and p the probability of finding the optimal model.
The invention provides a method for removing outliers before fusion of a saliency map, which can solve the problem of how to remove the outliers in the saliency map before the fusion of the saliency map and improve the fusion effect.
The RANSAC (random sample consensus) method is a non-deterministic model estimation method that can remove abnormal samples from a sample set.
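As an illustration, the two-step loop described above (random sampling, consistency analysis, then keeping the model with the largest consensus set) can be sketched for scalar saliency samples. This is a minimal sketch under our own assumptions: the function name, the threshold parameter `delta`, and the use of a single sampled value as the fitted model are illustrative, not part of the patent.

```python
import random

def ransac_filter(samples, delta, n_iters=100, seed=0):
    """Hedged sketch of RANSAC outlier removal over scalar samples.

    Each iteration fits the simplest possible model (a single randomly
    sampled value, an illustrative assumption) and counts the samples
    within delta of it; the consensus set of the best model is returned
    as the optimized sample set."""
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(n_iters):
        theta = rng.choice(samples)  # random sampling step
        # consistency analysis: which samples satisfy the model?
        inliers = [s for s in samples if abs(s - theta) <= delta]
        if len(inliers) > len(best_inliers):  # keep model with max N(Theta)
            best_inliers = inliers
    return best_inliers

# The outlier 0.23 is removed from the saliency values of Example 5.
print(sorted(ransac_filter([0.94, 0.85, 0.91, 0.93, 0.23, 0.88], delta=0.1)))
# -> [0.85, 0.88, 0.91, 0.93, 0.94]
```

Any non-outlier value drawn as the model collects the same five inliers here, so the result does not depend on which of them the sampler happens to pick.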
Drawings
FIG. 1 is a flow chart of a preferred embodiment of a method for outlier removal prior to saliency map fusion in accordance with the present invention.
FIG. 2 is a block diagram of a preferred embodiment of an outlier removal system prior to saliency map fusion in accordance with the present invention.
FIG. 3 is a flow chart of a removal process for another preferred embodiment of the outlier removal method before saliency map fusion in accordance with the present invention.
Fig. 4 is a process diagram of saliency detection sample set screening for an embodiment of the RANSAC-based method for outlier removal before saliency map fusion in accordance with the present invention.
Detailed Description
The invention is further illustrated with reference to the figures and the specific examples.
Example one
As shown in fig. 1, step 100 is performed to input a saliency map sample set.
Step 110 is performed to remove outliers from the input saliency map set using the RANSAC method. Each iteration of the RANSAC method comprises the following steps. Step 111 is executed: randomly extract from the sample set Ω = {i_1, i_2, ..., i_M} a subset of samples (j_1, j_2, ..., j_n, ..., j_N), j_n ∈ Ω, 1 ≤ n ≤ N, where Ω denotes the sample space, M the total number of samples, j_n the extracted samples, n the sample index, and N the number of extracted samples; fit the extracted samples to a model Θ_P(j_1, j_2, ..., j_N). Step 112 is executed to perform consistency analysis: determine whether each sample point in the sample set Ω = {i_1, i_2, ..., i_M} satisfies the model Θ_P(j_1, j_2, ..., j_N), and count the number of samples N(Θ_P) satisfying the model. The random sampling and consistency analysis steps are iterated in turn to find the optimal model Θ*, which satisfies: Θ* = argmax{N(Θ_1), N(Θ_2), ..., N(Θ_P)}.
The RANSAC method requires the number of iterations n to satisfy:

n ≥ log(1 − p) / log(1 − (1 − ε)^m)

wherein ε denotes the proportion of outliers in the sample set, m the number of samples required by the sampling step, and p the probability of finding the optimal model; p is usually set to 0.8 and is selected from the range [0, 1].
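The iteration bound above can be evaluated directly. The sketch below assumes the standard RANSAC iteration formula n ≥ log(1 − p) / log(1 − (1 − ε)^m) reconstructed from the variable definitions in the text, with p = 0.8 as the default stated above; the function name is our own.

```python
import math

def ransac_iterations(eps, m, p=0.8):
    """Minimum number of RANSAC iterations n such that, with probability
    at least p, at least one iteration draws m samples that are all inliers.

    eps: proportion of outliers in the sample set
    m:   number of samples drawn per iteration
    p:   desired probability of finding the optimal model"""
    # Probability that a single draw of m samples contains only inliers.
    all_inliers = (1.0 - eps) ** m
    # Smallest integer n with (1 - all_inliers)^n <= 1 - p.
    return math.ceil(math.log(1.0 - p) / math.log(1.0 - all_inliers))

# With 20% outliers, 3 samples per draw and p = 0.8, 3 iterations suffice.
print(ransac_iterations(0.2, 3))  # -> 3
```

More outliers or a stricter p drive the count up quickly, e.g. ransac_iterations(0.5, 2, 0.99) gives 17.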
Step 120 is executed to generate and output the processed optimized sample set.
Example two
As shown in fig. 2, an outlier removal system for use before saliency map fusion includes a sample acquisition module 100, an outlier removal module 200, and a generation output module 300.
The sample acquisition module 100: for inputting a sample set.
Outlier removal module 200: for removing outliers from the input sample set using the RANSAC method. Each iteration of the RANSAC method comprises the following steps. Step 11: perform random sampling, i.e. randomly extract from the sample set Ω = {i_1, i_2, ..., i_M} a subset of samples (j_1, j_2, ..., j_n, ..., j_N), j_n ∈ Ω, 1 ≤ n ≤ N, where Ω denotes the sample space, M the total number of samples, j_n the extracted samples, n the sample index, and N the number of extracted samples; fit the extracted samples to a model Θ_P(j_1, j_2, ..., j_N). Step 12: perform consistency analysis, i.e. determine whether each sample point in the sample set Ω = {i_1, i_2, ..., i_M} satisfies the model Θ_P(j_1, j_2, ..., j_N), and count the number of samples N(Θ_P) satisfying the model. The random sampling and consistency analysis steps are iterated in turn to find the optimal model Θ*, which satisfies:

Θ* = argmax{N(Θ_1), N(Θ_2), ..., N(Θ_P)}.
The RANSAC method requires the number of iterations n to satisfy:

n ≥ log(1 − p) / log(1 − (1 − ε)^m)

wherein ε denotes the proportion of outliers in the sample set, m the number of samples required by the sampling step, and p the probability of finding the optimal model; p is usually set to 0.8 and is selected from the range [0, 1].
The generation output module 300: for generating and outputting the processed optimized sample set.
EXAMPLE III
In various fusion methods in the prior art, researchers consider how to fuse, but do not study how to remove outliers in a saliency map before fusion, and the invention aims to solve the problem of how to remove outliers in the saliency map before fusion of the saliency map and improve the fusion effect.
In the saliency maps obtained by the different saliency detection methods, there are both correct saliency detection results (inliers) and false saliency detection results (outliers). In the process of fusing multiple saliency maps, outliers present in the sample set strongly distort the final saliency detection result. For this reason, the outliers in the set need to be removed before saliency fusion. The outlier removal method before saliency map fusion provided by the invention achieves the purpose of improving the fusion effect.
The random sample consensus method (RANSAC method) is a non-deterministic model estimation method that can remove abnormal samples from a sample set. Its basic assumption is that the sample set contains both correct data (samples conforming to the model: inliers) and abnormal data (samples not conforming to the model: outliers). These outliers are typically caused by overfitting of the estimated model parameters, salt-and-pepper noise in the image, and the like. Because it effectively removes abnormal data, the RANSAC method is well suited to the outlier removal step before saliency fusion. The invention adopts the random sample consensus method to remove outliers.
The RANSAC method comprises the following steps for each iteration:
the first step is as follows: random sampling
Randomly extract from the sample set Ω = {i_1, i_2, ..., i_M} a subset of samples (j_1, j_2, ..., j_n, ..., j_N), j_n ∈ Ω, 1 ≤ n ≤ N, where Ω denotes the sample space, M the total number of samples, j_n the extracted samples, n the sample index (1 ≤ n ≤ N), and N the number of extracted samples, and fit from them the model Θ_P(j_1, j_2, ..., j_N).
The second step is that: consistency analysis
Determine whether each sample point in the sample set Ω = {i_1, i_2, ..., i_M} satisfies the model Θ_P(j_1, j_2, ..., j_N), and count the number of samples N(Θ_P) satisfying the model.
The random sampling and consistency analysis steps are iterated in turn to find the optimal model Θ*, which satisfies

Θ* = argmax{N(Θ_1), N(Θ_2), ..., N(Θ_P)}
The number of iterations n required by the RANSAC method satisfies

n ≥ log(1 − p) / log(1 − (1 − ε)^m)

wherein ε denotes the proportion of outliers in the sample set, m the number of samples required by the sampling step, and p the probability of finding the optimal model (typically set to 0.8).
Example four
This embodiment presents a RANSAC-based saliency selection example; the flow chart of the method is shown in fig. 3. According to this method, the inlier set and the outlier set are determined, and the final saliency is then estimated using the selected inlier set.
Step 300 is performed to input a sample set.
Step 310 is executed to set a variable i e [1, 6], and set the initial value of i to 1.
Step 320 is performed: randomly sample, select sample i, and model the saliency φ_i.
Step 330 is performed to obtain a fitted model from the samples.
Step 340 is executed to count the number of samples N_i fitting the model φ_i, and store it.
Step 350 is executed to set i to i + 1.
Step 360 is executed to determine whether i is greater than 6. If i ≤ 6, return to step 320: randomly sample, select sample i, and model the saliency φ_i. If i > 6, step 370 is executed to find the maximum number of model-satisfying samples among the fitted models: N* = max(N_1, N_2, N_3, N_4, N_5, N_6).

Step 380 is then executed: the model corresponding to the maximum value N* is the optimal saliency model φ*.
The procedure of the above method is as follows:

Algorithm 1. An example of RANSAC-based saliency selection
1: for each i ∈ [1, 6] do
2:   Random sample: select sample i to model the saliency φ_i
3:   Consensus: collect the samples Ω_i that satisfy the saliency model φ_i
4:   Calculate the number of elements in this sample group Ω_i: N_i
5: end for
6: Find the largest number: N* = max(N_1, N_2, N_3, N_4, N_5, N_6)
7: Find the optimal saliency model: φ* = φ_k, where k = argmax_i N_i
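Algorithm 1 can be rendered directly in code. The sketch below is our own illustration: it assumes scalar saliency values, a fixed consensus threshold delta (the patent leaves the consensus criterion as a preset), and hypothetical function names; it returns the 1-based index of the optimal model φ* together with N*.

```python
def select_saliency(samples, delta):
    """Deterministic rendering of Algorithm 1: fit a model from each
    sample in turn, collect its consensus set, and return (k_star, n_star),
    the 1-based index of the optimal saliency model and the size of its
    consensus set."""
    best_k, best_n = 0, -1
    for k, phi in enumerate(samples, start=1):        # lines 1-2: model phi_i
        omega = [s for s in samples if abs(s - phi) <= delta]  # line 3: consensus
        if len(omega) > best_n:                       # lines 6-7: keep argmax
            best_k, best_n = k, len(omega)
    return best_k, best_n

# Using the six saliency values of Example 5 with an assumed delta = 0.1:
print(select_saliency([0.94, 0.85, 0.91, 0.93, 0.23, 0.88], delta=0.1))
# -> (1, 5): the model fitted from S1 is optimal, with 5 samples in consensus
```

Ties are broken in favor of the earliest sample; any other deterministic tie-breaking rule would serve equally well here.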
According to this method, the inlier set and the outlier set are determined. The final saliency is then estimated using the filtered inlier set.
EXAMPLE five
This embodiment illustrates the process of removing saliency outliers. As shown in fig. 4, six saliency detection methods, namely GBVS, FT, CA, GC, HS and GBMR, are used to detect the saliency probability of the pixel point (i, j) in the duck-beak region of the image. The saliency results obtained by the six methods are {0.94, 0.85, 0.91, 0.93, 0.23, 0.88}, respectively. A sample point S_k (1 ≤ k ≤ 6) is randomly selected and its saliency result P_k is taken as the model; for the saliency value S_m of every other method (1 ≤ m ≤ 6, m ≠ k) it is judged whether |P_m − P_k| ≤ δ, where δ is a preset RANSAC threshold. If the constraint is satisfied, sample m satisfies this saliency probability model; otherwise it does not. For example, if the sample {S_5} is selected to fit the model, the other five samples {S_1, S_2, S_3, S_4, S_6} obviously do not satisfy the probability model. If the sample {S_1} is selected to fit the saliency model, the samples meeting the constraint are {S_2, S_3, S_4, S_6}. The method randomly selects the minimum number of samples to fit the saliency probability model and finds the optimal saliency model, i.e., the one for which the number of sample points satisfying the model is maximal.
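The consensus counts in this example can be checked numerically. The sketch below assumes a concrete threshold δ = 0.1 (the patent leaves δ as a preset parameter) and a hypothetical function name.

```python
def consensus_set(samples, k, delta):
    """1-based indices of the samples that satisfy the model fitted from
    sample k, i.e. |P_m - P_k| <= delta for all m != k."""
    p_k = samples[k - 1]
    return [m for m, s in enumerate(samples, start=1)
            if m != k and abs(s - p_k) <= delta]

# Saliency values of pixel (i, j) from GBVS, FT, CA, GC, HS, GBMR.
S = [0.94, 0.85, 0.91, 0.93, 0.23, 0.88]

print(consensus_set(S, 1, delta=0.1))  # S1 = 0.94 -> [2, 3, 4, 6]
print(consensus_set(S, 5, delta=0.1))  # S5 = 0.23 -> []
```

This reproduces the text: the model fitted from S_1 is supported by {S_2, S_3, S_4, S_6}, while the model fitted from the outlier S_5 is supported by no other sample.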
For a better understanding of the present invention, the foregoing detailed description has been given in conjunction with specific embodiments thereof, but not with the intention of limiting the invention thereto. Any simple modifications of the above embodiments according to the technical essence of the present invention still fall within the scope of the technical solution of the present invention. In the present specification, each embodiment is described with emphasis on differences from other embodiments, and the same or similar parts between the respective embodiments may be referred to each other. For the system embodiment, since it basically corresponds to the method embodiment, the description is relatively simple, and for the relevant points, reference may be made to the partial description of the method embodiment.

Claims (10)

1. An outlier removal method used before saliency map fusion, comprising inputting a sample set, characterized by further comprising the following steps:
step 1: removing outliers from the input sample set using the RANSAC method;
step 2: and generating and outputting the processed optimized sample set.
2. The method for outlier removal before saliency map fusion as claimed in claim 1, wherein said RANSAC method comprises the following steps for each iteration:
step 11: carrying out random sampling;
step 12: and (5) carrying out consistency analysis.
3. The method for outlier removal before saliency map fusion as claimed in claim 2, wherein said step 11 comprises randomly extracting from the sample set Ω = {i_1, i_2, ..., i_M} a subset of samples (j_1, j_2, ..., j_n, ..., j_N), j_n ∈ Ω, 1 ≤ n ≤ N, where Ω denotes the sample space, M the total number of samples, j_n the extracted samples, n the sample index (1 ≤ n ≤ N), and N the number of extracted samples.
4. The method for outlier removal before saliency map fusion as claimed in claim 3, wherein the extracted samples are fitted to a model Θ_P(j_1, j_2, ..., j_N).
5. The method for outlier removal before saliency map fusion as claimed in claim 4, wherein said step 12 comprises determining whether each sample point in the sample set Ω = {i_1, i_2, ..., i_M} satisfies the model Θ_P(j_1, j_2, ..., j_N).
6. The method for outlier removal before saliency map fusion as claimed in claim 5, wherein said step 12 further comprises counting the number of samples N(Θ_P) satisfying the model.
7. The method for outlier removal before saliency map fusion as claimed in claim 6, wherein said step 12 further comprises iterating the random sampling and consistency analysis steps in turn and finding the optimal model Θ*.
8. The method for outlier removal before saliency map fusion as claimed in claim 7, wherein said optimal model Θ* satisfies:

Θ* = argmax{N(Θ_1), N(Θ_2), ..., N(Θ_P)}.
9. The method for outlier removal before saliency map fusion as claimed in claim 2, wherein said RANSAC method requires the number of iterations n to satisfy:

n ≥ log(1 − p) / log(1 − (1 − ε)^m)

wherein ε denotes the proportion of outliers in the sample set, m the number of samples required by the sampling step, and p the probability of finding the optimal model.
10. An outlier removal system before saliency map fusion, comprising a sample acquisition module for inputting a sample set, characterized by further comprising the following modules:
an outlier removal module: for removing outliers from the input sample set using the RANSAC method;
a generation output module: for generating and outputting the processed optimized sample set.
CN201910803396.6A 2019-08-28 2019-08-28 Outlier removing method used before saliency map fusion Pending CN110852978A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910803396.6A CN110852978A (en) 2019-08-28 2019-08-28 Outlier removing method used before saliency map fusion


Publications (1)

Publication Number Publication Date
CN110852978A true CN110852978A (en) 2020-02-28

Family

ID=69595483

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910803396.6A Pending CN110852978A (en) 2019-08-28 2019-08-28 Outlier removing method used before saliency map fusion

Country Status (1)

Country Link
CN (1) CN110852978A (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008075061A2 (en) * 2006-12-20 2008-06-26 Mitsubishi Electric Information Technology Centre Europe B.V. Multiple image registration apparatus and method
CN106447704A (en) * 2016-10-13 2017-02-22 西北工业大学 A visible light-infrared image registration method based on salient region features and edge degree
CN107103579A (en) * 2017-04-12 2017-08-29 湖南源信光电科技股份有限公司 A kind of RANSAC improved methods towards image mosaic


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Liu Chao, "Image matching based on saliency detection", Journal of Beijing Information Science & Technology University (Natural Science Edition) *
Zhao Xiangyang et al., "A fully automatic and robust image stitching and fusion algorithm", Journal of Image and Graphics *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination