CN109035196B - Saliency-based image local blur detection method - Google Patents


Info

Publication number
CN109035196B
CN109035196B
Authority
CN
China
Prior art keywords
image
detection
result
saliency
significance
Prior art date
Legal status
Active
Application number
CN201810498275.0A
Other languages
Chinese (zh)
Other versions
CN109035196A (en)
Inventor
方贤勇
丁成
汪粼波
王华彬
周健
李薛剑
Current Assignee
Anhui University
Original Assignee
Anhui University
Priority date
Filing date
Publication date
Application filed by Anhui University filed Critical Anhui University
Priority to CN201810498275.0A priority Critical patent/CN109035196B/en
Publication of CN109035196A publication Critical patent/CN109035196A/en
Application granted granted Critical
Publication of CN109035196B publication Critical patent/CN109035196B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20024Filtering details
    • G06T2207/20028Bilateral filtering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

Aiming at the problem that flat regions of sharp texture are easily misdetected as blurred regions for lack of high-frequency information, the invention provides a saliency-based image local blur detection method comprising the following steps: singular value vectors characterizing the image transform domain, local extreme points reflecting the image's high-frequency information, and entropy-weighted, pooled DCT high-frequency coefficients (HiFST coefficients) are combined, the two types of feature value complementing each other to give a better feature vector; the mixed feature vectors are input into a BP neural network for training to obtain a model; a preliminary result is obtained by prediction and combined with image saliency detection, a further detection result being obtained through the saliency constraint of the image; and the final detection result is obtained by optimizing edge information with bilateral filtering. Qualitative and quantitative experiments on a public large-scale dataset show that the method achieves a good blur detection effect.

Description

Saliency-based image local blur detection method
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a saliency-based image local blur detection method.
Background
Image blur detection is a branch of the image processing field; more and more researchers are paying close attention to it, and local blur detection technology has made great breakthroughs. Before single-image blur detection methods appeared, most detection methods were based on multiple consecutive blurred images, obtaining several images together with camera parameters for blur detection; such methods are very limited and impose many requirements, e.g. they need substantial prior knowledge such as camera parameters and require the background of the captured pictures to be static. In recent years many methods based on a single blurred image, without any prior information, have been proposed. Current methods that detect blur from a single image fall mainly into two categories: blur detection based on a single feature value distinguishing blurred and sharp regions, and blur detection based on a mixed measure of several such feature values.
(1) Blur detection based on a single feature value distinguishing blurred and sharp regions: blurred and sharp regions of an image differ substantially, and the difference shows up in the gradient, frequency and transform domains. Many methods therefore detect a single feature value that separates sharp regions from blurred ones. Narvekar et al. [1] detect blurred regions by analyzing the edge information of the image. Tang et al. [2] obtain a preliminary detection result by studying how the image spectral residual differs between sharp and blurred regions, then iteratively optimize it using the color and gradient information of adjacent regions to obtain a further blur detection result. Su et al. [3] observe that the first few singular values account for a higher proportion of the total in blurred regions than in sharp regions, and use this ratio of the leading singular values to detect blurred regions. Huang Qin-chun et al. [4], building on the work of Su et al., use singular value vectors as the feature distinguishing sharp from blurred regions, combine them with DCT coefficients, and train and predict with a BP network to obtain the final detection result. Javaran et al. [5] analyze how DCT coefficients change before and after blurring: they blur the image again with Gaussian filtering, compare the DCT coefficient ratios of the two images at scales 9×9, 19×19 and 45×45, and take the mean of the ratios at the three scales as the feature distinguishing blurred from sharp regions. Yi and Eramian [6] propose a robust segmentation method for focused and defocused pictures based on a sharpness metric in the local binary pattern domain, exploiting multi-scale image information. Alireza et al. [7] propose a blur detection method based on combining and sorting DCT high-frequency coefficients: the image is converted to a gradient image, multi-scale DCT high-frequency coefficients are sorted according to entropy, and max pooling and Gaussian filtering yield the final detection result; however, because detection takes place purely in the frequency domain, sharp but flat texture regions carrying no high-frequency information are falsely detected. Shi et al. [8] propose image-block blur detection based on sparse representation, learning a dictionary to detect blur discernible by the human eye. Methods based on a single feature value can detect blur only from the frequency domain or the spatial domain, and easily misdetect sharp regions whose frequency- or spatial-domain behavior resembles that of blurred regions, such as flat texture regions that contain no high-frequency information.
(2) Blur detection based on a mixed measure of several feature values: Chakrabarti et al. [9] combine local frequency-domain components, local gradient transforms and color information within a Markov random field segmentation framework; motion-blurred regions can be detected given knowledge of the blur kernel, but detection is poor for defocused images or images with insignificant spatial variation. Liu et al. [10] use a Bayes classifier to combine the local power spectrum slope, gradient distribution histogram, maximum color saturation, and the autocorrelation of local color, gradient and spectral information to detect blurred and sharp regions. Shi et al. [11] propose some new feature values for blur measurement: the peaked, heavy-tailed distribution of the image gradient, spectral frequency-domain cues, and local filter responses learned from ground-truth images; these are combined with a Bayes classifier and the result is refined by multi-scale analysis to obtain the final result.
Using several feature values to distinguish blurred from sharp regions, such methods can partition the image from the frequency domain as well as from the spatial domain, and blur detection under multiple constraints gives more accurate results. However, the selection and fusion of the multiple features becomes the key factor influencing the detection result: problems arise when the selected feature values are too uniform in type or too diverse. For example, with overly diverse features detection becomes very noisy and many obviously blurred regions are misdetected as sharp, while overly uniform features again lead to misdetection of some regions. Multi-feature fusion itself is nonetheless very effective, so the chosen fusion scheme comes down to feature selection: the invention combines a singular value vector representing transform-domain information of the image, local extreme points representing its frequency-domain (high-frequency) information, and the HiFST coefficient, so that the selected feature values, covering both the transform domain and the frequency domain, are more complementary and effective.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a saliency-based image local blur detection method, which proceeds according to the following steps:
Step 1: input a color image;
Step 2: compute the feature vector of the result of step 1 to obtain the feature vector;
Step 3: train a BP neural network on the result of step 2 to obtain a trained BP neural network;
Step 4: run saliency detection on the result of step 1 to obtain a saliency detection map of the image;
Step 5: obtain a BP prediction result map from the result of step 3;
Step 6: segment the result of step 1 with a superpixel segmentation method to obtain a superpixel segmentation map of the image;
Step 7: combine the saliency detection map obtained in step 4, the BP prediction result map obtained in step 5 and the superpixel segmentation map obtained in step 6 to obtain a detection result;
Step 8: optimize the detection result obtained in step 7 with bilateral filtering, obtain the final detection result and output it.
Further, the feature vector in step 2 comprises a singular value vector, local extreme points and a HiFST coefficient;
the singular value vector is the vector of diagonal entries of the diagonal matrix obtained by singular value decomposition of a matrix;
the local extreme points are features characterizing the high-frequency information of the image;
the HiFST coefficient is a coefficient obtained by entropy-weighting and pooling discrete cosine transform coefficients.
Further, the BP neural network in step 3 is a traditional three-layer BP network, which can be divided into an input layer, a hidden layer and an output layer, with the following settings:
3.1 the number of input nodes is set to 10, corresponding to the number of "米"-type ("meter"-type) local extreme points in the 8×8 pixel block centered on pixel (i, j), the HiFST coefficient and the singular value vector, together one 1×10-dimensional vector;
3.2 the number of output nodes is set to 1; the output is the blur detection value produced by the BP neural network;
3.3 20 hidden-layer neurons are used; in the experiments the number of iterations is set to 2000, the error threshold to 0.0006 and the learning rate to 0.04.
Further, the saliency detection map in step 4 is the detection result map obtained by running saliency detection on the image, yielding the most salient regions in the image.
Further, the BP prediction result map in step 5 is the result map obtained by feeding the image into the trained BP neural network.
Further, the superpixel segmentation map in step 6 is the detection result map obtained by superpixel-segmenting the image, dividing the image into a number of superpixel blocks.
Further, the saliency-constrained detection result in step 7 is implemented as follows:
7.1 divide the image into N superpixels with the superpixel segmentation method;
7.2 take the BP prediction result map as the initial value;
7.3 apply the saliency constraint: let B_p and S_p be the blur detection result and the saliency detection result of superpixel p, and let B_i and S_i (1 ≤ i ≤ N) be the blur detection result and the saliency detection result of its neighborhood superpixel i. B_p has no direct relation to S_p: a region with large S_p does not necessarily have large B_p, and vice versa; likewise B_i has no direct relation to S_i. As the foregoing analysis shows, however, the relation between S_p and S_i affects the relation between B_p and B_i: the smaller the difference between S_p and S_i, the smaller the difference between B_p and B_i should be. The blur detection value of a superpixel is therefore determined by the blur values and saliency values of its neighboring superpixels; the closer the saliency of two superpixels, the closer their degree of blur, so the blur value and the saliency value are modeled with a Gaussian distribution, giving the neighbor weights
β = exp(-(||S_p|| - ||S_i||) * 10)
The optimal B_p is obtained by maximizing the resulting Gaussian-weighted objective, in which B_f is the detection result of superpixel p obtained by the BP network and the B_i are the neighborhood results. The parameter α is obtained from the saliency difference between superpixel p and superpixel i: the larger the saliency difference, the larger α and the more the result depends on the preliminary detection; the smaller α, the more the result is influenced by the surrounding pixel blocks. α is:
α = max(β) = max(exp(-(||S_p|| - ||S_i||) * 10))
further, the bilateral filtering optimization is performed on the detection result in the step 8 to obtain a final result, and the bilateral filtering is used for optimizing the result obtained in the step 7.
Further, in step 2 the singular value vector is obtained by singular value decomposition:

I = U S V^T = Σ_{i=1..r} δ_i u_i v_i^T

where U and V are two orthogonal matrices, u_i and v_i are the column vectors of U and V respectively, S is a diagonal matrix, and the diagonal entries δ_i of S form the singular value vector.
Further, in step 2 the local extreme points are determined as follows: with S_{i,j} denoting the intensity of pixel (i, j), the pixel is a "米"-type local extreme point when its intensity is a strict extremum with respect to all eight neighbors of its 3×3 neighborhood, i.e.

S_{i,j} > S_{x,y} for all eight neighbors (x, y), or S_{i,j} < S_{x,y} for all eight neighbors (x, y).
Furthermore, in step 2 the HiFST coefficient is computed as follows:
2.1 convert the color image I into a gradient image K;
2.2 compute all DCT high-frequency coefficients within the 3×3, 7×7, 15×15 and 31×31 windows centered on pixel (i, j), and sort these values by magnitude to obtain a vector;
2.3 max-pool this vector to obtain the HiFST coefficient.
Advantageous technical effects
The invention provides a new solution to the problem that flat regions of sharp texture are easily misdetected as blurred regions for lack of high-frequency information: singular value vectors representing transform-domain information, local extreme points representing the image's high-frequency information and HiFST coefficients are combined, and prediction and analysis with a BP neural network yield a comparatively accurate detection result. The relationship between image saliency and sharpness is then exploited to impose a global saliency constraint, giving a more accurate detection result, and finally bilateral filtering is used to optimize the experimental result. Both qualitative and quantitative experiments show that the method obtains good detection results on flat regions of sharp texture, that the two different types of selected feature values complement each other, and that the method is highly robust.
Drawings
Fig. 1 shows a partially blurred original image and a detection result.
Fig. 2 compares the numbers of local extreme points of each type for the 4 image blocks in fig. 1.
Fig. 3 is a locally blurred image with 4 marked blocks of size 8×8.
Fig. 4 compares the singular values of the 4 image blocks in fig. 3 (all singular values are magnified 100 times).
Fig. 5 is a saliency detection result and a blur detection result for a locally blurred image.
Fig. 6 is a saliency difference and a sharpness difference for a locally blurred image.
Fig. 7 is a flow chart of blur detection.
Fig. 8 compares the results of the method of the present invention with the blurred-image processing methods mentioned in the background art.
Detailed Description
The structural features of the present invention will now be described in detail with reference to the accompanying drawings.
Referring to fig. 7, the saliency-based image local blur detection method proceeds as follows:
Step 1: input a color image;
Step 2: compute the feature vector of the result of step 1 to obtain the feature vector;
Step 3: train a BP neural network on the result of step 2 to obtain a trained BP neural network;
Step 4: run saliency detection on the result of step 1 to obtain a saliency detection map of the image;
Step 5: obtain a BP prediction result map from the result of step 3;
Step 6: segment the result of step 1 with a superpixel segmentation method to obtain a superpixel segmentation map of the image;
Step 7: combine the saliency detection map obtained in step 4, the BP prediction result map obtained in step 5 and the superpixel segmentation map obtained in step 6 to obtain a detection result;
Step 8: optimize the detection result obtained in step 7 with bilateral filtering, obtain the final detection result and output it.
Further, the feature vector in step 2 comprises a singular value vector, local extreme points and a HiFST coefficient;
the singular value vector is the vector of diagonal entries of the diagonal matrix obtained by singular value decomposition of a matrix;
the local extreme points are features characterizing the high-frequency information of the image;
the HiFST coefficient is a coefficient obtained by entropy-weighting and pooling discrete cosine transform coefficients.
Further, the BP neural network in step 3 is a traditional three-layer BP network, which can be divided into an input layer, a hidden layer and an output layer, with the following settings:
3.1 the number of input nodes is set to 10, corresponding to the number of "米"-type local extreme points in the 8×8 pixel block centered on pixel (i, j), the HiFST coefficient and the singular value vector, together one 1×10-dimensional vector;
3.2 the number of output nodes is set to 1; the output is the blur detection value produced by the BP neural network;
3.3 20 hidden-layer neurons are used; in the experiments the number of iterations is set to 2000, the error threshold to 0.0006 and the learning rate to 0.04.
Further, the saliency detection map in step 4 is the detection result map obtained by running saliency detection on the image, yielding the most salient regions in the image.
Further, the BP prediction result map in step 5 is the result map obtained by feeding the image into the trained BP neural network.
Further, the superpixel segmentation map in step 6 is the detection result map obtained by superpixel-segmenting the image, dividing the image into a number of superpixel blocks.
Further, the saliency-constrained detection result in step 7 is implemented as follows:
7.1 divide the image into N superpixels with the superpixel segmentation method;
7.2 take the BP prediction result map as the initial value;
7.3 apply the saliency constraint: let B_p and S_p be the blur detection result and the saliency detection result of superpixel p, and let B_i and S_i (1 ≤ i ≤ N) be the blur detection result and the saliency detection result of its neighborhood superpixel i. B_p has no direct relation to S_p: a region with large S_p does not necessarily have large B_p, and vice versa; likewise B_i has no direct relation to S_i. As the foregoing analysis shows, however, the relation between S_p and S_i affects the relation between B_p and B_i: the smaller the difference between S_p and S_i, the smaller the difference between B_p and B_i should be. The blur detection value of a superpixel is therefore determined by the blur values and saliency values of its neighboring superpixels; the closer the saliency of two superpixels, the closer their degree of blur, so the blur value and the saliency value are modeled with a Gaussian distribution, giving the neighbor weights
β = exp(-(||S_p|| - ||S_i||) * 10)
The optimal B_p is obtained by maximizing the resulting Gaussian-weighted objective, in which B_f is the detection result of superpixel p obtained by the BP network and the B_i are the neighborhood results. The parameter α is obtained from the saliency difference between superpixel p and superpixel i: the larger the saliency difference, the larger α and the more the result depends on the preliminary detection; the smaller α, the more the result is influenced by the surrounding pixel blocks. α is:
α = max(β) = max(exp(-(||S_p|| - ||S_i||) * 10))
further, the bilateral filtering optimization is performed on the detection result in the step 8 to obtain a final result, and the bilateral filtering is used for optimizing the result obtained in the step 7.
Further, in step 2 the singular value vector is obtained by singular value decomposition:

I = U S V^T = Σ_{i=1..r} δ_i u_i v_i^T

where U and V are two orthogonal matrices, u_i and v_i are the column vectors of U and V respectively, S is a diagonal matrix, and the diagonal entries δ_i of S form the singular value vector.
Further, in step 2 the local extreme points are determined as follows: with S_{i,j} denoting the intensity of pixel (i, j), the pixel is a "米"-type local extreme point when its intensity is a strict extremum with respect to all eight neighbors of its 3×3 neighborhood, i.e.

S_{i,j} > S_{x,y} for all eight neighbors (x, y), or S_{i,j} < S_{x,y} for all eight neighbors (x, y).
Preferably, in step 2 the HiFST coefficient is computed as follows:
2.1 convert the color image I into a gradient image K;
2.2 compute all DCT high-frequency coefficients within the 3×3, 7×7, 15×15 and 31×31 windows centered on pixel (i, j), and sort these values by magnitude to obtain a vector;
2.3 max-pool this vector to obtain the HiFST coefficient (see the sketch below).
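The following minimal sketch in Python illustrates steps 2.1 to 2.3 under stated assumptions: scipy's DCT is used, the high-frequency set is taken as the anti-diagonal half of each DCT grid, border windows are skipped, and the entropy weighting of the full HiFST method [7] is omitted; the function name and the gradient-image construction shown in the usage comment are illustrative, not the patent's code.

import numpy as np
from scipy.fftpack import dct

def hifst_coefficient(grad_img, i, j, scales=(3, 7, 15, 31)):
    # Step 2.2: gather DCT high-frequency magnitudes at every scale.
    coeffs = []
    for s in scales:
        r = s // 2
        patch = grad_img[i - r:i + r + 1, j - r:j + r + 1]
        if patch.shape != (s, s):   # skip windows clipped by the image border
            continue
        d = dct(dct(patch, axis=0, norm='ortho'), axis=1, norm='ortho')
        u, v = np.meshgrid(np.arange(s), np.arange(s), indexing='ij')
        coeffs.extend(np.abs(d[u + v >= s]))   # assumed high-frequency half
    vec = np.sort(np.asarray(coeffs))          # sort the values by magnitude
    return float(vec.max()) if vec.size else 0.0   # step 2.3: max pooling

# usage: K = np.hypot(*np.gradient(gray.astype(float))) gives a gradient image (step 2.1),
# then hifst_coefficient(K, 64, 64) evaluates the feature at pixel (64, 64).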
Preferably, in step 4, the specific method for detecting significance is as follows:
4.1 Segment the image by superpixel segmentation. According to several preset superpixel counts, a number of segmentation maps are obtained, denoted {s_1, s_2, ..., s_m}, where s_1 is the segmentation map with the most superpixels and s_m the one with the fewest; saliency detection is then performed on each map. In other words, saliency detection of the image is reduced to detection on individual superpixel regions. Since this alone would ignore the connections between adjacent superpixels, three different types of feature value are proposed for the saliency detection of a single superpixel: the saliency features of the superpixel itself, the constraint features between adjacent superpixels, and the background features of the superpixel. These three feature types are described separately below.
4.1.1 The first type is the saliency feature of a single superpixel, i.e. features that strongly distinguish salient regions from non-salient ones: mainly RGB features, LAB features, HSV features, normalized perimeter features, LM filter response features and the like, which together form a 34-dimensional vector.
4.1.2 The second type is the constraint feature between two adjacent superpixels. If a superpixel is a salient block, surrounding superpixels with similar features are also very likely to be salient blocks. The constraint features mainly compare the differences between a superpixel and its neighborhood superpixels: the RGB difference, LAB difference, LM filter response difference and its maximum, LAB histogram difference, color histogram difference and saturation histogram difference of the two superpixels, which together form a 26-dimensional vector.
4.1.3 The third type indicates whether a single superpixel belongs to the background, i.e. distinguishes background superpixels directly by background color and texture. These features are obtained by comparing the features of salient regions with those of non-background regions; they have the same dimensionality as the adjacent-superpixel constraint features, again a 26-dimensional feature vector.
4.2 The three feature types are combined into an 86-dimensional vector, which is put into a random forest for supervised training to obtain a saliency detection result. Each image thus has m saliency detection results, and a parameter, namely the superpixel count of each result, determines its proportion in the final saliency detection result (see the training sketch below).
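A minimal training sketch for step 4.2 follows; scikit-learn's random forest stands in for the random forest named above, and the tree count, the regression formulation and all names are assumptions:

import numpy as np
from sklearn.ensemble import RandomForestRegressor

def train_saliency_forest(features, saliency):
    # features: (n_superpixels, 86) = 34 regional + 26 contrast + 26 background features
    # saliency: (n_superpixels,) supervision targets, e.g. mean ground-truth saliency
    forest = RandomForestRegressor(n_estimators=200, random_state=0)
    forest.fit(features, saliency)
    return forest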
Preferably, in step 6, the specific method for dividing the super-pixels is as follows:
6.1 Initialize the cluster centers. The color image is read in the RGB color space and converted to the CIELAB color space, which is convenient for computation, so the segmentation is performed in CIELAB space. With K cluster centers to initialize, each center is expressed as a five-dimensional feature vector V_i = [l_i, a_i, b_i, x_i, y_i]^T, where (l, a, b) are the three color components of the pixel in CIELAB space, (x, y) are its planar position, and i ∈ [1, K] indexes the cluster center. For clustering, the image is first divided uniformly into K grid cells with side length

S = sqrt(N / K)

where N is the total number of pixels in the color image. To prevent a cluster center from landing on an edge or on a noise point, the SLIC algorithm also examines a 3×3 block and selects the point with the smallest gradient in this block as the cluster center.
6.2 Cluster the pixels. Each pixel in the image receives the label of its nearest cluster center. To speed up computation, the SLIC algorithm restricts the clustering of each pixel to a suitable region centered on the cluster center. This greatly improves performance compared with traditional K-means clustering, which computes similarities between every pixel and all cluster centers and is therefore far more complex and time-consuming than SLIC.
6.3 Update the cluster centers. After every pixel has been given its label, the algorithm recomputes each cluster center; the new center position is the average position of all pixels belonging to that cluster center. Each time the cluster centers change, the algorithm recomputes and re-clusters all pixels. The update is repeated until the movement of the cluster centers falls below the threshold set at the start of the program, at which point the algorithm is complete.
6.4 Post-process. When the algorithm stops iterating, superpixels with the same label may fail to be adjacent, i.e. two or more non-adjacent superpixels can carry the same label. To solve this problem, SLIC merges adjacent regions with the same label; because of this merging, the number K of superpixels set by SLIC is slightly larger than the number of superpixels in the final segmentation. To measure the similarity between superpixels, the SLIC algorithm proposes a new distance measure D. With d_c and d_s denoting the color distance and the position distance between pixels m and n,

d_c = sqrt((l_m - l_n)^2 + (a_m - a_n)^2 + (b_m - b_n)^2)
d_s = sqrt((x_m - x_n)^2 + (y_m - y_n)^2)

the distance D between two pixels combines their spatial Euclidean distance with their color difference. Because there is no common unit for measuring the two different features, color and position, adding their differences directly makes the distance inaccurate. To couple the two feature values, the SLIC algorithm introduces the concepts of a maximum color feature distance N_c and a maximum spatial position distance N_s, normalizes the color component and the distance component by them, and defines the normalized distance D' so that the color feature and the distance feature share a unified unit of measure:

D' = sqrt((d_c / N_c)^2 + (d_s / N_s)^2)

where N_s = S = sqrt(N / K). Since the color differences between pixels can vary widely within an image, the SLIC paper simply sets the maximum color feature distance N_c to a constant m, so that D' becomes

D' = sqrt((d_c / m)^2 + (d_s / N_s)^2)

Adjusting the parameter m tunes the weight of the color distance d_c relative to the spatial distance d_s, which directly affects the computed distance and hence the compactness of adjacent superpixels; because m adjusts the relationship between the color feature and the distance feature, it is called the compactness parameter. In the SLIC algorithm the value of m generally ranges over [1, 40], which yields optimal results (see the segmentation sketch below).
Preferably, in step 8 the bilateral filtering proceeds as follows. The bilateral filter is given by

g(i, j) = Σ_{k,l} f(k, l) w(i, j, k, l) / Σ_{k,l} w(i, j, k, l)

where f(k, l) is the input, g(i, j) is the output, and the weight w is the product of a spatial filter kernel h and a range filter kernel r:

h(i, j, k, l) = exp(-((i - k)^2 + (j - l)^2) / (2 σ_d^2))
r(i, j, k, l) = exp(-||f(i, j) - f(k, l)||^2 / (2 σ_r^2))

Here (i, j) is the current (center) pixel position, (k, l) a neighboring pixel position, σ_d the spatial standard deviation and σ_r the range standard deviation; these two parameters must be supplied by us. In this experiment σ_d is set to 3 and σ_r to 0.5 (see the filtering sketch below).
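A sketch of the step-8 smoothing with OpenCV's bilateral filter follows; mapping σ_d = 3 to sigmaSpace and σ_r = 0.5 to sigmaColor assumes the blur map is stored as a single-channel float32 image with values in [0, 1]:

import cv2
import numpy as np

blur_map = np.random.rand(240, 320).astype(np.float32)   # stand-in for the step-7 result
refined = cv2.bilateralFilter(blur_map,
                              d=-1,             # neighborhood size derived from sigmaSpace
                              sigmaColor=0.5,   # range standard deviation (sigma_r)
                              sigmaSpace=3)     # spatial standard deviation (sigma_d)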
Example:
it can be seen from the examples that fig. 8(a) represents the original image, fig. 8(b) represents the result graph of the artificial labeling, and we can see that the Liu method, the Chakrabarti method, the Su method, the Shi14 method, the Shi15 method, the Yi method, the Tang method, the aireza method, and the BP method are all prone to false detection of clear texture flat areas, and the detection effects of the methods are not good.
The technical problems and key innovation points aimed at in the invention are mainly discussed as follows:
local extreme point
The local extreme points of an image [12] represent its high-frequency information; since a blurred region carries less high-frequency information than a sharp region, they can serve as a feature distinguishing sharp from blurred regions. The local extreme points of a two-dimensional image fall into five types, namely EM1, EM2, EM3, EM4 and EM5. The first four are one-dimensional local extreme points and the fifth is a two-dimensional local extreme point. Here S_{i,j} denotes the intensity of pixel (i, j); Table 1 shows the fifth, "米"-type ("meter"-type) local extreme point of a pixel within a 3×3 region, noise not being considered.
Table 1. Local extreme point types EM1 to EM5.
The "米"-type local extreme points in Table 1 all reflect the high-frequency information of the image; for a blurred image, the count of "米"-type local extreme points differs markedly between sharp and blurred regions, so this fifth type is used to detect blurred regions. In fig. 1(a), blocks 1 and 2 are two blurred blocks of size 40×40 and blocks 3 and 4 are two sharp blocks of the same size; fig. 1(b) is the detection result obtained using the number of "米"-type extreme points as the feature; fig. 2 tallies the numbers of the five types of local extreme points in blurred blocks 1 and 2 and sharp blocks 3 and 4. Fig. 2 shows that the count of "米"-type local extreme points differs greatly between sharp and blurred blocks, whereas the other four types hardly separate them; the invention therefore uses the "米"-type local extreme points as the feature distinguishing blurred regions from sharp regions.
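The count of such points in a block can be computed directly; the sketch below counts pixels that are strict extrema over all eight neighbors of their 3×3 neighborhood, which is one natural reading of the "米"-type condition of Table 1 (all names are illustrative):

import numpy as np

def meter_type_extrema_count(block):
    # Compare each interior pixel with all eight of its 3x3 neighbors.
    H, W = block.shape
    c = block[1:-1, 1:-1]
    shifts = [block[1 + di:H - 1 + di, 1 + dj:W - 1 + dj]
              for di in (-1, 0, 1) for dj in (-1, 0, 1) if (di, dj) != (0, 0)]
    neighbours = np.stack(shifts)
    is_max = c > neighbours.max(axis=0)     # strict maximum in every direction
    is_min = c < neighbours.min(axis=0)     # strict minimum in every direction
    return int((is_max | is_min).sum())

# e.g. meter_type_extrema_count(gray[y:y + 8, x:x + 8]) gives the per-block feature.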
Singular value vector
For a given image I of size m×n, its singular value decomposition is:

I = U S V^T = Σ_{i=1..r} δ_i u_i v_i^T    (1)

where U and V are two orthogonal matrices, u_i and v_i are the column vectors of U and V respectively, S is a diagonal matrix and the δ_i are its diagonal entries. Equation (1) shows that the image I can be decomposed into a sum of r rank-1 matrices weighted by the singular values, which is a transform-domain characterization: the large singular values represent the overall shape of the image, while the small singular values correspond to its high-frequency detail. Because a blurred image lacks detail, i.e. high-frequency information, the trailing entries of its singular value vector are near zero. Su et al. therefore use the proportion of the total taken by the first few large singular values as the criterion for judging whether an image is sharp or blurred: since blurring loses image detail, that proportion approaches 1. This has a certain defect, however. As shown in fig. 3, we select four 8×8 blocks, two blurred and two sharp. As fig. 4 shows, the first three singular values of the blurred blocks take a high proportion of the total and the following five are essentially zero, but the first three singular values of the sharp blocks also take a high proportion while the following five are essentially non-zero. The singular value proportion is therefore not used here; instead the singular value vector itself is used as the feature.
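The transform-domain feature of equation (1) for an 8×8 block is a one-line SVD; the sketch below also evaluates the leading-proportion criterion of Su et al. for comparison, using the first three singular values to match the discussion of fig. 4 (the function name is illustrative):

import numpy as np

def singular_value_features(block):
    # Singular values in descending order; the full vector is the feature used here.
    s = np.linalg.svd(block.astype(float), compute_uv=False)
    leading_ratio = s[:3].sum() / (s.sum() + 1e-12)   # Su et al.'s ratio, near 1 when blurred
    return s, leading_ratio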
The discrete cosine transform (DCT) converts spatial-domain information into frequency-domain information. After an image undergoes the DCT, its coefficients reflect the distribution of frequency content within the image, and blurring destroys the image's high-frequency information. The discrete cosine transform of an M×N image I is shown in equation (2), given here in the standard DCT-II form:

F(u, v) = α(u) α(v) Σ_{x=0..M-1} Σ_{y=0..N-1} I(x, y) cos[(2x+1)uπ/(2M)] cos[(2y+1)vπ/(2N)]    (2)

where α(u) = sqrt(1/M) for u = 0 and sqrt(2/M) otherwise, and α(v) is defined analogously with N.
in the present invention, HiFST coefficients are used[7]The HiFST coefficient is a coefficient obtained by entropy-weighting and pooling DCT coefficients, which are a feature for distinguishing a sharp region from a blurred region of an image. The calculation method is as follows: firstly, the color image I is converted into a gradient image K, then all DCT high-frequency coefficients within the range of 3 x 3,7 x 7,15 x 15,31 x 31 with the pixel (I, j) as the center are calculated, the values are sorted according to the size to obtain a vector, and finally the vector is subjected to maximum pooling to obtain the HiFST coefficient.
BP neural network structure
The BP neural network is an artificial neural network with a feed-forward topology; the topology has three layers, and the network is very widely used. The BP network is self-learning, has strong classification ability and is easy to implement, which is why it is chosen as the classifier here.
The invention adopts a traditional three-layer BP network that can be divided into an input layer, a hidden layer and an output layer. Specifically: the number of input nodes is 10, corresponding to the number of "米"-type local extreme points in the 8×8 pixel block centered on pixel (i, j), the HiFST coefficient and the singular value vector, together representing a 1×10-dimensional vector; the number of output nodes is 1, the blur detection value produced by the BP neural network; 20 hidden-layer neurons are used; in the experiments the number of iterations is set to 2000, the error threshold to 0.0006 and the learning rate to 0.04.
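A training sketch for this 10-20-1 network follows, with scikit-learn's MLPRegressor standing in for the BP trainer; the sigmoid activation, the SGD solver and the mapping of the stated error threshold and learning rate onto tol and learning_rate_init are assumptions:

from sklearn.neural_network import MLPRegressor

def train_bp_network(features, blur_labels):
    # features: (n_samples, 10) mixed vectors; the assumed layout is one
    # extremum count, one HiFST coefficient and eight singular values.
    net = MLPRegressor(hidden_layer_sizes=(20,),   # 20 hidden-layer neurons
                       activation='logistic',      # sigmoid units (assumed)
                       solver='sgd',
                       learning_rate_init=0.04,    # learning rate from the text
                       max_iter=2000,              # iteration limit from the text
                       tol=0.0006)                 # error threshold from the text
    net.fit(features, blur_labels)
    return net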
Constraint of significance
In the invention, the result obtained by BP detection is constrained using image saliency, so that the detection method detects flat texture regions more accurately. Salient region detection matters greatly for image understanding and analysis; its goal is to detect the salient regions in an image, i.e. the places that attract the human eye. Saliency detection has many uses, such as image segmentation, image recognition, image object reconstruction and image quality assessment. For a blurred image, whether defocus-blurred or motion-blurred, the human eye first notices the more prominent, distinct salient regions.
The saliency detection method used by the invention is DRFI [13]. Fig. 5(a) is an original image, fig. 5(b) the saliency detection result and fig. 5(c) the blur detection map obtained with the HiFST method; fig. 6 clearly shows that the larger the saliency difference between two pixels, the larger their sharpness difference. From this saliency behavior of blurred images we propose a view: regions of similar saliency should have similar degrees of blur, and the salient regions of an image should cluster together rather than be distributed across the entire image. The optimization method of the invention starts from this view: the image is divided into N superpixels by superpixel segmentation, the method used here being SLIC [14], and the preliminary result from the BP classifier is taken as the initial value of the N superpixels. Then the saliency constraint is applied: if the saliency of a superpixel differs little from that of its adjacent superpixels, its degree of blur can be presumed to depend both on its own initial blur detection result and on the blur detection results of the surrounding superpixels; conversely, if its saliency differs greatly from its surroundings, its degree of blur depends mainly on its own initial blur detection result.
Let B_p and S_p be the blur detection result and the saliency detection result of superpixel p, and let B_i and S_i (1 ≤ i ≤ N) be the blur detection result and the saliency detection result of its neighborhood superpixel i. B_p has no direct relation to S_p: a region with large S_p does not necessarily have large B_p, and vice versa; likewise B_i has no direct relation to S_i. As demonstrated above, however, the relation between S_p and S_i affects the relation between B_p and B_i: the smaller the difference between S_p and S_i, the smaller the difference between B_p and B_i should be. The blur detection value of a superpixel is therefore determined by the blur values and the saliency values of its neighboring superpixels. The closer the saliency of two superpixels, the closer their degree of blur, so the blur value and the saliency value are modeled with a Gaussian distribution, giving the neighbor weights
β = exp(-(||S_p|| - ||S_i||) * 10)
The optimal B_p is obtained by maximizing the resulting Gaussian-weighted objective, in which B_f is the detection result of superpixel p obtained by the BP network and the B_i are the neighborhood results. The parameter α is obtained from the saliency difference between superpixel p and superpixel i: the larger the saliency difference, the larger α and the more the result depends on the preliminary detection; the smaller the value of α, the more the result is influenced by the surrounding pixel blocks. In the scheme of the invention,
α = max(β) = max(exp(-(||S_p|| - ||S_i||) * 10))    (8)
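Because the objective itself appears only as images in the source, the sketch below assumes the maximization reduces to an α-weighted blend of the BP prediction with a β-weighted neighborhood average; the blend form, the dense per-superpixel arrays and the function name are assumptions, not the patent's exact formulation:

import numpy as np

def refine_blur(B_f, S, neighbours):
    # B_f: (N,) preliminary BP blur values per superpixel
    # S:   (N,) saliency value per superpixel
    # neighbours: dict mapping superpixel id -> list of adjacent superpixel ids
    B = np.asarray(B_f, dtype=float)
    S = np.asarray(S, dtype=float)
    out = B.copy()                # superpixels without neighbours keep their BP value
    for p, nbrs in neighbours.items():
        nbrs = np.asarray(nbrs)
        beta = np.exp(-np.abs(S[p] - S[nbrs]) * 10)    # Gaussian saliency weights
        alpha = beta.max()                             # alpha = max(beta), as in eq. (8)
        nb_avg = (beta * B[nbrs]).sum() / beta.sum()   # assumed weighted neighbourhood average
        out[p] = alpha * B[p] + (1 - alpha) * nb_avg   # assumed blend of the two terms
    return out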
method flow chart
Finally, the algorithmic flow of the invention is summarized as follows:
First step: input the image to be detected and predict with the trained BP network to obtain a preliminary result.
Second step: run the saliency method and the superpixel method on the input picture to obtain a saliency detection map and a superpixel segmentation map.
Third step: obtain a further detection result with the saliency constraint method proposed above.
Fourth step: optimize the obtained result with bilateral filtering to obtain the final blur detection result.
The flow chart is shown in fig. 7.
experiment comparison, analysis and verification
To prove the validity of the saliency-based blur region detection method, the experimental results of the present method are compared with the results of Liu et al. [10], Chakrabarti et al. [9], Su et al. [3], Shi14 et al. [11], Shi15 et al. [8], Yi et al. [6], Tang et al. [2] and Alireza et al. [7]. We use the dataset published by Shi14 et al. [11], which contains a total of 296 locally motion-blurred pictures and 704 locally defocus-blurred pictures.
The following are the blur detection results of some experimental samples under all the methods mentioned above. The present method performs well in distinguishing flat texture regions from sharp texture regions, and the blur detection maps it produces are very close to the ground-truth maps.
The experimental results show that the Liu, Chakrabarti, Su, Shi14, Shi15, Yi, Tang and Alireza methods, as well as plain BP, tend to falsely detect flat regions of sharp texture; even where the BP detection result is inaccurate, the saliency constraint enables the present detection method to produce accurate results on flat texture regions.
In conclusion, the invention provides a new solution to the problem that flat regions of sharp texture are easily misdetected as blurred for lack of high-frequency information: singular value vectors representing transform-domain information, local extreme points representing the image's high-frequency information and HiFST coefficients are combined; prediction and analysis with a BP neural network yield a comparatively accurate detection result; a global constraint based on the relationship between image saliency and sharpness improves it further; and finally bilateral filtering optimizes the experimental result. Both qualitative and quantitative experiments show that the proposed method obtains good detection results on flat regions of sharp texture, that the two different types of selected feature values complement each other, and that the method is highly robust.
References
[1] Narvekar N D, Karam L J. A no-reference image blur metric based on the cumulative probability of blur detection[J]. IEEE Transactions on Image Processing, 2011, 20(9): 2678-2683.
[2] Tang C, et al. A spectral and spatial approach of coarse-to-fine blurred image region detection[J]. IEEE Signal Processing Letters, 2016, 23(11): 1652-1656.
[3] Su B, Lu S, Tan C L. Blurred image region detection and classification[C]//Proceedings of the ACM International Conference on Multimedia. Toronto: IEEE, 2011: 1397-1400.
[4] Huang Q C, et al. Local blur measurement of images based on BP neural network[J]. Journal of Image and Graphics, 2015, (1): 20-28.
[5] Javaran T A, Hassanpour H, Abolghasemi V. Automatic estimation and segmentation of partial blur in natural images[J]. The Visual Computer, 2017, 33(2): 151-161.
[6] Yi X, Eramian M. LBP-based segmentation of defocus blur[J]. IEEE Transactions on Image Processing, 2016, 25(4): 1626-1638.
[7] Golestaneh S A, Karam L J. Spatially-varying blur detection based on multiscale fused and sorted transform coefficients of gradient magnitudes[J]. arXiv preprint arXiv:1703.07478, 2017.
[8] Shi J, Xu L, Jia J. Just noticeable defocus blur detection and estimation[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015.
[9] Chakrabarti A, Zickler T, Freeman W T. Analyzing spatially-varying blur[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. San Francisco: IEEE, 2010: 2512-2519.
[10] Liu R, Li Z, Jia J. Image partial blur detection and classification[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Anchorage: IEEE, 2008: 1-8.
[11] Shi J, Xu L, Jia J. Discriminative blur detection features[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2014.
[12] Zheng S, Jayasumana S, Romera-Paredes B, et al. Conditional random fields as recurrent neural networks[C]//Proceedings of the IEEE International Conference on Computer Vision. Boston: IEEE, 2015: 1529-1537.
[13] Jiang H, Wang J, Yuan Z, et al. Salient object detection: a discriminative regional feature integration approach[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. IEEE, 2013: 2083-2090.
[14] Achanta R, Shaji A, Smith K, et al. SLIC superpixels compared to state-of-the-art superpixel methods[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2012, 34(11): 2274-2282.

Claims (9)

1. A saliency-based image local blur detection method, characterized by comprising the following steps:
Step 1: input a color image;
Step 2: compute the feature vector of the result of step 1 to obtain the feature vector;
Step 3: train a BP neural network on the result of step 2 to obtain a trained BP neural network;
Step 4: run saliency detection on the result of step 1 to obtain a saliency detection map of the image;
Step 5: obtain a BP prediction result map from the result of step 3;
Step 6: segment the result of step 1 with a superpixel segmentation method to obtain a superpixel segmentation map of the image;
Step 7: combine the saliency detection map obtained in step 4, the BP prediction result map obtained in step 5 and the superpixel segmentation map obtained in step 6 to obtain a detection result, specifically:
7.1 divide the image into N superpixels with the superpixel segmentation method;
7.2 take the BP prediction result map as the initial value;
7.3 apply the saliency constraint: let B_p and S_p be the blur detection result and the saliency detection result of superpixel p, and let B_i and S_i (1 ≤ i ≤ N) be the blur detection result and the saliency detection result of its neighborhood superpixel i; the blur detection result of superpixel p is expressed as the maximizer of a Gaussian-weighted objective in which B_f is the detection result of superpixel p obtained by the BP network, the B_i are the neighborhood results, and the parameter α is obtained from the saliency difference between superpixel p and neighborhood superpixel i:
α = max(β) = max(exp(-(||S_p|| - ||S_i||) * 10));
Step 8: optimize the detection result obtained in step 7 with bilateral filtering, obtain the final detection result and output it.
2. The saliency-based image local blur detection method of claim 1, characterized in that the feature vector in step 2 comprises a singular value vector, local extreme points and a HiFST coefficient;
the singular value vector is the vector of diagonal entries of the diagonal matrix obtained by singular value decomposition of a matrix;
the local extreme points are features characterizing the high-frequency information of the image;
the HiFST coefficient is a coefficient obtained by entropy-weighting and pooling discrete cosine transform coefficients.
3. The saliency-based image local blur detection method of claim 1, characterized in that the BP neural network in step 3 is a traditional three-layer BP network, which can be divided into an input layer, a hidden layer and an output layer, with the following settings:
3.1 the number of input nodes is set to 10, corresponding to the number of "米"-type local extreme points in the 8×8 pixel block centered on pixel (i, j), the HiFST coefficient and the singular value vector, together one 1×10-dimensional vector;
3.2 the number of output nodes is set to 1; the output is the blur detection value produced by the BP neural network;
3.3 20 hidden-layer neurons are used; in the experiments the number of iterations is set to 2000, the error threshold to 0.0006 and the learning rate to 0.04.
4. The saliency-based image local blur detection method of claim 1, characterized in that the saliency detection map in step 4 is the detection result map obtained by running saliency detection on the image, yielding the most salient regions in the image.
5. The saliency-based image local blur detection method of claim 1, characterized in that the BP prediction result map in step 5 is the result map obtained by feeding the image into the trained BP neural network.
6. The saliency-based image local blur detection method of claim 1, characterized in that the superpixel segmentation map in step 6 is the detection result map obtained by superpixel-segmenting the image, dividing the image into a number of superpixel blocks.
7. The saliency-based image local blur detection method of claim 1, characterized in that in step 8 the detection result is optimized by bilateral filtering to obtain the final result; the bilateral filter is applied to the result obtained in step 7.
8. The saliency-based image local blur detection method of claim 2, characterized in that in step 2 the singular value vector is obtained by singular value decomposition:
I = U S V^T = Σ_{i=1..r} δ_i u_i v_i^T
where U and V are two orthogonal matrices, u_i and v_i are the column vectors of U and V respectively, S is a diagonal matrix, and the diagonal entries δ_i of S form the singular value vector.
9. The saliency-based image local blur detection method of claim 2, characterized in that in step 2, with S_{i,j} denoting the intensity of pixel (i, j), a pixel is a "米"-type local extreme point when its intensity is a strict extremum with respect to all eight neighbors of its 3×3 neighborhood:
S_{i,j} > S_{x,y} for all eight neighbors (x, y), or S_{i,j} < S_{x,y} for all eight neighbors (x, y).
CN201810498275.0A 2018-05-22 2018-05-22 Saliency-based image local blur detection method Active CN109035196B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810498275.0A CN109035196B (en) 2018-05-22 2018-05-22 Saliency-based image local blur detection method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810498275.0A CN109035196B (en) 2018-05-22 2018-05-22 Saliency-based image local blur detection method

Publications (2)

Publication Number Publication Date
CN109035196A CN109035196A (en) 2018-12-18
CN109035196B true CN109035196B (en) 2022-07-05

Family

ID=64611401

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810498275.0A Active CN109035196B (en) 2018-05-22 2018-05-22 Saliency-based image local blur detection method

Country Status (1)

Country Link
CN (1) CN109035196B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109977892B (en) * 2019-03-31 2020-11-10 西安电子科技大学 Ship detection method based on local saliency features and CNN-SVM
CN110083430B (en) * 2019-04-30 2022-03-29 成都映潮科技股份有限公司 System theme color changing method, device and medium
CN110826726B (en) * 2019-11-08 2023-09-08 腾讯科技(深圳)有限公司 Target processing method, target processing device, target processing apparatus, and medium
CN110838150B (en) * 2019-11-18 2022-07-15 重庆邮电大学 Color recognition method for supervised learning
CN111208148A (en) * 2020-02-21 2020-05-29 凌云光技术集团有限责任公司 Light-leakage defect detection system for hole-punched screens
CN117475091B (en) * 2023-12-27 2024-03-22 浙江时光坐标科技股份有限公司 High-precision 3D model generation method and system

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100530222C (en) * 2007-10-18 2009-08-19 清华大学 Image matching method
US9990712B2 (en) * 2015-04-08 2018-06-05 Algotec Systems Ltd. Organ detection and segmentation
CN104915636B (en) * 2015-04-15 2019-04-12 北京工业大学 Remote sensing image road recognition methods based on multistage frame significant characteristics
CN106780479A (en) * 2016-12-31 2017-05-31 天津大学 A kind of high precision image fuzzy detection method based on deep learning
CN107274419B (en) * 2017-07-10 2020-10-13 北京工业大学 Deep learning significance detection method based on global prior and local context

Also Published As

Publication number Publication date
CN109035196A (en) 2018-12-18

Similar Documents

Publication Publication Date Title
CN109035196B (en) Saliency-based image local blur detection method
CN108537239B (en) Method for detecting image saliency target
CN111340824B (en) Image feature segmentation method based on data mining
CN109934224B (en) Small target detection method based on Markov random field and visual contrast mechanism
CN111652317B (en) Super-parameter image segmentation method based on Bayes deep learning
CN109978848B (en) Method for detecting hard exudation in fundus image based on multi-light-source color constancy model
CN106991686B (en) A kind of level set contour tracing method based on super-pixel optical flow field
CN110415260B (en) Smoke image segmentation and identification method based on dictionary and BP neural network
CN109840483B (en) Landslide crack detection and identification method and device
CN106157330B (en) Visual tracking method based on target joint appearance model
CN111986125A (en) Method for multi-target task instance segmentation
CN108734200B (en) Human target visual detection method and device based on BING (building information network) features
CN110852327A (en) Image processing method, image processing device, electronic equipment and storage medium
CN108647703B (en) Saliency-based classification image library type judgment method
Tangsakul et al. Single image haze removal using deep cellular automata learning
CN109741358B (en) Superpixel segmentation method based on adaptive hypergraph learning
CN110910497B (en) Method and system for realizing augmented reality map
CN110211106B (en) Mean shift SAR image coastline detection method based on segmented Sigmoid bandwidth
CN111797795A (en) Pedestrian detection algorithm based on YOLOv3 and SSR
CN108765384B (en) Significance detection method for joint manifold sequencing and improved convex hull
CN111539966A (en) Colorimetric sensor array image segmentation method based on fuzzy c-means clustering
CN113095332B (en) Saliency region detection method based on feature learning
CN107085725B (en) Method for clustering image areas through LLC based on self-adaptive codebook
CN112686222B (en) Method and system for detecting ship target by satellite-borne visible light detector
Tang et al. Research of color image segmentation algorithm based on asymmetric kernel density estimation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant