CN109035196A - Saliency-Based Image Local Blur Detection Method - Google Patents


Info

Publication number
CN109035196A
CN109035196A (application CN201810498275.0A)
Authority
CN
China
Prior art keywords
saliency
pixel
image
result
superpixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810498275.0A
Other languages
Chinese (zh)
Other versions
CN109035196B (en)
Inventor
方贤勇
丁成
汪粼波
王华彬
周健
李薛剑
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Anhui University
Original Assignee
Anhui University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Anhui University filed Critical Anhui University
Priority to CN201810498275.0A
Publication of CN109035196A
Application granted
Publication of CN109035196B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/0002: Inspection of images, e.g. flaw detection
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods
    • G06N 3/084: Backpropagation, e.g. using gradient descent
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/10: Segmentation; Edge detection
    • G06T 7/11: Region-based segmentation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/20: Special algorithmic details
    • G06T 2207/20024: Filtering details
    • G06T 2207/20028: Bilateral filtering
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/20: Special algorithmic details
    • G06T 2207/20081: Training; Learning


Abstract

Aiming at the problem that sharp but texture-flat regions, which lack high-frequency information, are easily misdetected as blurred regions, the invention provides a saliency-based image local blur detection method comprising the following steps: combining singular value vectors, which characterize the image in the transform domain, with local extrema and entropy-sorted, max-pooled DCT high-frequency coefficients (HiFST coefficients), which reflect the image's high-frequency information, so that the two types of features complement each other and yield a stronger feature vector; feeding the mixed feature vector into a BP neural network for training and obtaining a preliminary result by prediction; combining the preliminary result with image saliency detection and obtaining a refined detection result through the saliency constraint of the image; and obtaining the final result by optimizing edge information with bilateral filtering. Qualitative and quantitative experiments on a public large-scale dataset show that the method achieves good blur detection performance.

Description

Saliency-based image local blur detection method
Technical field
The invention belongs to the technical field of image processing, and in particular relates to a saliency-based image local blur detection method.
Background art
Blur, as one branch of the image processing field, has attracted growing attention from researchers, and image local blur detection technology has made considerable breakthroughs. Before blur detection methods based on a single image appeared, most methods detected blur from several consecutive blurred images: they obtained multiple images and camera parameters to perform blur detection, but such methods are severely limited and must meet many requirements, for example much prior knowledge such as camera parameters, as well as a static image background. In recent years, methods that work on a single blurred image without any prior information have been proposed. Existing single-image blur detection methods fall broadly into two classes: blur detection based on a single feature that distinguishes blurred from sharp regions, and blur detection based on a hybrid measurement of multiple features distinguishing sharp and blurred regions:
(1) Blur detection based on a single feature distinguishing blurred and sharp regions. Blurred and sharp regions of an image differ essentially in gradient, frequency domain and transform domain, so many methods detect blur with a single feature that separates the two. Narvekar et al.[1] detect blurred regions by analyzing image edge information. Tang et al.[2] study how the spectral residual differs between sharp and blurred regions to obtain a preliminary detection result, then iteratively optimize that result with the color and gradient information of neighboring regions to obtain a refined blur detection result. Su et al.[3] exploit the relation between pixel singular values and image blur: the leading singular values account for a higher proportion in blurred regions than in sharp regions, so the proportion of the leading singular values is used to detect blurred regions. Huang et al.[4], building on the work of Su et al., take the singular value vector as the feature distinguishing sharp from blurred regions, combine it with DCT coefficients, train a BP network, and predict the final detection result. Javaran et al.[5] analyze how DCT coefficients change before and after blurring: the image is re-blurred with a Gaussian filter, the DCT coefficients at scales 9×9, 19×19 and 45×45 are compared between the two images, and the average of the ratios at these three scales serves as the feature distinguishing blurred from sharp regions. Yi and Eramian[6] propose a sharpness measure based on local binary patterns and, using multi-scale image information, a robust segmentation method for separating in-focus and out-of-focus regions. Alireza et al.[7] propose blur detection based on DCT high-frequency coefficients combined with sorting: the image is first turned into a gradient image, then, with a multi-scale scheme, the DCT high-frequency coefficients of different scales are sorted according to entropy, and max pooling and Gaussian filtering yield the final detection result; however, because this detection works in the frequency domain, sharp flat regions that contain no high-frequency information are misdetected. Shi et al.[8] propose patch-level blur detection based on sparse representation, learning a dictionary to detect blur just noticeable to the human eye. Such single-feature methods can detect blur only from the frequency domain or the spatial domain; for sharp regions that resemble blurred regions in one of these domains, such as texture-flat regions that, like blurred regions, contain no high-frequency information, they easily produce false detections.
(2) Blur detection based on a hybrid measurement of multiple features. Chakrabarti et al.[9] combine local frequency components, local gradient transforms and color information under a Markov random field segmentation framework and can detect motion-blurred regions through blur-kernel identification, but perform poorly on defocused images or images with inconspicuous spatial variation. Liu et al.[10] combine the local power-spectrum slope, gradient distribution histogram, full-color saturation and the autocorrelation of local color, gradient and spectral information with a Bayes classifier to detect blurred and sharp regions. Shi et al.[11] propose several new features for blur measurement: the peak and heavy-tailed distribution of image gradients, frequency-domain spectra, and local filters learned from ground-truth maps; these are combined with a Bayes classifier and the obtained result is refined by multi-scale analysis to produce the final result.
Hybrid measurement with multiple features that distinguish blurred and sharp regions avoids the possible failure of a single feature type: the image can be partitioned from both the frequency domain and the spatial domain, and detection under such multiple constraints is more accurate. However, the selection and fusion of the multiple features becomes the key factor influencing the detection result; choosing feature types that are too singular or too diverse both cause problems. If the selected features are too diverse, detection carries large noise and many obviously blurred regions are misdetected as sharp; if the chosen feature is too singular, some regions are likewise misdetected. Multi-feature fusion is nevertheless effective, and it is the approach chosen here; in feature selection, this work combines the singular value vector, which represents transform-domain information of the image, with the local extrema and HiFST coefficients, which represent frequency-domain information, so that the transform and frequency domains are considered simultaneously and the chosen features are more complementary and effective.
Summary of the invention
In view of the deficiencies of the prior art, the present invention provides a saliency-based image local blur detection method, specifically as follows:
The saliency-based image local blur detection method proceeds as follows:
Step 1: input a color image;
Step 2: solve for the feature vector on the result of step 1, obtaining the feature vector;
Step 3: perform BP neural network training on the result of step 2, obtaining the trained BP neural network;
Step 4: detect the result of step 1 with a saliency method, obtaining the saliency detection map of the image;
Step 5: from the result of step 3, obtain the BP prediction result map;
Step 6: segment the result of step 1 with a superpixel segmentation method, obtaining the superpixel segmentation map of the image;
Step 7: from the saliency detection map obtained in step 4, the BP prediction result map obtained in step 5 and the superpixel segmentation map obtained in step 6, obtain the detection result;
Step 8: apply bilateral filtering to the detection result obtained in step 7 for optimization, obtain the final detection result, and output it.
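The eight steps above can be sketched as a pipeline. The sketch below is only a structural illustration of the data flow, not the patented implementation: every component (feature extraction, BP prediction, saliency, superpixels, bilateral filtering) is replaced by a hypothetical stub.

```python
import numpy as np

def extract_features(img):
    # Step 2 stub: a hypothetical 10-dimensional feature per pixel.
    h, w = img.shape[:2]
    return np.zeros((h, w, 10))

def bp_predict(features):
    # Steps 3/5 stub: a per-pixel blur score in [0, 1].
    return np.full(features.shape[:2], 0.5)

def saliency_map(img):
    # Step 4 stub: a per-pixel saliency value in [0, 1].
    return np.ones(img.shape[:2])

def superpixel_labels(img, n=4):
    # Step 6 stub: vertical strips standing in for superpixels.
    h, w = img.shape[:2]
    return np.tile((np.arange(w) * n) // w, (h, 1))

def saliency_constrain(blur, sal, labels):
    # Step 7 stub: pool the blur score inside each superpixel
    # (the real method weights neighbors by saliency similarity).
    out = np.empty_like(blur)
    for k in np.unique(labels):
        out[labels == k] = blur[labels == k].mean()
    return out

def bilateral_smooth(blur_map):
    # Step 8 stub: identity here; the invention uses bilateral filtering.
    return blur_map

def detect_local_blur(img):
    feats = extract_features(img)                     # step 2
    blur0 = bp_predict(feats)                         # steps 3 and 5
    sal = saliency_map(img)                           # step 4
    labels = superpixel_labels(img)                   # step 6
    refined = saliency_constrain(blur0, sal, labels)  # step 7
    return bilateral_smooth(refined)                  # step 8

result = detect_local_blur(np.zeros((32, 48, 3)))
print(result.shape)  # (32, 48)
```

Each stub returns a map of the input's spatial size, so the skeleton makes clear that every stage operates on, and preserves, the per-pixel layout of the detection result.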
Further, the feature vector described in step 2 includes the singular value vector, the local extrema and the HiFST coefficient;
the singular value vector is the vector of diagonal entries of the diagonal matrix obtained by singular value decomposition of a matrix;
the local extrema are a feature representing the high-frequency information of the image;
the HiFST coefficient is a coefficient obtained by entropy weighting and pooling of discrete cosine transform coefficients.
Further, the BP neural network described in step 3 is a traditional three-layer BP network, consisting of an input layer, a hidden layer and an output layer, configured as follows:
3.1 the number of input nodes is 10, corresponding, in the 8×8 pixel block centered on pixel (i, j), to the number of local extrema along the eight '米'-shaped directions, the HiFST coefficient and the singular value vector, which together form a 1×10 vector;
3.2 the number of output nodes is 1, the blur detection value predicted by the BP neural network;
3.3 the hidden layer has 20 neurons; in the experiments the number of iterations is set to 2000, the error threshold to 0.0006 and the learning rate to 0.04.
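The 10-20-1 configuration of 3.1-3.3 can be illustrated with a minimal NumPy sketch of a three-layer BP network trained by backpropagation with the stated learning rate and iteration count. The training data here are random stand-ins, not the patent's features, and the initialization scale is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# 10 input nodes, 20 hidden neurons, 1 output node (configuration 3.1-3.3).
W1 = rng.normal(0.0, 0.5, (10, 20)); b1 = np.zeros(20)
W2 = rng.normal(0.0, 0.5, (20, 1));  b2 = np.zeros(1)

X = rng.normal(size=(64, 10))        # stand-in feature vectors
y = (X[:, :1] > 0).astype(float)     # stand-in labels

lr = 0.04                            # learning rate from step 3.3
for _ in range(2000):                # iteration count from step 3.3
    h = sigmoid(X @ W1 + b1)         # hidden layer
    out = sigmoid(h @ W2 + b2)       # output layer: the blur detection value
    d2 = (out - y) * out * (1 - out) # backpropagated output error
    d1 = (d2 @ W2.T) * h * (1 - h)   # backpropagated hidden error
    W2 -= lr * (h.T @ d2) / len(X);  b2 -= lr * d2.mean(axis=0)
    W1 -= lr * (X.T @ d1) / len(X);  b1 -= lr * d1.mean(axis=0)

out = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
mse = float(((out - y) ** 2).mean())
print(out.shape)  # (64, 1)
```

After training, each 1×10 input vector maps to a single blur detection value in (0, 1), matching the single output node of 3.2.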
Further, the saliency detection map described in step 4 is the detection result map obtained by performing saliency detection on the image, yielding the most salient regions of the image.
Further, the BP training result map described in step 5 is the result map obtained by feeding the image into the BP neural network.
Further, the superpixel segmentation map described in step 6 is the detection result map obtained by performing superpixel segmentation on the image, dividing it into multiple superpixel blocks.
Further, the detection result under the saliency constraint in step 7 is implemented as follows:
7.1 segment the image into N superpixels using the superpixel segmentation method;
7.2 use the BP training result map as the initial value;
7.3 apply the saliency constraint. Let B_c and S_c be the blur detection result and the saliency detection result of superpixel p, and let B_i and β_i be the blur detection result and the saliency detection result of its neighboring superpixel i (1 ≤ i ≤ N). From the above description, B_c and S_c have no direct relationship (a region with large S_c does not necessarily have a larger B_c), and likewise B_i and β_i have no direct relationship; however, as demonstrated above, the relationship between S_c and β_i affects the relationship between B_c and B_i: the smaller the difference between S_c and β_i, the smaller the difference between B_c and B_i should be. The blur detection value of a superpixel is therefore jointly determined by the blur values and saliency values of its neighboring superpixels; the closer the saliency of two superpixels, the closer their degree of blur, so both the blur value and the saliency value are modeled as Gaussian distributed. To obtain the optimal B_c, the maximum of the resulting objective is sought, in which B_f is the detection result of superpixel p obtained from the BP network, B_i is that of a neighboring superpixel, and the parameter α is determined by the saliency difference between superpixel p and its neighbor i: the larger the saliency difference, the larger α and the more the result depends on the preliminary detection; the smaller the value of α, the more the result is influenced by the surrounding blocks:

α = max(β) = max(exp(-(||S_p|| - ||S_i||) * 10))
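One plausible reading of 7.1-7.3 in code: the Gaussian saliency weight β_i = exp(-|S_p - S_i| * 10) determines both the neighbor weights and α = max(β), and the refined blur value of a superpixel blends its BP prediction B_f with a β-weighted mean of its neighbors. The closed-form blending and the sample numbers are illustrative assumptions, not the patent's exact optimization.

```python
import numpy as np

def refine_superpixel(B_f, S_p, B_nbr, S_nbr):
    """Blend the BP blur score B_f of superpixel p with its neighbors'
    scores B_nbr, weighted by saliency similarity (an assumed closed form)."""
    B_nbr = np.asarray(B_nbr, float)
    S_nbr = np.asarray(S_nbr, float)
    beta = np.exp(-np.abs(S_p - S_nbr) * 10)   # Gaussian saliency weights
    alpha = beta.max()                         # alpha = max(beta)
    neighbor_term = (beta * B_nbr).sum() / beta.sum()
    return alpha * B_f + (1 - alpha) * neighbor_term

# A superpixel (blur 0.9, saliency 0.80) with two neighbors; the first
# neighbor's saliency (0.81) is close, the second's (0.10) is far.
r = refine_superpixel(0.9, 0.80, [0.2, 0.3], [0.81, 0.10])
print(round(r, 3))
```

With one saliency-similar neighbor, α is close to 1 and the result stays near the BP prediction 0.9; the saliency-distant neighbor receives an almost vanishing weight.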
Further, in step 8, bilateral filtering is applied to the detection result to obtain the final result, i.e. the result obtained in step 7 is optimized with bilateral filtering.
Further, in step 2, the singular value vector is computed from the singular value decomposition

A = U S V^T = Σ_i δ_i u_i v_i^T,

where U and V are two orthogonal matrices, u_i and v_i are the column vectors of U and V respectively, S is the diagonal matrix, and the δ_i are the diagonal entries of S, i.e. the singular value vector.
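The singular-value feature of Su et al.[3] described in the background can be illustrated directly: the proportion of the leading singular values of an 8×8 block is higher for blurred (smooth, low-rank) blocks than for sharp ones. The exact ratio definition below is a common formulation and an assumption on the feature's details.

```python
import numpy as np

def leading_sv_ratio(block, k=3):
    """Share of the first k singular values of a block (Su et al.-style feature)."""
    sv = np.linalg.svd(np.asarray(block, float), compute_uv=False)
    return sv[:k].sum() / sv.sum()

# A rank-1 (very smooth, blur-like) block vs. a full-rank (sharp-like) one:
smooth = np.outer(np.arange(1, 9), np.arange(1, 9)).astype(float)
sharp = np.eye(8)
r_smooth = leading_sv_ratio(smooth)
r_sharp = leading_sv_ratio(sharp)
print(r_smooth > r_sharp)  # True
```

The rank-1 block concentrates almost all its energy in the first singular value (ratio near 1), while the identity block spreads it evenly (ratio 3/8), which is exactly why the proportion of the leading singular values can separate blurred from sharp regions.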
Further, in step 2, the local extrema are determined as follows: pixel (i, j) is counted as a local extremum when its intensity S_{i,j} is an extremum (maximum or minimum) among its neighbors along the eight '米'-shaped directions, where S_{i,j} denotes the intensity of pixel (i, j).
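A minimal sketch of counting local extrema in a block: a pixel is counted when its intensity S_{i,j} is strictly greater or strictly smaller than all eight neighbors along the '米'-shaped directions. The strictness of the comparison is an assumption based on the description above.

```python
import numpy as np

def count_local_extrema(S):
    """Count interior pixels that are strict extrema among their 8 neighbors."""
    S = np.asarray(S, float)
    h, w = S.shape
    count = 0
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            nbrs = np.concatenate(
                [S[i - 1, j - 1:j + 2], S[i, [j - 1, j + 1]], S[i + 1, j - 1:j + 2]]
            )
            if (S[i, j] > nbrs).all() or (S[i, j] < nbrs).all():
                count += 1
    return count

patch = np.zeros((8, 8))
patch[3, 3] = 5.0    # one clear peak
patch[5, 5] = -2.0   # one clear valley
n = count_local_extrema(patch)
print(n)  # 2
```

A blurred block, being smooth, yields few such extrema, while a sharp textured block yields many, which is why the count serves as a high-frequency feature.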
Further, in step 2, the HiFST coefficient is computed as follows:
2.1 convert the color image I into a gradient image K;
2.2 compute all DCT high-frequency coefficients within the 3×3, 7×7, 15×15 and 31×31 windows centered on pixel (i, j), and sort these values by magnitude into a vector;
2.3 finally apply max pooling to this vector to obtain the HiFST coefficient.
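Steps 2.1-2.3 can be sketched at a single scale. This sketch builds a 2-D DCT-II from an explicit NumPy basis, takes the high-frequency coefficients (those whose index sum reaches the anti-diagonal, one common definition), sorts them by magnitude, and max-pools; the multi-scale windows and entropy weighting of the full method are omitted, and the high-frequency cutoff is an assumption.

```python
import numpy as np

def dct2(block):
    """2-D DCT-II of a square block via an explicit orthonormal basis."""
    n = block.shape[0]
    k = np.arange(n)
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0, :] = np.sqrt(1.0 / n)
    return C @ block @ C.T

def hifst_like(block):
    """Max of the sorted high-frequency DCT magnitudes (single scale)."""
    n = block.shape[0]
    D = np.abs(dct2(np.asarray(block, float)))
    i, j = np.indices((n, n))
    high = np.sort(D[i + j >= n - 1])   # high-frequency coefficients, sorted
    return high[-1]                     # max pooling

flat = np.full((8, 8), 3.0)             # constant block: no high frequencies
noisy = np.random.default_rng(1).normal(size=(8, 8))  # strong high frequencies
print(hifst_like(flat) < 1e-9, hifst_like(noisy) > 0.1)
```

A constant (or heavily blurred) block leaves the high-frequency coefficients near zero, while a sharp or noisy block produces a large pooled value, matching the feature's intended role.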
Beneficial technical effects
For the problem that sharp, texture-flat regions are easily misdetected as blurred regions because they lack high-frequency information, the present invention proposes a new solution: the singular value vector representing transform-domain information is combined with two features representing the image's high-frequency information, the local extrema and the HiFST coefficient; prediction and analysis with a BP neural network yield a more accurate detection result; a global constraint is then applied through saliency, using the relationship between image saliency and sharpness, to obtain a still more accurate detection result; finally the result is optimized with bilateral filtering. Both qualitative and quantitative experiments show that the proposed method obtains very good detection results on sharp texture-flat regions, and that the two different types of selected features complement each other and provide strong robustness.
Description of the drawings
Fig. 1 shows a locally blurred original image and its detection result.
Fig. 2 compares the numbers of local extrema of the 4 image blocks in Fig. 1.
Fig. 3 shows a locally blurred image with 4 labeled 8×8 blocks.
Fig. 4 compares the singular values of the 4 image blocks in Fig. 3 (the singular values of all blocks are magnified 100 times).
Fig. 5 shows the saliency detection result and the blur detection result of a locally blurred image.
Fig. 6 shows the saliency difference and the sharpness difference of a locally blurred image.
Fig. 7 is the blur detection flowchart.
Fig. 8 compares the present invention with the blur detection methods mentioned in the background art.
Specific embodiments
The design features of the present invention are now described in detail with reference to the drawings.
Referring to Fig. 7, the saliency-based image local blur detection method proceeds as follows:
Step 1: input a color image;
Step 2: solve for the feature vector on the result of step 1, obtaining the feature vector;
Step 3: perform BP neural network training on the result of step 2, obtaining the trained BP neural network;
Step 4: detect the result of step 1 with a saliency method, obtaining the saliency detection map of the image;
Step 5: from the result of step 3, obtain the BP prediction result map;
Step 6: segment the result of step 1 with a superpixel segmentation method, obtaining the superpixel segmentation map of the image;
Step 7: from the saliency detection map obtained in step 4, the BP prediction result map obtained in step 5 and the superpixel segmentation map obtained in step 6, obtain the detection result;
Step 8: apply bilateral filtering to the detection result obtained in step 7 for optimization, obtain the final detection result, and output it.
Further, the feature vector described in step 2 includes the singular value vector, the local extrema and the HiFST coefficient;
the singular value vector is the vector of diagonal entries of the diagonal matrix obtained by singular value decomposition of a matrix;
the local extrema are a feature representing the high-frequency information of the image;
the HiFST coefficient is a coefficient obtained by entropy weighting and pooling of discrete cosine transform coefficients.
Further, the BP neural network described in step 3 is a traditional three-layer BP network, consisting of an input layer, a hidden layer and an output layer, configured as follows:
3.1 the number of input nodes is 10, corresponding, in the 8×8 pixel block centered on pixel (i, j), to the number of local extrema along the eight '米'-shaped directions, the HiFST coefficient and the singular value vector, which together form a 1×10 vector;
3.2 the number of output nodes is 1, the blur detection value predicted by the BP neural network;
3.3 the hidden layer has 20 neurons; in the experiments the number of iterations is set to 2000, the error threshold to 0.0006 and the learning rate to 0.04.
Further, the saliency detection map described in step 4 is the detection result map obtained by performing saliency detection on the image, yielding the most salient regions of the image.
Further, the BP training result map described in step 5 is the result map obtained by feeding the image into the BP neural network.
Further, the superpixel segmentation map described in step 6 is the detection result map obtained by performing superpixel segmentation on the image, dividing it into multiple superpixel blocks.
Further, the detection result under the saliency constraint in step 7 is implemented as follows:
7.1 segment the image into N superpixels using the superpixel segmentation method;
7.2 use the BP training result map as the initial value;
7.3 apply the saliency constraint. Let B_c and S_c be the blur detection result and the saliency detection result of superpixel p, and let B_i and β_i be the blur detection result and the saliency detection result of its neighboring superpixel i (1 ≤ i ≤ N). From the above description, B_c and S_c have no direct relationship (a region with large S_c does not necessarily have a larger B_c), and likewise B_i and β_i have no direct relationship; however, as demonstrated above, the relationship between S_c and β_i affects the relationship between B_c and B_i: the smaller the difference between S_c and β_i, the smaller the difference between B_c and B_i should be. The blur detection value of a superpixel is therefore jointly determined by the blur values and saliency values of its neighboring superpixels; the closer the saliency of two superpixels, the closer their degree of blur, so both the blur value and the saliency value are modeled as Gaussian distributed. To obtain the optimal B_c, the maximum of the resulting objective is sought, in which B_f is the detection result of superpixel p obtained from the BP network, B_i is that of a neighboring superpixel, and the parameter α is determined by the saliency difference between superpixel p and its neighbor i: the larger the saliency difference, the larger α and the more the result depends on the preliminary detection; the smaller the value of α, the more the result is influenced by the surrounding blocks:

α = max(β) = max(exp(-(||S_p|| - ||S_i||) * 10))
Further, in step 8, bilateral filtering is applied to the detection result to obtain the final result, i.e. the result obtained in step 7 is optimized with bilateral filtering.
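Step 8's bilateral filtering, applied here to a grayscale detection map, can be sketched in NumPy: each output pixel is a weighted mean whose weights combine spatial proximity and value proximity, so noise inside a region is smoothed while sharp transitions between regions survive. The window size and the σ values are illustrative assumptions.

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Edge-preserving smoothing: weights combine spatial and range proximity."""
    img = np.asarray(img, float)
    h, w = img.shape
    pad = np.pad(img, radius, mode='edge')
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys ** 2 + xs ** 2) / (2 * sigma_s ** 2))
    out = np.empty_like(img)
    for i in range(h):
        for j in range(w):
            win = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            rng_w = np.exp(-((win - img[i, j]) ** 2) / (2 * sigma_r ** 2))
            wgt = spatial * rng_w
            out[i, j] = (wgt * win).sum() / wgt.sum()
    return out

# A step edge with mild noise: the edge survives, the noise is smoothed.
x = np.zeros((8, 8)); x[:, 4:] = 1.0
x += np.random.default_rng(2).normal(0.0, 0.02, x.shape)
y = bilateral_filter(x)
print(y.shape)  # (8, 8)
```

On a blur detection map, this step cleans up noisy scores inside a region without smearing the boundary between blurred and sharp regions.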
Further, in step 2, the singular value vector is computed from the singular value decomposition

A = U S V^T = Σ_i δ_i u_i v_i^T,

where U and V are two orthogonal matrices, u_i and v_i are the column vectors of U and V respectively, S is the diagonal matrix, and the δ_i are the diagonal entries of S, i.e. the singular value vector.
Further, in step 2, the local extrema are determined as follows: pixel (i, j) is counted as a local extremum when its intensity S_{i,j} is an extremum (maximum or minimum) among its neighbors along the eight '米'-shaped directions, where S_{i,j} denotes the intensity of pixel (i, j).
Preferably, in step 2, the HiFST coefficient is computed as follows:
2.1 convert the color image I into a gradient image K;
2.2 compute all DCT high-frequency coefficients within the 3×3, 7×7, 15×15 and 31×31 windows centered on pixel (i, j), and sort these values by magnitude into a vector;
2.3 finally apply max pooling to this vector to obtain the HiFST coefficient.
Preferably, in step 4, the saliency detection method is as follows:
4.1 Segment the image by superpixel segmentation to obtain superpixel segmentation maps. According to preset numbers of superpixels, several segmentation maps {s_1, s_2, ..., s_m} are obtained, where s_1 is the map with the most superpixels and s_m the one with the fewest. Saliency detection is then performed on each map; in other words, saliency detection of the image is reduced to detection on individual superpixel regions. Since this may ignore the relations between neighboring superpixels, when detecting the saliency of a single superpixel, three different types of features are proposed: the saliency features of the single superpixel, the constraint features between neighboring superpixels, and the background features of the single superpixel. These three features are introduced below.
4.1.1 The first type is the saliency features of a single superpixel: features that strongly separate salient from non-salient regions, mainly RGB features, LAB features, HSV features, standard perimeter features, LM filter features, etc., forming a 34-dimensional vector in total.
4.1.2 The second type is the constraint features between two neighboring superpixels. If a superpixel is a salient block, then surrounding superpixels with similar features are also very likely to be salient blocks. The constraint features mainly compare the feature differences between a superpixel and its neighbors: the RGB difference, LAB difference, LM filter difference, maximum LM filter difference, LAB histogram difference, color histogram difference and saturation histogram difference of two superpixels, forming a 26-dimensional constraint feature vector in total.
4.1.3 The third type is the features indicating that a single superpixel belongs to the background, which distinguish background superpixels directly by background color and texture. They are obtained by comparing the feature differences between salient regions and non-background regions, have the same dimensionality as the neighboring-superpixel constraint features, and form a 26-dimensional feature vector.
4.2 Concatenating the three different types of features above yields an 86-dimensional vector. These feature vectors are fed into a random forest for supervised training to obtain saliency detection results. Since one image has M saliency detection results, the proportion of each result in the final saliency map is determined through parameter settings according to the number of superpixels in each saliency detection result; the combination obtained is the final saliency detection result.
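The dimensions in 4.1.1-4.2 can be checked with a small sketch: the three per-superpixel feature groups (34 + 26 + 26 dimensions) concatenate to an 86-dimensional descriptor, and the M per-segmentation saliency maps are combined with weights that sum to one. The feature values are random placeholders and the weighting scheme (proportional to superpixel count) is an assumed stand-in for the parameter settings mentioned above.

```python
import numpy as np

rng = np.random.default_rng(3)

def superpixel_descriptor():
    appearance = rng.normal(size=34)   # RGB/LAB/HSV/perimeter/LM features (4.1.1)
    constraint = rng.normal(size=26)   # neighbor-difference features (4.1.2)
    background = rng.normal(size=26)   # background features (4.1.3)
    return np.concatenate([appearance, constraint, background])

desc = superpixel_descriptor()
print(desc.shape)  # (86,)

# Combine M = 3 saliency maps, weighting finer segmentations more:
maps = [rng.random((16, 16)) for _ in range(3)]
n_superpixels = np.array([400.0, 200.0, 100.0])
weights = n_superpixels / n_superpixels.sum()
final = sum(w * m for w, m in zip(weights, maps))
print(final.shape)  # (16, 16)
```

In the full method the 86-dimensional descriptors would be the input to the supervised random forest; here only the vector assembly and the map combination are shown.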
Preferably, in step 6, the superpixel segmentation method is as follows:
6.1 Initialize the cluster centers. The color image is read in the RGB color space and converted to the CIELAB color space with a transform function, which makes the computations for segmenting the image convenient. The number of cluster centers is set to K, and these K centers are initialized; each cluster center is represented by a five-dimensional feature vector V_i = [l_i, a_i, b_i, x_i, y_i]^T, where (l, a, b) is the three-dimensional color feature of the pixel in CIELAB space, (x, y) is the spatial position feature, and i ∈ [1, K] is the index of the cluster center. For the clustering process, the image is first divided uniformly into K grid cells, each of side length S = sqrt(N/K), where N is the total number of pixels in the color image. To prevent a cluster center set by the SLIC algorithm from falling on an edge or on a noise point, the algorithm also examines a 3×3 block and selects the point with the smallest gradient within that block as the cluster center.
6.2 Pixel clustering. Each pixel in the image receives the label of the cluster center nearest to it. To speed up computation, SLIC restricts pixel clustering to an appropriate range around each cluster center. This greatly improves performance over traditional K-means clustering, which computes the similarity between every pixel and every cluster center and is therefore more complex and time-consuming than SLIC.
6.3 Updating the cluster centers. After every pixel has received its label, the algorithm recomputes the cluster centers: the new position of each center is the mean position of all pixels belonging to it. Each time the centers change, the algorithm recomputes and re-clusters all pixels. This update repeats until the change of the cluster centers falls below the threshold set at the start of the program, at which point the algorithm terminates.
6.4 Post-processing. After the iteration stops, non-adjacent superpixel blocks may share the same label, i.e. two or more disconnected superpixels can carry an identical label. To solve this problem, SLIC merges adjacent regions that carry the same label: neighbouring superpixels with the same label are merged with each other. Because such merging reduces the count, SLIC initially sets the number of superpixels K slightly larger than the desired final number. To measure the degree of similarity between superpixels, the SLIC algorithm proposes a new distance measure D:
where dc and ds denote the colour distance and the spatial distance, respectively:
where m and n are two pixels in the image, and the distance D between them is computed directly from their Euclidean spatial separation and colour difference. Because the colour feature and the position feature have no common unit of measurement, simply adding their differences yields an inaccurate distance. To make the two comparable, the SLIC algorithm introduces the concepts of a maximum colour distance Nc and a maximum spatial distance Ns; after defining these two characteristic distances, the colour component and the spatial component are each normalized, and the distance is redefined as D′. Under this normalization the colour feature and the spatial feature share a common unit of measure; D′ is defined as follows:
In the formula, the maximum spatial distance Ns is taken as the grid interval S = √(N/K).
Because the colour differences between pixels in an image can vary widely, the SLIC paper simply fixes the maximum colour distance Nc to a constant m, so that D′ becomes:
In the formula above, adjusting the parameter m tunes the relative weight of the colour distance dc and the spatial distance ds, which directly affects the computed distance and hence the compactness of neighbouring superpixels. Because the role of m is to adjust the trade-off between the colour feature and the spatial feature, it is called the compactness parameter. In the SLIC algorithm m is usually taken in the range [1, 40], which gives the best results.
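As a concrete illustration of the normalized distance above, the following Python sketch (our own illustration, not code from the patent) computes D′ with Nc fixed to the compactness parameter m and Ns set to the grid interval S:

```python
import math

def slic_distance(dc, ds, m, S):
    """Normalized SLIC distance D' between a pixel and a cluster centre.

    dc -- Euclidean distance in CIELAB colour space
    ds -- Euclidean distance in the image plane
    m  -- compactness parameter (Nc is fixed to m, usually in [1, 40])
    S  -- grid interval sqrt(N/K), used as the maximum spatial distance Ns
    """
    return math.sqrt((dc / m) ** 2 + (ds / S) ** 2)
```

A larger m shrinks the colour term, so spatial proximity dominates and the resulting superpixels come out more compact.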
Preferably, in step 8, the bilateral filtering proceeds as follows:
The bilateral filter is given by the formula below,
where f(k, l) is the input, g(i, j) is the output, and h(i, j, k, l) is the spatial-domain filter kernel, shown below:
k(i, j, k, l) is the range-domain (intensity) filter kernel, shown below:
(i, j) is the position of the current pixel and (k, l) the centre position; σs is the spatial-domain standard deviation and σr the range-domain standard deviation. These two parameters must be chosen by hand; in this experiment σs is set to 3 and σr to 0.5.
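The filtering step can be sketched directly from the two kernels above. The naive NumPy implementation below is illustrative only, with the experiment's σs = 3 and σr = 0.5 as defaults (appropriate for an image scaled to [0, 1]); the window radius is our own choice:

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=3.0, sigma_r=0.5):
    """Naive bilateral filter: each output pixel is a weighted average whose
    weights combine spatial distance (sigma_s) with intensity difference
    (sigma_r), so smoothing stops at edges."""
    h, w = img.shape
    pad = np.pad(img, radius, mode="edge")
    # precompute the spatial-domain kernel h(i, j, k, l)
    ax = np.arange(-radius, radius + 1)
    yy, xx = np.meshgrid(ax, ax, indexing="ij")
    spatial = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma_s ** 2))
    out = np.empty_like(img, dtype=float)
    for i in range(h):
        for j in range(w):
            win = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # range-domain kernel k(i, j, k, l) for this window
            rng_k = np.exp(-(win - img[i, j]) ** 2 / (2.0 * sigma_r ** 2))
            wgt = spatial * rng_k
            out[i, j] = (wgt * win).sum() / wgt.sum()
    return out
```

On a constant image every range weight is 1, so the filter returns the input unchanged, which is a quick sanity check of the normalization.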
Embodiment:
The example shows that Fig. 8(a) is the original image and Fig. 8(b) the manually annotated result map. From the example we can see that the methods of Liu, Chakrabarti, Su, Shi14, Shi15, Yi, Tang and Alireza, as well as the plain BP result, all tend to misdetect sharp texture-flat regions, and their detection quality suffers accordingly. The method proposed in this patent applies the saliency constraint even where the BP detection result is inaccurate, so that the detection of texture-flat regions becomes accurate; our detection result is the closest to the manually annotated result map.
The technical problems addressed by the present invention and its key innovations are discussed below:
Local Extremum
The local extremum points of an image [12] carry its high-frequency information, and a blurred region contains less high-frequency information than a sharp region, so local extrema can serve as a feature for distinguishing sharp regions from blurred ones. Two-dimensional image local extremum points fall into five types, EM1 through EM5: the first four are one-dimensional local extrema, while the fifth is a two-dimensional local extremum. Image local extrema exhibit the high-frequency information of the image. In this chapter we write Si,j for the intensity of pixel (i, j); ignoring noise, Table 1 defines the fifth, "rice"-character (米) type of local extremum over a 3×3 region.
Table 1. Types of local extremum points
The fifth, "rice"-type local extrema in Table 1 all reflect the high-frequency information of the image. For a blurred image, the count of "rice"-type extrema differs strongly between sharp regions and blurred regions, so this fifth type of extremum is used for blur-region detection. As shown in Fig. 1(a), blocks 1 and 2 are two blurred blocks and blocks 3 and 4 two sharp blocks, each of size 40×40; Fig. 1(b) is the detection result map obtained using the number of fifth-type "rice" extrema as the feature, and Fig. 2 gives the counts of the five types of local extrema in blurred blocks 1 and 2 and sharp blocks 3 and 4. As Fig. 2 shows, the "rice"-type local extremum count differs markedly between sharp and blurred blocks, while the other four types do not separate the two clearly; therefore the present invention adopts the "rice"-type local extremum as the feature for distinguishing blurred regions from sharp regions.
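The role of the "rice"-type extremum as a sharpness feature can be sketched as follows. This is our own illustrative reading of Table 1 (a pixel that is a strict maximum or strict minimum over all 8 neighbours of its 3×3 window); the function name is ours:

```python
import numpy as np

def count_rice_extrema(block):
    """Count type-5 ("rice"-pattern) local extrema: interior pixels that are
    a strict maximum or a strict minimum over all 8 neighbours."""
    h, w = block.shape
    count = 0
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            centre = block[i, j]
            # the 8 neighbours: 3x3 window with the centre (index 4) removed
            nbrs = np.delete(block[i - 1:i + 2, j - 1:j + 2].ravel(), 4)
            if centre > nbrs.max() or centre < nbrs.min():
                count += 1
    return count
```

A sharp block keeps many such extrema while low-pass blurring removes them, which is why the count separates the two region types.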
Singular value vector
For a given image I of size m×n, its singular value decomposition is I = USVT = δ1u1v1T + δ2u2v2T + … + δrurvrT (1):
where U and V are two orthogonal matrices, ui and vi are the column vectors of U and V respectively, S is a diagonal matrix, and δi are its diagonal entries. Formula 1 shows that the image I can be decomposed into a sum of r rank-1 matrices weighted by the singular values. This is a transform-domain feature: the large singular values represent the overall shape of the image, while the small singular values correspond to its details and high-frequency information. Because a blurred image lacks detail information, the trailing entries of its singular value vector are zero, and Su et al. therefore used the proportion of the total singular-value mass carried by the leading singular values as the criterion for judging whether an image is sharp or blurred: blur destroys image detail, so the leading singular values account for a proportion close to 1. This criterion has certain defects, however, as illustrated in Fig. 3: we selected four 8×8 blocks in total, two blurred and two sharp. As Fig. 4 shows, the first three singular values of the blurred blocks carry a very high share of the total and the last five are essentially zero; but the first three singular values of the sharp blocks carry an equally high share, while their last five are essentially all non-zero. For this reason, this chapter does not use the singular-value proportion but instead uses the singular value vector itself as the feature.
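The singular value vector used as the feature can be computed directly; the NumPy sketch below (our own illustration) extracts the δi of an 8×8 block:

```python
import numpy as np

def singular_value_vector(block):
    """Return the diagonal of S from block = U S V^T, in descending order,
    i.e. the singular value vector used as a per-block feature."""
    return np.linalg.svd(block, compute_uv=False)
```

For a blurred (nearly rank-deficient) block the trailing entries are close to zero, whereas a sharp block keeps them clearly non-zero, which is the behaviour Figs. 3-4 illustrate.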
The discrete cosine transform (DCT) is a transform that converts spatial information into frequency-domain information. After an image undergoes the DCT, the DCT coefficients reflect the distribution of frequency-domain information in the image, and blur causes a loss of the image's high-frequency information. For an image I, its discrete cosine transform is given by formula 2,
Wherein:
In the present invention, the HiFST coefficients [7] are used as the feature for distinguishing sharp regions from blurred regions. A HiFST coefficient is a coefficient obtained by entropy-weighting and pooling DCT coefficients, computed as follows: first convert the colour image I to a gradient image K; then compute all high-frequency DCT coefficients within the 3×3, 7×7, 15×15 and 31×31 windows centred on pixel (i, j); sort these values by magnitude to obtain a vector; finally, apply max pooling to this vector to obtain the HiFST coefficient.
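A single-scale sketch of this computation is given below. The selection of "high-frequency" coefficients as those with index sum u + v ≥ n − 1, and the plain max over the sorted vector, are our simplifying assumptions; the full method uses four window scales plus entropy weighting:

```python
import numpy as np

def dct2(x):
    """Orthonormal 2-D DCT-II of a square block, built from the DCT matrix."""
    n = x.shape[0]
    k = np.arange(n).reshape(-1, 1)
    i = np.arange(n).reshape(1, -1)
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    C[0, :] = np.sqrt(1.0 / n)
    return C @ x @ C.T

def high_freq_feature(patch):
    """Max over the sorted magnitudes of the high-frequency DCT coefficients
    of a patch -- a one-scale stand-in for the HiFST coefficient."""
    n = patch.shape[0]
    coef = dct2(patch)
    u, v = np.indices((n, n))
    hf = np.abs(coef[u + v >= n - 1])   # assumed high-frequency mask
    return float(np.sort(hf).max())
```

A flat patch has essentially zero high-frequency energy, while any strong texture raises the feature, matching the intended sharp/blurred separation.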
BP neural network construction
A BP neural network is a feed-forward artificial neural network; with only three layers in its topology, it is a widely used architecture. BP networks are self-learning, have strong classification ability and are easy to implement, so the present invention selects a BP network as the classifier.
The present invention uses a conventional three-layer BP network, divided into an input layer, a hidden layer and an output layer, configured as follows. The input layer has 10 nodes, corresponding to the number of "rice"-type local extrema in the 8×8 pixel block centred on pixel (i, j), the HiFST coefficient, and the singular value vector, which together form a 1×10 feature vector. The output layer has a single node, the blur detection value produced by the BP neural network. The hidden layer contains 20 neurons in total; in the tests the number of iterations is set to 2000, the error threshold to 0.0006, and the learning rate to 0.04.
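A minimal NumPy sketch of such a 10-20-1 network with sigmoid units and plain gradient-descent back-propagation is given below. Only the layer sizes and the learning rate 0.04 come from the text; the weight initialization, loss and batch handling are our own choices:

```python
import numpy as np

class BPNet:
    """10-20-1 feed-forward network trained by error back-propagation."""

    def __init__(self, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.5, (10, 20))
        self.b1 = np.zeros(20)
        self.W2 = rng.normal(0.0, 0.5, (20, 1))
        self.b2 = np.zeros(1)

    @staticmethod
    def _sig(z):
        return 1.0 / (1.0 + np.exp(-z))

    def forward(self, X):
        self.h = self._sig(X @ self.W1 + self.b1)   # hidden activations
        return self._sig(self.h @ self.W2 + self.b2)

    def train_step(self, X, y, lr=0.04):
        out = self.forward(X)
        err = out - y                                # dMSE/dout (scaled)
        d2 = err * out * (1.0 - out)                 # sigmoid derivative
        d1 = (d2 @ self.W2.T) * self.h * (1.0 - self.h)
        self.W2 -= lr * self.h.T @ d2
        self.b2 -= lr * d2.sum(axis=0)
        self.W1 -= lr * X.T @ d1
        self.b1 -= lr * d1.sum(axis=0)
        return float(np.mean(err ** 2))
```

In the patent's setting the 10-dimensional input would be the per-pixel feature vector (rice-extremum count, HiFST coefficient, singular value vector) and the scalar output the blur detection value.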
Saliency constraint
In the BP detection result, sharp texture-flat regions are easily misdetected as blurred regions; in practice such misdetection has always been a difficulty. In the present invention, we constrain the result of the BP detection with image saliency, making the detection method more accurate on texture-flat regions. Salient-region detection is of great importance for image understanding and analysis: its goal is to detect the salient regions of an image, i.e. the places that attract the human eye. Saliency detection has many applications, such as image segmentation, image recognition, image object reconstruction and image quality assessment. For a blurred image, whether defocus-blurred or motion-blurred, what the human eye notices first is precisely those sharper, more prominent salient regions.
The saliency detection method used in the present invention is the DRFI method [13]. As shown below, Fig. 5(a) is the original image, Fig. 5(b) the saliency detection result map, and Fig. 5(c) the blur detection map obtained with the HiFST method. Fig. 6 makes clear that the larger the saliency difference between two pixels, the larger the difference in their sharpness. From this saliency behaviour of blurred images we propose a hypothesis: regions of similar saliency should have similar blur degree, and the salient regions of an image should cluster together rather than scatter across the whole picture. From this hypothesis the optimization method of the invention is derived: the image is divided into N superpixels by superpixel segmentation, using SLIC [14]; the preliminary result obtained by the BP classifier serves as the initial value, and a saliency constraint is then applied to the image. That is, if the saliency difference between a superpixel and its neighbouring superpixels is very small, the blur degree of that superpixel can be inferred from both its initial blur detection result and the blur detection results of the surrounding superpixels; conversely, if the difference between a superpixel and its surroundings is very large, its blur degree depends mainly on its own initial detection result.
Let Bc and SC be the blur detection result and the saliency detection result of superpixel p, and Bi and βi the blur detection result and saliency detection result of its neighbourhood superpixels i (1 ≤ i ≤ N). From the description above, Bc and SC have no direct relationship, that is, a region with a large SC does not necessarily have a larger Bc; likewise, Bi and βi have no direct relationship. But by the argument above, the relationship between SC and βi does influence the relationship between Bc and Bi: the smaller the difference between SC and βi, the smaller the difference between Bc and Bi should be. The blur detection result of superpixel p can therefore be expressed as:
That is,
and, from the relationship between saliency and blur established above,
The formula above states that the blur detection value of a pixel is determined jointly by the blur values and saliency values of its neighbouring pixels. The closer the saliency of two pixels, the closer their blur degree; accordingly, both the blur value and the saliency value satisfy a Gaussian distribution, which gives
To obtain the optimal result for Bc, we seek the maximum value of formula (5),
Here Bf is the detection result of superpixel p obtained by the BP network, Bi is that of a neighbourhood superpixel, and the parameter α is obtained from the saliency difference between superpixel p and neighbourhood superpixel i: the larger the saliency difference, the larger α and the more the result depends on the preliminary detection; if α is smaller, the result is more influenced by the surrounding pixel blocks. In the present scheme α is:
α=max (β)=max (exp (- (| | Sp||-||Si||)*10)) (8)
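Since the images of equations (5)-(7) are not reproduced in the text, the sketch below is only one plausible reading of the constraint: the weight α follows equation (8) as transcribed (so α is near 1 when some neighbour has very similar saliency), and the refined value is an assumed convex combination of the preliminary value Bf and a saliency-weighted average of the neighbouring values Bi. Function and variable names are ours:

```python
import numpy as np

def refine_blur(B_f, B_nbrs, S_p, S_nbrs):
    """Saliency-constrained refinement for one superpixel p.

    B_f    -- preliminary BP blur value of p
    B_nbrs -- blur values of the neighbouring superpixels
    S_p    -- saliency of p; S_nbrs: saliencies of the neighbours
    """
    # beta_i = exp(-(|S_p - S_i|) * 10); alpha = max(beta), as in eq. (8)
    beta = np.exp(-10.0 * np.abs(S_p - np.asarray(S_nbrs, dtype=float)))
    alpha = float(beta.max())
    # assumed combination: keep B_f in proportion alpha, otherwise take a
    # saliency-weighted average of the neighbours' blur values
    B_nbrs = np.asarray(B_nbrs, dtype=float)
    neighbour_avg = float((beta * B_nbrs).sum() / beta.sum())
    return alpha * B_f + (1.0 - alpha) * neighbour_avg
```

With identical saliencies α = 1 and the superpixel keeps its preliminary value; as the saliency gap to every neighbour grows, the neighbour average takes over under this reading.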
Method flow diagram
Finally, the algorithmic procedure of the invention is summarized as follows:
Step 1: input the image to be detected and obtain a preliminary result through the trained BP network.
Step 2: apply the saliency detection method and the superpixel method to the input image to obtain the saliency detection map and the superpixel segmentation map.
Step 3: obtain a further detection result through the saliency constraint method set forth above.
Step 4: optimize the obtained result with bilateral filtering to obtain the final blur detection result.
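The four steps above can be strung together as below; every callable is a caller-supplied stand-in for the corresponding stage (the names are ours, not the patent's):

```python
def detect_local_blur(image, bp_predict, detect_saliency,
                      segment_superpixels, apply_constraint, bilateral):
    """Compose steps 1-4: BP prediction, saliency + superpixel maps,
    saliency constraint, then bilateral smoothing of the final map."""
    preliminary = bp_predict(image)                               # step 1
    saliency = detect_saliency(image)                             # step 2
    segments = segment_superpixels(image)                         # step 2
    refined = apply_constraint(preliminary, saliency, segments)   # step 3
    return bilateral(refined)                                     # step 4
```

Keeping each stage behind a function boundary means the BP classifier, the DRFI saliency detector or the SLIC segmenter could each be swapped out without touching the rest of the pipeline.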
The flow chart is shown in Fig. 7:
Experimental comparison, analysis and verifying
To demonstrate the effectiveness of the saliency-based blur-region detection method, the experimental results of this chapter are compared with the results of Liu et al. [10], Chakrabarti et al. [9], Su et al. [3], Shi14 [11], Shi15 [8], Yi et al. [6], Tang et al. [2] and Alireza et al. [7]. We use the public dataset of Shi14 et al. [11], which contains 1000 images in total: 296 with local motion blur and 704 with partial defocus blur.
Below are some experimental samples together with the blur detection results of all the methods mentioned above. The method of this chapter performs well at separating texture-flat regions from sharp regions, and the blur detection maps obtained with it closely match the ground-truth maps.
The experimental results show that the methods of Liu, Chakrabarti, Su, Shi14, Shi15, Yi, Tang and Alireza, as well as plain BP, all tend to misdetect sharp texture-flat regions, whereas the method proposed by the present invention, through the saliency constraint, remains accurate on texture-flat regions even where the BP detection result is inaccurate.
In conclusion the present invention is easy by erroneous detection to be confusion region to clearly texture flat site due to lacking high-frequency information This problem of domain proposes a kind of new solution, i.e., will indicate singular value vector, the expression image high frequency of transform domain information Three characteristic values of Local Extremum and HiFST coefficient of information combine, and are predicted, are analyzed by BP neural network, obtained Accurate testing result, then by the relationship between saliency and readability, pass through conspicuousness carry out it is global Constraint has obtained more accurate detection as a result, finally optimizing using bilateral filtering to experimental result, either by fixed Property experiment or pass through quantitative experiment it can be seen that, the method that this chapter is proposed can be obtained detecting clear texture flat site Good testing result, and the two distinct types of characteristic value that we select can be complementary to one another, and have very strong robust Property.
Bibliography
[1] Narvekar N D, Karam L J. A no-reference image blur metric based on the cumulative probability of blur detection [J]. IEEE Transactions on Image Processing, 2011, 20(9): 2678-2683.
[2] Tang C, et al. A spectral and spatial approach of coarse-to-fine blurred image region detection [J]. IEEE Signal Processing Letters, 2016, 23(11): 1652-1656.
[3] Su B, Lu S, Tan C L. Blurred image region detection and classification [C]// Proceedings of the ACM International Conference on Multimedia, Toronto: IEEE, 2011: 1397-1400.
[4] Huang Shanchun, Fang Xianyong, Zhou Jian, Shen Feng. Image local blur measurement based on BP neural network [J]. Journal of Image and Graphics, 2015, (1): 20-28. (in Chinese)
[5] Javaran T A, Hassanpour H, Abolghasemi V. Automatic estimation and segmentation of partial blur in natural images [J]. The Visual Computer, 2017, 33(2): 151-161.
[6] Yi X, Eramian M. LBP-based segmentation of defocus blur [J]. IEEE Transactions on Image Processing, 2016, 25(4): 1626-1638.
[7] Golestaneh S A, Karam L J. Spatially-varying blur detection based on multiscale fused and sorted transform coefficients of gradient magnitudes [J]. arXiv preprint arXiv:1703.07478, 2017.
[8] Shi J, Xu L, Jia J. Just noticeable defocus blur detection and estimation [C]// Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015.
[9] Chakrabarti A, Zickler T, Freeman W T. Analyzing spatially-varying blur [C]// Proceedings of the IEEE International Conference on Computer Vision and Pattern Recognition, San Francisco: IEEE, 2010: 2512-2519.
[10] Liu R, Li Z, Jia J. Image partial blur detection and classification [C]// Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Anchorage: IEEE, 2008: 1-8.
[11] Shi J, Xu L, Jia J. Discriminative blur detection features [C]// Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2014.
[12] Zheng S, Jayasumana S, Romera-Paredes B, et al. Conditional random fields as recurrent neural networks [C]// Proceedings of the IEEE International Conference on Computer Vision, Boston: IEEE, 2015: 1529-1537.
[13] Jiang H, Wang J, Yuan Z, et al. Salient object detection: a discriminative regional feature integration approach [C]// Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2013: 2083-2090.
[14] Achanta R, Shaji A, Smith K, et al. SLIC superpixels compared to state-of-the-art superpixel methods [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2012, 34(11): 2274-2282.

Claims (10)

1. A saliency-based image local blur detection method, characterized in that it is carried out according to the following steps:
Step 1: input a colour image;
Step 2: perform feature-vector extraction on the result of step 1 to obtain the feature vector;
Step 3: perform BP neural network training on the result of step 2 to obtain the trained BP neural network;
Step 4: detect the result of step 1 with the saliency method to obtain the saliency detection map of the image;
Step 5: obtain the BP prediction result map from the result of step 3;
Step 6: segment the result of step 1 with the superpixel segmentation method to obtain the superpixel segmentation map of the image;
Step 7: obtain the detection result from the saliency detection map of step 4, the BP prediction result map of step 5 and the superpixel segmentation map of step 6;
Step 8: optimize the detection result of step 7 by bilateral filtering to obtain the final detection result, and output it.
2. The saliency-based image local blur detection method according to claim 1, characterized in that the feature vector of step 2 comprises the singular value vector, the local extremum points and the HiFST coefficient;
the singular value vector is the vector in the diagonal matrix obtained after singular value decomposition of a matrix;
the local extremum points are features representing the high-frequency information of the image;
the HiFST coefficient is a coefficient obtained by entropy-weighting and pooling the discrete cosine transform coefficients.
3. The saliency-based image local blur detection method according to claim 1, characterized in that the BP neural network described in step 3 is a conventional three-layer BP network divided into an input layer, a hidden layer and an output layer, with the following specific steps:
3.1 the number of input nodes is set to 10, corresponding to the number of "rice"-type local extremum points in the 8×8 pixel block centred on pixel (i, j), the HiFST coefficient and the singular value vector, forming a 1×10 vector in total;
3.2 the number of output nodes is set to 1, being the blur detection value produced by the BP neural network;
3.3 the hidden layer contains 20 neurons in total; the number of iterations in the tests is set to 2000, the error threshold is set to 0.0006, and the learning rate is set to 0.04.
4. The saliency-based image local blur detection method according to claim 1, characterized in that the saliency detection map described in step 4 is the detection result map obtained by performing saliency detection on the image, yielding the most salient regions of the image.
5. The saliency-based image local blur detection method according to claim 1, characterized in that the BP training result map described in step 5 is the result map obtained by inputting the image into the BP neural network.
6. The saliency-based image local blur detection method according to claim 1, characterized in that the superpixel segmentation map described in step 6 is the detection result map obtained by performing superpixel segmentation on the image, dividing the image into multiple superpixel blocks.
7. The saliency-based image local blur detection method according to claim 1, characterized in that the saliency-constrained detection result of step 7 is implemented in the following specific steps:
7.1 divide the image into N superpixels with the superpixel segmentation method used;
7.2 use the BP training result map as the initial value;
7.3 apply the saliency constraint relationship: let Bc and SC be the blur detection result and the saliency detection result of superpixel p, and Bi and βi the blur detection result and saliency detection result of the neighbourhood superpixels i (1 ≤ i ≤ N) of the superpixel; from the foregoing description, Bc and SC have no direct relationship, that is, a region with a large SC does not necessarily have a larger Bc, and likewise Bi and βi have no direct relationship; but by the foregoing argument, the relationship between SC and βi influences the relationship between Bc and Bi, that is, the smaller the difference between SC and βi, the smaller the difference between Bc and Bi should be; the blur detection result of superpixel p can therefore be expressed as:
That is,
and, from the relationship between saliency and blur established above,
The formula above states that the blur detection value of a pixel is determined jointly by the blur values and saliency values of its neighbouring pixels; the closer the saliency of two pixels, the closer their blur degree, so both the blur value and the saliency value satisfy a Gaussian distribution, which gives
To obtain the optimal result for Bc, the maximum value of the following formula is sought,
where Bf is the detection result of superpixel p obtained by the BP network, Bi is a neighbourhood superpixel, and the parameter α is obtained from the saliency difference between superpixel p and neighbourhood superpixel i: the larger the saliency difference, the larger α and the more the result depends on the preliminary detection; if α is smaller, the result is more influenced by the surrounding pixel blocks; α is:
where W(i) is the set of all neighbourhood superpixels of superpixel p, dsaliency(Sp, Si) is the saliency difference of the two superpixels p and i, dposition(Sp, Si) is the Euclidean distance between the two superpixels p and i, and c is an adjustable parameter, taken as 3; the value obtained is the detection result of the saliency constraint.
8. The saliency-based image local blur detection method according to claim 1, characterized in that in step 8 the detection result is optimized by bilateral filtering to obtain the final result, i.e. the result obtained in step 7 is optimized using bilateral filtering.
9. The saliency-based image local blur detection method according to claim 2, characterized in that in step 2 the specific method of the singular value vector is
where U and V are two orthogonal matrices, ui and vi are the column vectors of U and V respectively, S is a diagonal matrix, and δi are the diagonal entries of S, i.e. the singular value vector.
10. The saliency-based image local blur detection method according to claim 2, characterized in that in step 2 the specific method of the local extremum points is
where Si,j denotes the intensity of pixel (i, j).
CN201810498275.0A 2018-05-22 2018-05-22 Saliency-based image local blur detection method Active CN109035196B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810498275.0A CN109035196B (en) 2018-05-22 2018-05-22 Saliency-based image local blur detection method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810498275.0A CN109035196B (en) 2018-05-22 2018-05-22 Saliency-based image local blur detection method

Publications (2)

Publication Number Publication Date
CN109035196A true CN109035196A (en) 2018-12-18
CN109035196B CN109035196B (en) 2022-07-05

Family

ID=64611401

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810498275.0A Active CN109035196B (en) 2018-05-22 2018-05-22 Saliency-based image local blur detection method

Country Status (1)

Country Link
CN (1) CN109035196B (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109977892A (en) * 2019-03-31 2019-07-05 西安电子科技大学 Ship Detection based on local significant characteristics and CNN-SVM
CN110083430A (en) * 2019-04-30 2019-08-02 成都市映潮科技股份有限公司 A kind of system theme color replacing options, device and medium
CN110826726A (en) * 2019-11-08 2020-02-21 腾讯科技(深圳)有限公司 Object processing method, object processing apparatus, object processing device, and medium
CN110838150A (en) * 2019-11-18 2020-02-25 重庆邮电大学 Color recognition method for supervised learning
CN111208148A (en) * 2020-02-21 2020-05-29 凌云光技术集团有限责任公司 Dig hole screen light leak defect detecting system
CN114827432A (en) * 2021-01-27 2022-07-29 深圳市万普拉斯科技有限公司 Focusing method and system, mobile terminal and readable storage medium
CN117475091A (en) * 2023-12-27 2024-01-30 浙江时光坐标科技股份有限公司 High-precision 3D model generation method and system

Citations (5)

Publication number Priority date Publication date Assignee Title
CN101140624A (en) * 2007-10-18 2008-03-12 清华大学 Image matching method
CN104915636A (en) * 2015-04-15 2015-09-16 北京工业大学 Remote sensing image road identification method based on multistage frame significant characteristics
US20160300343A1 (en) * 2015-04-08 2016-10-13 Algotec Systems Ltd. Organ detection and segmentation
CN106780479A (en) * 2016-12-31 2017-05-31 天津大学 A kind of high precision image fuzzy detection method based on deep learning
CN107274419A (en) * 2017-07-10 2017-10-20 北京工业大学 A kind of deep learning conspicuousness detection method based on global priori and local context

Patent Citations (5)

Publication number Priority date Publication date Assignee Title
CN101140624A (en) * 2007-10-18 2008-03-12 清华大学 Image matching method
US20160300343A1 (en) * 2015-04-08 2016-10-13 Algotec Systems Ltd. Organ detection and segmentation
CN104915636A (en) * 2015-04-15 2015-09-16 北京工业大学 Remote sensing image road identification method based on multistage frame significant characteristics
CN106780479A (en) * 2016-12-31 2017-05-31 天津大学 A kind of high precision image fuzzy detection method based on deep learning
CN107274419A (en) * 2017-07-10 2017-10-20 北京工业大学 A kind of deep learning conspicuousness detection method based on global priori and local context

Cited By (12)

Publication number Priority date Publication date Assignee Title
CN109977892A (en) * 2019-03-31 2019-07-05 西安电子科技大学 Ship Detection based on local significant characteristics and CNN-SVM
CN109977892B (en) * 2019-03-31 2020-11-10 西安电子科技大学 Ship detection method based on local saliency features and CNN-SVM
CN110083430A (en) * 2019-04-30 2019-08-02 成都市映潮科技股份有限公司 A kind of system theme color replacing options, device and medium
CN110083430B (en) * 2019-04-30 2022-03-29 成都映潮科技股份有限公司 System theme color changing method, device and medium
CN110826726A (en) * 2019-11-08 2020-02-21 腾讯科技(深圳)有限公司 Object processing method, object processing apparatus, object processing device, and medium
CN110826726B (en) * 2019-11-08 2023-09-08 腾讯科技(深圳)有限公司 Target processing method, target processing device, target processing apparatus, and medium
CN110838150A (en) * 2019-11-18 2020-02-25 重庆邮电大学 Color recognition method for supervised learning
CN110838150B (en) * 2019-11-18 2022-07-15 重庆邮电大学 Color recognition method for supervised learning
CN111208148A (en) * 2020-02-21 2020-05-29 凌云光技术集团有限责任公司 Dig hole screen light leak defect detecting system
CN114827432A (en) * 2021-01-27 2022-07-29 深圳市万普拉斯科技有限公司 Focusing method and system, mobile terminal and readable storage medium
CN117475091A (en) * 2023-12-27 2024-01-30 浙江时光坐标科技股份有限公司 High-precision 3D model generation method and system
CN117475091B (en) * 2023-12-27 2024-03-22 浙江时光坐标科技股份有限公司 High-precision 3D model generation method and system

Also Published As

Publication number Publication date
CN109035196B (en) 2022-07-05

Similar Documents

Publication Publication Date Title
CN109035196A (en) Saliency-Based Image Local Blur Detection Method
Hu et al. Revisiting single image depth estimation: Toward higher resolution maps with accurate object boundaries
CN107133948B (en) Image blurring and noise evaluation method based on multitask convolution neural network
CN104850850B (en) A kind of binocular stereo vision image characteristic extracting method of combination shape and color
CN111611874B (en) Face mask wearing detection method based on ResNet and Canny
CN106951870B (en) Intelligent detection and early warning method for active visual attention of significant events of surveillance video
CN107590427B (en) Method for detecting abnormal events of surveillance video based on space-time interest point noise reduction
Espinal et al. Wavelet-based fractal signature analysis for automatic target recognition
CN108154087A (en) A kind of matched infrared human body target detection tracking method of feature based
CN106157330B (en) Visual tracking method based on target joint appearance model
CN106991686A (en) A kind of level set contour tracing method based on super-pixel optical flow field
CN109271932A (en) Pedestrian based on color-match recognition methods again
CN104657980A (en) Improved multi-channel image partitioning algorithm based on Meanshift
CN105069816B (en) A kind of method and system of inlet and outlet people flow rate statistical
CN110827312A (en) Learning method based on cooperative visual attention neural network
CN110910497B (en) Method and system for realizing augmented reality map
Tangsakul et al. Single image haze removal using deep cellular automata learning
Jia et al. Fabric defect inspection based on lattice segmentation and lattice templates
CN108073940A (en) A kind of method of 3D object instance object detections in unstructured moving grids
CN108154488B (en) A kind of image motion ambiguity removal method based on specific image block analysis
Talbot et al. Elliptical distance transforms and the object splitting problem
CN110147755B (en) Context cascade CNN-based human head detection method
Aslam et al. A review on various clustering approaches for image segmentation
Ma et al. Integration of multiresolution image segmentation and neural networks for object depth recovery
Zhang et al. CAD Technology Under the Background of Internet of Things and Its Application in Video Automatic Processing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant