CN104463917A - Image visual saliency detection method based on divisive normalization - Google Patents

Image visual saliency detection method based on divisive normalization

Info

Publication number
CN104463917A
CN104463917A (application CN201410619259.4A)
Authority
CN
China
Prior art keywords
division
sigma
channel
yellow
red
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201410619259.4A
Other languages
Chinese (zh)
Other versions
CN104463917B (en)
Inventor
余映
林洁
杨鉴
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yunnan University YNU
Original Assignee
Yunnan University YNU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yunnan University YNU filed Critical Yunnan University YNU
Priority to CN201410619259.4A priority Critical patent/CN104463917B/en
Publication of CN104463917A publication Critical patent/CN104463917A/en
Application granted granted Critical
Publication of CN104463917B publication Critical patent/CN104463917B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/90Determination of colour characteristics
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/46Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462Salient features, e.g. scale invariant feature transforms [SIFT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an image visual saliency detection method based on divisive normalization, belonging to the field of automatic bottom-up computation of visual saliency. The method uses the color and luminance features of each pixel to compute visual saliency, and employs divisive normalization to simulate the mutual inhibition among neurons in the primary visual cortex that detect similar features, which gives the method biological plausibility. Compared with traditional saliency computation methods, the computation is simple and efficient, visual saliency caused by color differences is detected accurately, the saliency values of salient regions are enhanced as a whole, and the resulting visual saliency map has clear object shapes.

Description

Image visual saliency detection method based on divisive normalization
Technical field
The invention belongs to the field of automatic computation of bottom-up visual saliency, and specifically relates to an image visual saliency detection method based on divisive normalization.
Background art
The neural resources of the human brain are limited and cannot process all visual-perception information simultaneously. Visual attention is an important visual-information-processing mechanism: it allows only a small amount of perceptual information to enter the higher cortex for processing such as short-term memory, visual awareness, recognition, and learning. Visual saliency is a perceptual property that makes a salient object or region stand out from a complex visual scene and thereby attract our attention. Some visual attention is scene-driven, called bottom-up visual saliency, while other attention is task-driven, called top-down visual saliency.
The visual saliency map is widely used in many computer-vision applications, such as attention-based object segmentation, object recognition, adaptive image compression, content-aware image resizing, and image retrieval. In a visual saliency map, the gray level of each pixel represents the saliency strength of the corresponding position in the visual scene. Itti et al. proposed "A model of saliency-based visual attention for rapid scene analysis" in 1998; the method imitates, in its computational structure, the neural mechanism by which visual saliency is formed in the human brain, and can compute the visual saliency map of an input scene image. More recently, a class of saliency detection methods has computed visual saliency from the viewpoint of information theory, including "Saliency based on information maximization" proposed by Bruce et al. in 2005 and "Graph-based visual saliency" proposed by Harel et al. in 2006. Although these algorithms achieve good saliency-detection performance, their computational cost is very high, and they still cannot run in real time.
Another class of visual saliency methods computes in the frequency domain. Hou et al. proposed "Saliency detection: a spectral residual approach" in 2007, which uses the residual between the amplitude spectrum of the Fourier transform of the input image and the average amplitude spectrum of natural images to compute the visual saliency of the input image. Yu et al. proposed "Pulse discrete cosine transform for saliency-based visual attention" in 2009, which computes the saliency map by normalizing the transform-domain coefficients of the discrete cosine transform of the input image. Frequency-domain saliency methods have low computational complexity and run fast enough for real-time systems, but their saliency maps have low resolution and cannot provide clear object shapes.
Most current saliency methods can only produce low-resolution saliency maps, and their computation is expensive. Some algorithms detect only the edges of a salient object rather than the complete object. Achanta et al. proposed "Frequency-tuned salient region detection" in 2009, which computes the saliency map from the Euclidean distance between each pixel's color value and the mean color of the entire image. Although this method is very simple and yields a full-resolution saliency map, it is not designed according to the formation mechanism of visual saliency, so its saliency maps deviate considerably from human visual perception.
Summary of the invention
The object of the invention is to propose an image visual saliency detection method based on divisive normalization, so that important object regions in an image are highlighted uniformly.
To achieve the above object, the image visual saliency detection method based on divisive normalization provided by the invention comprises the following steps:
1) Convert a color input image of M × N pixels from the RGB color space to the CIE 1976 L*a*b* color space. The conversion yields three biologically plausible color channels: the luminance channel L, the green/red opponent channel A, and the blue/yellow opponent channel B;
2) Decompose the green/red opponent channel A into two sub-channels A⁻ and A⁺, where A⁻ is obtained by setting all positive elements of matrix A to 0, and A⁺ by setting all negative elements of A to 0. Likewise decompose the blue/yellow opponent channel B into two sub-channels B⁻ and B⁺, where B⁻ is obtained by setting all positive elements of matrix B to 0, and B⁺ by setting all negative elements of B to 0. The matrices A⁻, A⁺, B⁻ and B⁺ are regarded as four color channels corresponding to green, red, blue and yellow, respectively;
3) Compute the energies of the green, red, blue, yellow and luminance channels, denoted E_g, E_r, E_b, E_y and E_L, respectively;
4) Divide each element of the green, red, blue, yellow and luminance channel matrices by the energy of its channel, i.e. apply divisive normalization;
5) Re-merge the divisively normalized green, red, blue and yellow channels into two opponent color channels, and use the normalized feature channels L̃, Ã and B̃ to form the divisively normalized image. Each pixel of this image is regarded as a point in three-dimensional space, and the Euclidean distance between a pixel and the mean of all pixels is the saliency value of that pixel.
In step 3), the energies of the green, red, blue, yellow and luminance channels are computed. Channel energy is defined as the sum of the absolute values of all elements of the channel matrix:
E_g = \sum_{x=1}^{M} \sum_{y=1}^{N} |A^-(x, y)|,
E_r = \sum_{x=1}^{M} \sum_{y=1}^{N} |A^+(x, y)|,
E_b = \sum_{x=1}^{M} \sum_{y=1}^{N} |B^-(x, y)|,
E_y = \sum_{x=1}^{M} \sum_{y=1}^{N} |B^+(x, y)|,
E_L = \sum_{x=1}^{M} \sum_{y=1}^{N} |L(x, y)|,
where E_g, E_r, E_b, E_y and E_L are the energies of the green, red, blue, yellow and luminance feature channels, respectively.
In step 4), divisive normalization divides each element of a channel matrix by the energy of that channel, i.e. by the sum of the absolute values of all its elements:
\tilde{A}^-(x, y) = A^-(x, y) / E_g,
\tilde{A}^+(x, y) = A^+(x, y) / E_r,
\tilde{B}^-(x, y) = B^-(x, y) / E_b,
\tilde{B}^+(x, y) = B^+(x, y) / E_y,
\tilde{L}(x, y) = L(x, y) / E_L,
where Ã⁻, Ã⁺, B̃⁻, B̃⁺ and L̃ are the divisively normalized green, red, blue, yellow and luminance feature channels, respectively. After divisive normalization, the energy of every channel equals 1. This means that if the energy of some color channel is very small, the absolute values (amplitudes) of its elements are relatively amplified after divisive normalization; in other words, weak color channels are relatively strengthened and strong color channels are relatively weakened. Color channels whose energy before normalization is very small (below 1% to 5% of M × N × 128) must be suppressed or set to zero after divisive normalization, to prevent weak signals that the human eye can barely perceive from being excessively amplified. The reason is that humans cannot perceive color features of very low energy, and such weak signals can be regarded as image noise.
In step 5), the divisively normalized green, red, blue and yellow channels are re-merged pairwise into two opponent color channels:
\tilde{A} = \tilde{A}^- + \tilde{A}^+,
\tilde{B} = \tilde{B}^- + \tilde{B}^+.
Here Ã and B̃ are the divisively normalized green/red and blue/yellow opponent channels, respectively.
The normalized feature channels L̃, Ã and B̃ are then used to compute the saliency map of the input image. The basic idea of the algorithm is as follows: in the divisively normalized image formed by L̃, Ã and B̃, each pixel can be regarded as a point in three-dimensional space, and the Euclidean distance between a pixel and the mean of all pixels is the saliency value of that pixel. If the computation is decomposed over the three channels, the channel saliency value of a pixel in a given channel is defined as the absolute difference between the pixel's value and the channel mean. After the three channel saliency values of each pixel are computed, they are integrated into one saliency map S (whose size remains M × N). The saliency value at a position is the Euclidean norm of the three channel saliency values at that position:
S(x, y) = \sqrt{[\omega_1 (\tilde{L}(x, y) - m_{\tilde{L}})]^2 + [\omega_2 (\tilde{A}(x, y) - m_{\tilde{A}})]^2 + [\omega_3 (\tilde{B}(x, y) - m_{\tilde{B}})]^2},
where m_L̃, m_Ã and m_B̃ are the means of the divisively normalized feature channels L̃, Ã and B̃, respectively. Three parameters ω1, ω2 and ω3 are introduced in the formula so that the weight of each channel saliency value can be adjusted flexibly; typically ω1 = 1 and ω2 = ω3 = 2.55. Finally, the saliency map S is normalized to the gray-level range [0, 255].
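The steps above (apart from the color-space conversion) can be sketched end to end in numpy. This is an illustrative reading of the method, not the patent's own code: the function and variable names are ours, and the 3% suppression threshold is one choice within the 1% to 5% range given in the text.

```python
import numpy as np

def saliency_from_lab(L, A, B, w=(1.0, 2.55, 2.55), suppress_ratio=0.03):
    """Sketch of steps 2-5 on an already-converted L*a*b* image (step 1 omitted)."""
    M, N = L.shape
    # Step 2: sign-split the opponent channels into green/red and blue/yellow.
    channels = {"green": np.minimum(A, 0.0), "red": np.maximum(A, 0.0),
                "blue": np.minimum(B, 0.0), "yellow": np.maximum(B, 0.0),
                "lum": L}
    norm = {}
    for name, c in channels.items():
        e = np.abs(c).sum()                        # step 3: channel energy
        n = c / e if e > 0 else np.zeros_like(c)   # step 4: divisive normalization
        # Suppress color channels whose original energy is too weak.
        if name != "lum" and e < suppress_ratio * M * N * 128:
            n = np.zeros_like(c)
        norm[name] = n
    A_n = norm["green"] + norm["red"]      # step 5: merged green/red channel
    B_n = norm["blue"] + norm["yellow"]    # step 5: merged blue/yellow channel
    L_n = norm["lum"]
    w1, w2, w3 = w
    # Weighted Euclidean distance of each pixel to the channel means.
    S = np.sqrt((w1 * (L_n - L_n.mean())) ** 2 +
                (w2 * (A_n - A_n.mean())) ** 2 +
                (w3 * (B_n - B_n.mean())) ** 2)
    # Normalize to the gray-level range [0, 255].
    return S / S.max() * 255.0 if S.max() > 0 else S
```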
The proposed image visual saliency computation method uses the color and luminance of each pixel to compute its visual saliency. The divisive normalization it employs simulates the mutual inhibition among neurons in the primary visual cortex that detect similar features, and is therefore biologically plausible. Compared with traditional saliency methods, it has four advantages: 1) the computation is simple and efficient; 2) visual saliency caused by color differences is detected accurately; 3) a full-resolution saliency map is obtained; 4) the saliency values of salient regions are enhanced as a whole, yielding clear shapes. On several visual-saliency test patterns and natural-image test sets, the method achieves results clearly better than other classical methods.
Brief description of the drawings
Fig. 1 is the processing flowchart of the image visual saliency detection method based on divisive normalization according to an embodiment of the invention;
Fig. 2 shows examples of visual-saliency test patterns;
Wherein: (a) the test patterns; (b) the visual saliency map computed for each test pattern;
Fig. 3 shows examples of natural images with large salient regions;
Wherein: (a) the natural images; (b) the visual saliency map computed for each natural image;
Fig. 4 shows examples of natural images with small salient regions;
Wherein: (a) the natural images; (b) the visual saliency map computed for each natural image.
Embodiment
The invention is further described below by way of example. It should be noted that the disclosed embodiment is intended to aid understanding of the invention; those skilled in the art will appreciate that various substitutions and modifications are possible without departing from the spirit and scope of the invention and the appended claims. The invention should therefore not be limited to the content disclosed in the embodiment, and the scope of protection is defined by the claims.
Fig. 1 is the processing flowchart of the image visual saliency computation method based on divisive normalization of the invention, comprising:
Step 1: convert the input image to the CIE 1976 L*a*b* color space
Convert a color input image of M × N pixels from the RGB color space to the CIE 1976 L*a*b* color space. The conversion yields three biologically plausible color channels: the luminance channel L, the green/red opponent channel A, and the blue/yellow opponent channel B.
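As an illustration of this step, the sRGB to CIE 1976 L*a*b* conversion can be written out directly with the standard D65 formulas. This is a minimal numpy sketch, not the patent's code; the function name and the assumption that input values lie in [0, 1] are ours.

```python
import numpy as np

def rgb_to_lab(rgb):
    """Convert an M x N x 3 sRGB image (values in [0, 1]) to CIE 1976 L*a*b*.

    Returns the luminance channel L and the opponent channels A (green/red)
    and B (blue/yellow) as separate M x N arrays.
    """
    rgb = np.asarray(rgb, dtype=np.float64)
    # Undo the sRGB gamma to obtain linear RGB.
    linear = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    # Linear RGB -> CIE XYZ (sRGB primaries, D65 white point).
    m = np.array([[0.4124564, 0.3575761, 0.1804375],
                  [0.2126729, 0.7151522, 0.0721750],
                  [0.0193339, 0.1191920, 0.9503041]])
    xyz = linear @ m.T
    # Normalize by the D65 reference white.
    xyz /= np.array([0.95047, 1.0, 1.08883])
    # Nonlinear compression used by CIELAB.
    delta = 6.0 / 29.0
    f = np.where(xyz > delta ** 3, np.cbrt(xyz), xyz / (3 * delta ** 2) + 4.0 / 29.0)
    fx, fy, fz = f[..., 0], f[..., 1], f[..., 2]
    L = 116.0 * fy - 16.0   # luminance channel
    A = 500.0 * (fx - fy)   # green (negative) / red (positive) opponent channel
    B = 200.0 * (fy - fz)   # blue (negative) / yellow (positive) opponent channel
    return L, A, B
```

In practice a library routine such as scikit-image's `rgb2lab` could be used instead; the explicit version above just makes the three output channels of this step visible.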
Step 2: compute the green, red, blue and yellow color channels
Decompose the green/red opponent channel A into two sub-channels A⁻ and A⁺, where A⁻ is obtained by setting all positive elements of matrix A to 0, and A⁺ by setting all negative elements of A to 0. Similarly, decompose the blue/yellow opponent channel B into two sub-channels B⁻ and B⁺, where B⁻ is obtained by setting all positive elements of matrix B to 0, and B⁺ by setting all negative elements of B to 0. By the definition of the L*a*b* color space, the matrices A⁻, A⁺, B⁻ and B⁺ can be regarded as four color channels corresponding to green, red, blue and yellow, respectively.
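This step amounts to a sign split of the two opponent channels. A minimal numpy sketch (the function name is illustrative, not from the patent):

```python
import numpy as np

def decompose_opponent_channels(A, B):
    """Split the opponent channels A and B into four color channels.

    A_minus keeps only the negative (green) part of A, A_plus the positive
    (red) part; B_minus keeps the negative (blue) part of B, B_plus the
    positive (yellow) part. The zeroed elements belong to the opposite color.
    """
    A_minus = np.minimum(A, 0.0)  # green channel: positive elements set to 0
    A_plus = np.maximum(A, 0.0)   # red channel: negative elements set to 0
    B_minus = np.minimum(B, 0.0)  # blue channel: positive elements set to 0
    B_plus = np.maximum(B, 0.0)   # yellow channel: negative elements set to 0
    return A_minus, A_plus, B_minus, B_plus
```

Note that the split is lossless: A⁻ + A⁺ reproduces A, and B⁻ + B⁺ reproduces B.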
Step 3: compute the energies of the green, red, blue, yellow and luminance channels
Channel energy is defined as the sum of the absolute values of all elements of the channel matrix:
E_g = \sum_{x=1}^{M} \sum_{y=1}^{N} |A^-(x, y)|,
E_r = \sum_{x=1}^{M} \sum_{y=1}^{N} |A^+(x, y)|,
E_b = \sum_{x=1}^{M} \sum_{y=1}^{N} |B^-(x, y)|,
E_y = \sum_{x=1}^{M} \sum_{y=1}^{N} |B^+(x, y)|,
E_L = \sum_{x=1}^{M} \sum_{y=1}^{N} |L(x, y)|,
where E_g, E_r, E_b, E_y and E_L are the energies of the green, red, blue, yellow and luminance feature channels, respectively.
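The channel energy is a single absolute-value sum. A minimal numpy sketch (the helper name is ours):

```python
import numpy as np

def channel_energy(channel):
    """Energy of a feature channel: the sum of the absolute values of its elements."""
    return np.abs(channel).sum()
```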
Step 4: apply divisive normalization to the green, red, blue, yellow and luminance channels
Divisive normalization divides each element of a channel matrix by the energy of that channel, i.e. by the sum of the absolute values of all its elements:
\tilde{A}^-(x, y) = A^-(x, y) / E_g,
\tilde{A}^+(x, y) = A^+(x, y) / E_r,
\tilde{B}^-(x, y) = B^-(x, y) / E_b,
\tilde{B}^+(x, y) = B^+(x, y) / E_y,
\tilde{L}(x, y) = L(x, y) / E_L,
where Ã⁻, Ã⁺, B̃⁻, B̃⁺ and L̃ are the divisively normalized green, red, blue, yellow and luminance feature channels, respectively. After divisive normalization, the energy of every channel equals 1. This means that if the energy of some color channel is very small, the absolute values (amplitudes) of its elements are relatively amplified after divisive normalization; in other words, weak color channels are relatively strengthened and strong color channels are relatively weakened.
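This step can be sketched as follows; the zero-energy guard is our addition to keep the code safe, not part of the patent text:

```python
import numpy as np

def divisive_normalize(channel):
    """Divide every element of a channel by the channel's energy.

    A zero-energy channel is returned as all zeros to avoid division by
    zero; such channels would be suppressed in the next step anyway.
    """
    energy = np.abs(channel).sum()
    if energy == 0:
        return np.zeros_like(channel)
    return channel / energy
```

After this call the energy of a non-empty channel is exactly 1, which is the invariant the text relies on.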
Step 5: suppress color channels with very weak energy
Color channels whose original energy is very small (for example, below 3% of the maximum possible energy M × N × 128) must be suppressed or set to zero after divisive normalization, to prevent weak signals that the human eye can barely perceive from being excessively amplified. Humans cannot perceive color features of very low energy, and such weak signals can be regarded as image noise.
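The suppression rule might be coded as below, using the 3% example threshold from the text (the patent allows anywhere in the 1% to 5% range; the function name and argument layout are ours):

```python
import numpy as np

def suppress_weak_channel(channel, energy, M, N, threshold=0.03):
    """Zero out a normalized channel whose original energy is too weak.

    `energy` is the channel's energy before divisive normalization; the
    maximum possible energy of an M x N channel is taken as M * N * 128.
    """
    if energy < threshold * M * N * 128:
        return np.zeros_like(channel)
    return channel
```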
Step 6: merge the divisively normalized green, red, blue and yellow channels into two opponent color channels
Re-merge the divisively normalized green, red, blue and yellow channels pairwise into two opponent color channels:
\tilde{A} = \tilde{A}^- + \tilde{A}^+,
\tilde{B} = \tilde{B}^- + \tilde{B}^+,
where Ã and B̃ are the divisively normalized green/red and blue/yellow opponent channels, respectively.
Step 7: compute the saliency map of the input image from the divisively normalized feature channels
The normalized feature channels L̃, Ã and B̃ are used to compute the saliency map of the input image. The basic idea of the algorithm is as follows: in the divisively normalized image formed by L̃, Ã and B̃, each pixel can be regarded as a point in three-dimensional space, and the Euclidean distance between a pixel and the mean of all pixels is the saliency value of that pixel. If the computation is decomposed over the three channels, the channel saliency value of a pixel in a given channel is defined as the absolute difference between the pixel's value and the channel mean. After the three channel saliency values of each pixel are computed, they are integrated into one saliency map S (whose size remains M × N). The saliency value at a position is the Euclidean norm of the three channel saliency values at that position:
S(x, y) = \sqrt{[\omega_1 (\tilde{L}(x, y) - m_{\tilde{L}})]^2 + [\omega_2 (\tilde{A}(x, y) - m_{\tilde{A}})]^2 + [\omega_3 (\tilde{B}(x, y) - m_{\tilde{B}})]^2},
where m_L̃, m_Ã and m_B̃ are the means of the divisively normalized feature channels L̃, Ã and B̃, respectively. Three parameters ω1, ω2 and ω3 are introduced in the formula so that the weight of each channel saliency value can be adjusted flexibly; typically ω1 = 1 and ω2 = ω3 = 2.55. Finally, the saliency map S is normalized to the gray-level range [0, 255].
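Steps 6 and 7 together can be sketched as one function that merges the normalized channels and measures each pixel's weighted Euclidean distance to the channel means. This is a sketch under our own naming, with the default weights ω1 = 1, ω2 = ω3 = 2.55 from the text:

```python
import numpy as np

def saliency_map(L_n, A_minus_n, A_plus_n, B_minus_n, B_plus_n,
                 w1=1.0, w2=2.55, w3=2.55):
    """Merge the normalized color channels and compute the saliency map.

    Each pixel is a point in (L~, A~, B~) space; its saliency is the
    weighted Euclidean distance to the mean of all pixels, scaled to [0, 255].
    """
    A_n = A_minus_n + A_plus_n   # step 6: merged green/red opponent channel
    B_n = B_minus_n + B_plus_n   # step 6: merged blue/yellow opponent channel
    S = np.sqrt((w1 * (L_n - L_n.mean())) ** 2 +   # step 7: Euclidean norm of
                (w2 * (A_n - A_n.mean())) ** 2 +   # the three channel saliency
                (w3 * (B_n - B_n.mean())) ** 2)    # values at each position
    # Normalize to the gray-level range [0, 255].
    if S.max() > 0:
        S = S / S.max() * 255.0
    return S
```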
Fig. 2 illustrates the above processing on a group of visual-saliency test patterns. Fig. 2(a) shows 4 test patterns; each image contains a single salient target with a unique color, and from the first image to the fourth the visual saliency of the target gradually weakens. In the saliency maps computed by the method, shown in Fig. 2(b), the difference between the saliency values of the target and the distractors weakens correspondingly, which matches human visual perception exactly. This example shows that the method detects even slight color changes accurately and agrees with human perception.
Fig. 3 illustrates the above processing on a group of natural images with large salient regions. Fig. 3(a) shows 7 natural images, each containing a relatively large salient-object region. In the saliency maps computed by the method, shown in Fig. 3(b), the maps are full resolution, the salient objects have clear contours, and the saliency values of the salient objects are enhanced as a whole.
Fig. 4 illustrates the above processing on a group of natural images with small salient regions. Fig. 4(a) shows 7 natural images, each containing a relatively small salient-object region. In the saliency maps computed by the method, shown in Fig. 4(b), the maps are full resolution, the salient objects have clear contours, and the saliency values of the salient regions are raised as a whole.
The saliency computation method disclosed by the invention uses only the color and luminance of each pixel of the input image to compute the visual saliency of each position. The divisive normalization it employs is biologically plausible, simulating the process by which similar features inhibit one another in the primary visual cortex of the human brain. The method is simple and efficient and obtains a full-resolution saliency map in which salient regions have clear shapes. On several visual-saliency test patterns and natural-image test sets, the invention achieves results clearly better than other classical methods. The invention automatically computes the visual saliency of every pixel in an image, and its results can be applied to important-object segmentation, object recognition, adaptive image compression, content-aware image resizing, image retrieval, and similar applications.
Although the invention is disclosed above with preferred embodiments, they are not intended to limit the invention. Any person of ordinary skill in the art may, without departing from the scope of the technical solution of the invention, use the methods and technical content disclosed above to make possible variations and modifications to the technical solution, or revise it into equivalent embodiments. Therefore, any simple modification, equivalent variation or modification of the above embodiments made according to the technical essence of the invention, without departing from the content of the technical solution, still falls within the scope of protection of the technical solution of the invention.

Claims (7)

1. An image visual saliency detection method based on divisive normalization, comprising the following steps:
1) converting a color input image of M × N pixels from the RGB color space to the CIE 1976 L*a*b* color space, the conversion yielding three biologically plausible color channels: the luminance channel L, the green/red opponent channel A, and the blue/yellow opponent channel B;
2) decomposing the green/red opponent channel A into two sub-channels A⁻ and A⁺, where A⁻ is obtained by setting all positive elements of matrix A to 0, and A⁺ by setting all negative elements of A to 0; decomposing the blue/yellow opponent channel B into two sub-channels B⁻ and B⁺, where B⁻ is obtained by setting all positive elements of matrix B to 0, and B⁺ by setting all negative elements of B to 0; and regarding the matrices A⁻, A⁺, B⁻ and B⁺ as four color channels corresponding to green, red, blue and yellow, respectively;
3) computing the energies of the green, red, blue, yellow and luminance channels, denoted E_g, E_r, E_b, E_y and E_L, respectively;
4) dividing each element of the green, red, blue, yellow and luminance channel matrices by the energy of its channel, i.e. applying divisive normalization;
5) re-merging the divisively normalized green, red, blue and yellow channels into two opponent color channels, and using the normalized feature channels L̃, Ã and B̃ to form the divisively normalized image, in which each pixel is regarded as a point in three-dimensional space and the Euclidean distance between a pixel and the mean of all pixels is the saliency value of that pixel.
2. The method of claim 1, characterized in that the channel energies in step 3) are computed as follows:
E_g = \sum_{x=1}^{M} \sum_{y=1}^{N} |A^-(x, y)|,
E_r = \sum_{x=1}^{M} \sum_{y=1}^{N} |A^+(x, y)|,
E_b = \sum_{x=1}^{M} \sum_{y=1}^{N} |B^-(x, y)|,
E_y = \sum_{x=1}^{M} \sum_{y=1}^{N} |B^+(x, y)|,
E_L = \sum_{x=1}^{M} \sum_{y=1}^{N} |L(x, y)|.
3. The method of claim 1, characterized in that the divisive normalization in step 4) is computed as follows:
\tilde{A}^-(x, y) = A^-(x, y) / E_g,
\tilde{A}^+(x, y) = A^+(x, y) / E_r,
\tilde{B}^-(x, y) = B^-(x, y) / E_b,
\tilde{B}^+(x, y) = B^+(x, y) / E_y,
\tilde{L}(x, y) = L(x, y) / E_L.
4. The method of claim 1, characterized in that, in step 4), color channels whose energy before divisive normalization is below 1% to 5% of M × N × 128 are suppressed or set to zero.
5. The method of claim 1, characterized in that, in step 5), the divisively normalized green, red, blue and yellow channels are merged pairwise as follows:
\tilde{A} = \tilde{A}^- + \tilde{A}^+,
\tilde{B} = \tilde{B}^- + \tilde{B}^+.
6. The method of claim 1, characterized in that the saliency value of a pixel in step 5) is computed as:
S(x, y) = \sqrt{[\omega_1 (\tilde{L}(x, y) - m_{\tilde{L}})]^2 + [\omega_2 (\tilde{A}(x, y) - m_{\tilde{A}})]^2 + [\omega_3 (\tilde{B}(x, y) - m_{\tilde{B}})]^2},
where m_L̃, m_Ã and m_B̃ are the means of the divisively normalized feature channels L̃, Ã and B̃, respectively, and ω1, ω2 and ω3 are weighting parameters for the channels L̃, Ã and B̃.
7. The method of claim 6, characterized in that ω1 : ω2 : ω3 = 1 : 2.55 : 2.55.
CN201410619259.4A 2014-11-06 2014-11-06 Image visual saliency detection method based on divisive normalization Active CN104463917B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410619259.4A CN104463917B (en) 2014-11-06 2014-11-06 Image visual saliency detection method based on divisive normalization

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410619259.4A CN104463917B (en) 2014-11-06 2014-11-06 Image visual saliency detection method based on divisive normalization

Publications (2)

Publication Number Publication Date
CN104463917A true CN104463917A (en) 2015-03-25
CN104463917B CN104463917B (en) 2017-10-03

Family

ID=52909899

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410619259.4A Active CN104463917B (en) 2014-11-06 2014-11-06 Image visual saliency detection method based on divisive normalization

Country Status (1)

Country Link
CN (1) CN104463917B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106407978A (en) * 2016-09-24 2017-02-15 上海大学 Unconstrained in-video salient object detection method combined with objectness degree
CN107145824A (en) * 2017-03-29 2017-09-08 纵目科技(上海)股份有限公司 A kind of lane line dividing method and system, car-mounted terminal based on significance analysis
CN113781451A (en) * 2021-09-13 2021-12-10 长江存储科技有限责任公司 Wafer detection method and device, electronic equipment and computer readable storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8369652B1 (en) * 2008-06-16 2013-02-05 Hrl Laboratories, Llc Visual attention system for salient regions in imagery
CN102930542A (en) * 2012-10-31 2013-02-13 电子科技大学 Detection method for vector saliency based on global contrast

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8369652B1 (en) * 2008-06-16 2013-02-05 Hrl Laboratories, Llc Visual attention system for salient regions in imagery
CN102930542A (en) * 2012-10-31 2013-02-13 电子科技大学 Detection method for vector saliency based on global contrast

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
EERO P. SIMONCELLI: "Modeling Surround Suppression in V1 Neurons with a Statistically-Derived Normalization Model", 《ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS》 *
PENG BIAN 等: "Visual saliency: a biologically plausible contourlet-like frequency domain approach", 《COGNITIVE NEURODYNAMICS》 *
于振洋: "Saliency region extraction method based on the frequency domain", Journal of Changsha University of Science and Technology (Natural Science) *
王卫东: "Printing Color", 31 May 2005, Printing Industry Press *
边鹏: "Research on Visual Attention", China Masters' Theses Full-text Database, Information Science and Technology *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106407978A (en) * 2016-09-24 2017-02-15 上海大学 Unconstrained in-video salient object detection method combined with objectness degree
CN106407978B (en) * 2016-09-24 2020-10-30 上海大学 Method for detecting salient object in unconstrained video by combining similarity degree
CN107145824A (en) * 2017-03-29 2017-09-08 纵目科技(上海)股份有限公司 A kind of lane line dividing method and system, car-mounted terminal based on significance analysis
CN113781451A (en) * 2021-09-13 2021-12-10 长江存储科技有限责任公司 Wafer detection method and device, electronic equipment and computer readable storage medium
CN113781451B (en) * 2021-09-13 2023-10-17 长江存储科技有限责任公司 Wafer detection method, device, electronic equipment and computer readable storage medium

Also Published As

Publication number Publication date
CN104463917B (en) 2017-10-03

Similar Documents

Publication Publication Date Title
CN103996192B (en) Non-reference image quality evaluation method based on high-quality natural image statistical magnitude model
CN104966085B (en) A kind of remote sensing images region of interest area detecting method based on the fusion of more notable features
CN101980248B (en) Improved visual attention model-based method of natural scene object detection
CN104616664B (en) A kind of audio identification methods detected based on sonograph conspicuousness
CN106650770A (en) Mura defect detection method based on sample learning and human visual characteristics
CN106228547A (en) A kind of view-based access control model color theory and homogeneity suppression profile and border detection algorithm
CN105023008A (en) Visual saliency and multiple characteristics-based pedestrian re-recognition method
CN103914699A (en) Automatic lip gloss image enhancement method based on color space
CN103325120A (en) Rapid self-adaption binocular vision stereo matching method capable of supporting weight
CN105809173B (en) A kind of image RSTN invariable attribute feature extraction and recognition methods based on bionical object visual transform
CN104572971A (en) Image retrieval method and device
CN103632153B (en) Region-based image saliency map extracting method
CN104966285A (en) Method for detecting saliency regions
CN105095857A (en) Face data enhancement method based on key point disturbance technology
CN104361574A (en) No-reference color image quality assessment method on basis of sparse representation
CN105787470A (en) Method for detecting power transmission line tower in image based on polymerization multichannel characteristic
CN104217440B (en) A kind of method extracting built-up areas from remote sensing images
CN107527054A (en) Prospect extraction method based on various visual angles fusion
CN105139401A (en) Depth credibility assessment method for depth map
CN103544488A (en) Face recognition method and device
CN103927759A (en) Automatic cloud detection method of aerial images
CN104463917A (en) Image visual saliency detection method based on division method normalization
CN102567969B (en) Color image edge detection method
CN112419258A (en) Robust environmental sound identification method based on time-frequency segmentation and convolutional neural network
CN104463122A (en) Seal recognition method based on PCNN

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant