CN109410171A - Target saliency detection method for rainy-day images - Google Patents
Target saliency detection method for rainy-day images
- Publication number
- CN109410171A (application number CN201811073630.6A)
- Authority
- CN
- China
- Prior art keywords
- image
- matrix
- saliency map
- node
- value
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/12—Edge-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/194—Segmentation; Edge detection involving foreground-background segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/90—Determination of colour characteristics
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10024—Color image
Abstract
A target saliency detection method for rainy-day images proposed by the present invention comprises the following steps: S1, extract the visual saliency based on luminance and color features and compute the luminance saliency map S_lc; S2, extract the visual saliency based on color-difference features and compute the color-difference saliency map S_cv; S3, extract the visual saliency based on the dark channel and compute the dark-channel saliency map S_d; S4, combine the luminance saliency map S_lc, the color-difference saliency map S_cv and the dark-channel saliency map S_d to obtain the final saliency map S_final, S_final = S_lc .* S_cv − S_d. The invention exploits the luminance and color features, the color-difference features and the dark-channel features of targets in rainy-day images, extracts the target saliency features of rainy-day images, and constructs a target saliency detection model, providing an effective preprocessing step for image clarification and for the detection of targets in rain.
Description
Technical field
The present invention relates to the field of image processing, and in particular to a target saliency detection method for rainy-day images.
Background technique
Visual saliency models are widely applied; the more mature applications include target detection and segmentation, video analysis, and so on, and the quality of saliency detection results is vital to these applications. Broadly, existing methods can be divided into three types: (1) methods grounded in biological research; (2) methods based on pure mathematical modeling; (3) methods that combine the former two. All of these are bottom-up approaches. Most technical studies still focus on saliency detection and target segmentation, and still build on classical model frameworks; the innovations of these methods lie only in improvements to the modeling or in different feature selections. Existing models mainly have the following shortcomings: (1) they do not analyze and study the characteristics of salient targets deeply enough; (2) they cannot fully explain the mechanism of biological visual cognition; (3) many models cannot be applied directly, or their application is subject to conditions and limitations.
Surveying the state of research at home and abroad, although certain achievements have been made and many practicable methods proposed, few scholars have studied the following problem: the visual perception characteristics of the human eye toward targets under rainy conditions. Many mature and practical methods for target saliency detection have been proposed, and visual attention models have been studied in depth; however, owing to the randomness and particularity of rain, research on how rain changes the human visual perception of target saliency still needs to go further. In addition, there is as yet no quantitative, effective standard for grading rain, and how to build an image database containing rain of different grades for saliency detection remains an open problem.
Summary of the invention
In view of the technical problems in the background art, the present invention proposes a target saliency detection method for rainy-day images.
A target saliency detection method for rainy-day images proposed by the present invention comprises the following steps:
S1, extract the visual saliency based on luminance and color features, and compute the luminance saliency map S_lc;
S2, extract the visual saliency based on color-difference features, and compute the color-difference saliency map S_cv;
S3, extract the visual saliency based on the dark channel, and compute the dark-channel saliency map S_d;
S4, combine the luminance saliency map S_lc, the color-difference saliency map S_cv and the dark-channel saliency map S_d to obtain the final saliency map S_final, S_final = S_lc .* S_cv − S_d.
Preferably, in step S1, the luminance saliency map S_lc is obtained as follows:
S11, convert the image from the RGB color model to the LAB color model;
S12, establish the luminance saliency map S_lc from the LAB image, S_lc = (L*, A*, B*), where L*, A*, B* are the values of the three channels of the LAB image.
Preferably, in step S11 the image is converted from the RGB color model to the LAB color model as follows:
First, a gamma function is applied to linearize the RGB values:
gamma(t) = ((t + 0.055)/1.055)^2.4, if t > 0.04045; t/12.92, otherwise,
where t = r/255, g/255, b/255, and r, g, b are the three channels of a pixel with value range [0, 255];
Then, an intermediate variable XYZ is introduced and the linearized RGB values of the image are converted to XYZ values,
[X, Y, Z]^T = M × [R, G, B]^T,
where M is a coefficient matrix;
Finally, XYZ is converted to LAB according to a preset model;
The preset model is:
L* = 116 f(Y/Yn) − 16, A* = 500 [f(X/Xn) − f(Y/Yn)], B* = 200 [f(Y/Yn) − f(Z/Zn)],
where f(t) = t^(1/3) for t > (6/29)^3 and f(t) = t/(3(6/29)^2) + 4/29 otherwise, and Xn, Yn, Zn are preset constants.
Preferably, Xn = 95.047, Yn = 100, Zn = 108.883.
Preferably, in step S2, the color-difference saliency of the image is computed on the color gamut using the absorbed time of an absorbing Markov chain.
Preferably, extracting the visual saliency based on color-difference features in step S2 and computing the color-difference saliency map S_cv specifically comprises the following steps:
S21, construct a graph based on a Markov chain from the image; the background region of the image is defined as the absorbing states, and the mean number of steps needed for each node to move from a transient (non-absorbing) state to absorption is defined as the saliency of that transient state;
S22, divide the image into superpixels with image-segmentation software, and construct a single-layer graph based on the superpixels;
S23, define the background absorbing-state nodes as the pixels of the background edge region, and define the transient nodes on the image border as nodes connected to each other;
S24, store the edges as weight information in the node incidence matrix, the edge weights between mutually adjacent nodes being much larger than those between non-adjacent nodes;
The weight w_ij on the edge e_ij between adjacent nodes i and j is expressed as:
w_ij = exp(−||x_i − x_j|| / σ²),
where x_i and x_j denote the mean values of nodes i and j in the color space, and the factor σ is a preset constant controlling the strength of the weights;
S25, set up the incidence matrix A from the weights w_ij, derive the degree matrix D from A, and then compute the transfer matrix P from A and D;
The incidence matrix A of the correlation between nodes is as follows:
a_ij = w_ij, if j ∈ N(i); 1, if i = j; 0, otherwise,
where N(i) denotes the set of all nodes connected to node i;
The degree matrix D is as follows:
D = diag(∑_j a_ij)
The transfer matrix P is as follows:
P = D⁻¹ × A
where A is the un-normalized matrix and P is a sparse matrix;
S26, derive the fundamental matrix N from the number of absorbing states r of the graph, the number of transient states t of the absorbing chain, and the transfer matrix P; with the transient states numbered first, P takes the canonical form P = [ Q R ; 0 I ], and the fundamental matrix is inferred as
N = (I_t − Q)⁻¹,
where Q ∈ [0,1]^{t×t} contains the transition probabilities between any pair of transient states; R ∈ [0,1]^{t×r} contains the probabilities of moving from any transient state to any absorbing state; 0 is the r×t zero matrix; I is the r×r identity matrix; and I_t is the t×t identity matrix;
S27, compute the absorbed time of each transient state, y = N × c, so that y_i = ∑_j n_ij, where n_ij is the (i, j) element of the matrix N and c is the t-dimensional all-ones column vector;
S28, obtain the saliency map by normalizing the absorbed time y: S(i) = ȳ(i), i = 1, 2, 3, …, t, where i denotes the index of a transient node and ȳ denotes the normalized absorbed-time vector;
S29, transfer the saliency values from the superpixels back to individual pixels to refine the saliency map S(i), obtaining the color-difference saliency map S_cv as follows:
S_cv(p_x) = S(i), p_x ∈ R_i.
Preferably, in step S3, the dark-channel saliency map S_d is computed as follows: each pixel in the image is represented by the 3×3 image block centered on it, and the inverse of the dark-channel value of that block is computed and taken as the prior value s_d(p) of the pixel, giving the dark-channel saliency map; the calculation formula is as follows:
s_d(p) = 1 − min_{q ∈ Ω(p)} ( min_{ch ∈ {r,g,b}} I_ch(q) ),
where Ω(p) is the 3×3 block centered on p and I_ch(q) denotes the color value of point q in the corresponding channel ch.
In the present invention, the luminance-color information and the color-difference information in the three feature saliency maps are complementary, so element-wise multiplication is used as a masking operation to enhance the saliency of the target; the dark-channel saliency is related to the amount of rainfall, so the final calculation uses subtraction to remove the influence of rain on the saliency.
In this way, the present invention exploits the luminance and color features, the color-difference features and the dark-channel features of targets in rainy-day images, extracts the target saliency features of rainy-day images, and constructs a target saliency detection model, providing an effective preprocessing step for image clarification and for the detection of targets in rain. The invention has great practical value in fields such as military target localization, recognition and tracking in rain, vehicle driving, post-disaster relief, and outdoor scene monitoring.
Description of the drawings
Fig. 1 is a flow chart of a target saliency detection method for rainy-day images proposed by the present invention;
Fig. 2 is a flow chart of obtaining the luminance saliency map in one embodiment of the invention;
Fig. 3 is a flow chart of obtaining the visual saliency based on color-difference features in one embodiment of the invention.
Specific embodiment
Referring to Fig. 1, a target saliency detection method for rainy-day images proposed by the present invention comprises the following steps:
S1, extract the visual saliency based on luminance and color features, and compute the luminance saliency map S_lc.
S2, extract the visual saliency based on color-difference features, and compute the color-difference saliency map S_cv.
In the prior art, the most common approach is to obtain contour saliency by detecting differences along contours; however, this still falls short for color images. Part of the color information in a rainy-day image is drowned out by raindrops, but the remaining color information can still determine the saliency detection of the image. For rainy-day images, therefore, the color-difference saliency can be computed on the color gamut using the absorbed time of an absorbing Markov chain.
S3, extract the visual saliency based on the dark channel, and compute the dark-channel saliency map S_d.
S4, combine the luminance saliency map S_lc, the color-difference saliency map S_cv and the dark-channel saliency map S_d to obtain the final saliency map S_final, S_final = S_lc .* S_cv − S_d.
In this embodiment, the luminance-color information and the color-difference information in the three feature saliency maps are complementary, so element-wise multiplication is used as a masking operation to enhance the saliency of the target; the dark-channel saliency is related to the amount of rainfall, so the final step uses subtraction to remove the influence of rain on the saliency.
In this way, this embodiment exploits the luminance and color features, the color-difference features and the dark-channel features of targets in rainy-day images, extracts the target saliency features of rainy-day images, and constructs a target saliency detection model, providing an effective preprocessing step for image clarification and for the detection of targets in rain. This has great practical value in fields such as military target localization, recognition and tracking in rain, vehicle driving, post-disaster relief, and outdoor scene monitoring.
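The fusion step above can be sketched in a few lines. This is a minimal illustration, not the patent's implementation; it assumes the three maps are single-channel arrays of equal size normalized to [0, 1], and the clipping of negative values after the subtraction is also an assumption:

```python
import numpy as np

def fuse_saliency(s_lc, s_cv, s_d):
    """Final saliency map: S_final = S_lc .* S_cv - S_d.

    The element-wise product masks/enhances target saliency; subtracting
    the dark-channel map removes the rain-related component. Clipping to
    [0, 1] is an assumption, not stated in the patent.
    """
    s_final = s_lc * s_cv - s_d
    return np.clip(s_final, 0.0, 1.0)
```

The element-wise product acts as a mask: a pixel must score well in both the luminance and the color-difference map to stay salient, while the subtraction suppresses regions whose saliency is explained by rain.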
A large part of human visual image recognition comes from the perception of luminance and color, so the luminance and color information of the image must be extracted together. The LAB model consists of three channels: the L channel is luminance, the A channel runs from red to dark green, and the B channel runs from blue to yellow. In this embodiment, therefore, the visual saliency based on luminance and color features can be extracted by converting the image from the RGB color model to the LAB color model.
Specifically, in a further embodiment of the present invention, in step S1 the luminance saliency map S_lc is obtained as follows:
S11, convert the image from the RGB color model to the LAB color model;
S12, establish the luminance saliency map S_lc from the LAB image, S_lc = (L*, A*, B*), where L*, A*, B* are the values of the three channels of the LAB image.
In practice, since RGB cannot be converted to LAB directly, an intermediate variable XYZ is needed as a bridge, i.e., RGB → XYZ → LAB.
In a further embodiment of the present invention, a specific way of converting the image from the RGB color model to the LAB color model is as follows:
First, a gamma function is applied to linearize the RGB values:
gamma(t) = ((t + 0.055)/1.055)^2.4, if t > 0.04045; t/12.92, otherwise,
where t = r/255, g/255, b/255, and r, g, b are the three channels of a pixel with value range [0, 255].
In this embodiment, normalizing and linearizing the RGB values lays the foundation for the subsequent numerical conversion model.
Then, an intermediate variable XYZ is introduced and the linearized RGB values of the image are converted to XYZ values,
[X, Y, Z]^T = M × [R, G, B]^T,
where M is a coefficient matrix of preset constants.
Finally, XYZ is converted to LAB according to a preset model.
The preset model is:
L* = 116 f(Y/Yn) − 16, A* = 500 [f(X/Xn) − f(Y/Yn)], B* = 200 [f(Y/Yn) − f(Z/Zn)],
where f(t) = t^(1/3) for t > (6/29)^3 and f(t) = t/(3(6/29)^2) + 4/29 otherwise, and Xn, Yn, Zn are preset constants; in practice Xn = 95.047, Yn = 100, Zn = 108.883 may be selected.
In this way, in this embodiment, L*, A*, B* are computed from RGB via the XYZ conversion, obtaining the luminance saliency map S_lc, S_lc = (L*, A*, B*).
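The RGB → XYZ → LAB pipeline can be sketched as follows. This is a minimal NumPy illustration using the standard sRGB linearization and a D65 coefficient matrix; the patent does not reproduce its matrix M in the extracted text, so that matrix is an assumption here, while Xn, Yn, Zn are the constants given above:

```python
import numpy as np

# Assumed sRGB/D65 coefficient matrix (the patent's M is not shown).
M = np.array([[0.4124, 0.3576, 0.1805],
              [0.2126, 0.7152, 0.0722],
              [0.0193, 0.1192, 0.9505]])
Xn, Yn, Zn = 95.047, 100.0, 108.883  # preset constants from the patent

def rgb_to_lab(img):
    """img: H x W x 3 uint8 RGB image -> H x W x 3 float LAB image."""
    t = img.astype(np.float64) / 255.0
    # gamma linearization (sRGB inverse companding)
    lin = np.where(t > 0.04045, ((t + 0.055) / 1.055) ** 2.4, t / 12.92)
    xyz = lin @ M.T * 100.0                      # RGB -> XYZ
    ratio = xyz / np.array([Xn, Yn, Zn])
    eps = (6.0 / 29.0) ** 3
    f = np.where(ratio > eps, np.cbrt(ratio),
                 ratio / (3 * (6.0 / 29.0) ** 2) + 4.0 / 29.0)
    L = 116.0 * f[..., 1] - 16.0
    A = 500.0 * (f[..., 0] - f[..., 1])
    B = 200.0 * (f[..., 1] - f[..., 2])
    return np.stack([L, A, B], axis=-1)          # S_lc = (L*, A*, B*)
```

As a sanity check, a pure white pixel maps to L* ≈ 100 with A*, B* near 0, and a black pixel maps to L* = 0.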
In a further embodiment of the present invention, extracting the visual saliency based on color-difference features in step S2 and computing the color-difference saliency map S_cv specifically comprises the following steps:
S21, construct a graph based on a Markov chain from the image; the background region of the image is defined as the absorbing states, and the mean number of steps needed for each node to move from a transient state to absorption is defined as the saliency of that transient state. Constructing the incidence matrix generally relies on a sparse association graph, so in the constructed graph each node corresponds to one state of the Markov chain.
S22, divide the image into superpixels with image-segmentation software, and construct a single-layer graph based on the superpixels.
S23, define the background absorbing-state nodes as the pixels of the background edge region, and define the transient nodes on the image border as nodes connected to each other. Because the four borders of the single-layer graph cannot all be occupied by a salient target at the same time, the background absorbing-state nodes can be defined as the pixels of the background edge region. In the single-layer graph, each node is connected to its neighboring transient nodes, or shares an edge with the nodes surrounding it, so no pair of absorbing-state nodes is directly connected. In addition, the transient nodes on the image border are here also defined as connected to each other; in this way the distance between similar nodes is reduced.
S24, store the edges as weight information in the node incidence matrix, the edge weights between mutually adjacent nodes being much larger than those between non-adjacent nodes. The weight w_ij on the edge e_ij between adjacent nodes i and j is expressed as:
w_ij = exp(−||x_i − x_j|| / σ²),
where x_i and x_j denote the mean values of nodes i and j in the color space, and the factor σ is a preset constant controlling the strength of the weights. In practice the weight strength σ is a constant, but not a unique one; it can be adjusted freely according to the behavior of the algorithm.
S25, set up the incidence matrix A from the weights w_ij, derive the degree matrix D from A, and then compute the transfer matrix P from A and D.
The incidence matrix A of the correlation between nodes is as follows:
a_ij = w_ij, if j ∈ N(i); 1, if i = j; 0, otherwise,
where N(i) denotes the set of all nodes connected to node i.
The degree matrix D is as follows:
D = diag(∑_j a_ij)
The transfer matrix P is as follows:
P = D⁻¹ × A
where A is the un-normalized matrix and P is a sparse matrix.
In this step, since each transition moves by only one node, the average number of steps needed to move from a transient node v_t to an absorbing node v_a is determined mainly by two factors: first, the spatial distance between the two nodes (the larger the distance, the longer the average time); second, the transition probabilities along the path traversed from v_t to v_a (the higher the probabilities, the shorter the transition time).
S26, derive the fundamental matrix N from the number of absorbing states r of the graph, the number of transient states t of the absorbing chain, and the transfer matrix P; with the transient states numbered first, P takes the canonical form P = [ Q R ; 0 I ], and the fundamental matrix is inferred as
N = (I_t − Q)⁻¹,
where Q ∈ [0,1]^{t×t} contains the transition probabilities between any pair of transient states; R ∈ [0,1]^{t×r} contains the probabilities of moving from any transient state to any absorbing state; 0 is the r×t zero matrix; I is the r×r identity matrix; and I_t is the t×t identity matrix.
Since the sparse matrix P in step S25 is a known matrix, the matrix Q can be extracted from it, and the fundamental matrix N derived.
S27, compute the absorbed time of each transient state, y = N × c, so that y_i = ∑_j n_ij, where n_ij is the (i, j) element of the matrix N and c is the t-dimensional all-ones column vector.
Specifically, in this step, n_ij can be regarded as the expected time that a chain starting from transient state i spends in transient state j; hence y_i = ∑_j n_ij gives the absorbed time of each transient state.
S28, obtain the saliency map by normalizing the absorbed time y: S(i) = ȳ(i), i = 1, 2, 3, …, t, where i denotes the index of a transient node and ȳ denotes the normalized absorbed-time vector.
S29, transfer the saliency values from the superpixels back to individual pixels to refine the saliency map S(i), obtaining the color-difference saliency map S_cv as follows:
S_cv(p_x) = S(i), p_x ∈ R_i,
where R_i is the region covered by superpixel i. In this way, transferring the saliency from superpixels to pixels in step S29 facilitates the linear fusion of multiple features in the next step.
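Steps S24 to S28 can be sketched on a toy graph as follows. This is an illustrative NumPy implementation of the absorbed-time computation, not the patent's code; the node features, adjacency list and σ are hypothetical inputs (in the patent the nodes are superpixels and the absorbing nodes are border-background superpixels):

```python
import numpy as np

def absorbed_time_saliency(x, edges, absorbing, sigma=0.1):
    """x: length-n node features (scalar or vector per node);
    edges: list of (i, j) pairs of adjacent nodes;
    absorbing: boolean mask of absorbing-state nodes.
    Returns normalized saliency over the transient nodes."""
    n = len(x)
    x = np.asarray(x, dtype=float).reshape(n, -1)
    A = np.eye(n)                                        # a_ii = 1
    for i, j in edges:                                   # S24: edge weights
        w = np.exp(-np.linalg.norm(x[i] - x[j]) / sigma**2)
        A[i, j] = A[j, i] = w
    P = A / A.sum(axis=1, keepdims=True)                 # S25: P = D^-1 A
    tr = ~np.asarray(absorbing)                          # transient states
    Q = P[np.ix_(tr, tr)]                                # S26: extract Q
    N = np.linalg.inv(np.eye(Q.shape[0]) - Q)            # N = (I - Q)^-1
    y = N @ np.ones(Q.shape[0])                          # S27: absorbed times
    return (y - y.min()) / (y.max() - y.min() + 1e-12)   # S28: normalize
```

On a chain of four equal-colored nodes whose last node is absorbing, the transient node farthest from the absorbing state takes longest to be absorbed and therefore receives the highest saliency, as the patent's two-factor explanation predicts.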
In step S3 of a further embodiment of the present invention, the dark-channel saliency map S_d is computed as follows: each pixel in the image is represented by the 3×3 image block centered on it, and the inverse of the dark-channel value of that block is computed and taken as the prior value s_d(p) of the pixel, giving the dark-channel saliency map; the calculation formula is as follows:
s_d(p) = 1 − min_{q ∈ Ω(p)} ( min_{ch ∈ {r,g,b}} I_ch(q) ),
where Ω(p) is the 3×3 block centered on pixel p and I_ch(q) denotes the color value of point q in the corresponding channel ch.
The so-called dark-channel prior states that for image patches that do not contain sky (e.g., squares, rivers, buildings), at least one channel of the RGB color space always contains pixels whose intensity is very low (0 or close to 0). These mainly arise from relatively dark colors, or from the colors and shadows of targets, and these are exactly the features possessed by the salient targets we study. When sky appears in an image, however, it is usually presented as background with high intensity, and then the image cannot be said to contain a dark channel. Therefore, the dark-channel property of the image can be exploited in the field of saliency detection; in this embodiment its prior information is used as one feature of the algorithm to improve the accuracy of the saliency analysis.
In this embodiment, computing at the pixel level through normalization improves the precision of the algorithm and yields more accurate detection results. However, not every picture is suited to the dark-channel prior value; for pictures with a darker background or a brighter foreground, it may have the opposite effect on the saliency detection result. Therefore, the grayscale mean of the pixels along the contour edges of the dark-channel prior map can be computed to eliminate, or at least weaken, any negative effect produced.
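The dark-channel prior value described above can be sketched as follows. This assumes a float RGB image normalized to [0, 1] and uses a plain 3×3 minimum over a padded array rather than any particular library routine:

```python
import numpy as np

def dark_channel_saliency(img, patch=3):
    """img: H x W x 3 float array in [0, 1] -> H x W saliency map s_d,
    where s_d(p) = 1 - min over the 3x3 block of the per-pixel channel min."""
    h, w, _ = img.shape
    per_pixel_min = img.min(axis=2)              # min over channels ch
    pad = patch // 2
    padded = np.pad(per_pixel_min, pad, mode="edge")
    s_d = np.empty((h, w))
    for i in range(h):                           # min over the 3x3 block
        for j in range(w):
            s_d[i, j] = 1.0 - padded[i:i + patch, j:j + patch].min()
    return s_d
```

Bright sky-like regions (channel minimum near 1) get saliency near 0, while dark or strongly colored targets (some channel near 0) get saliency near 1, matching the prior's intent.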
In a specific embodiment of the invention, following the design of classical image databases, 146 original rainy-day images were collected to build a rainy-day image database consisting of two sub-libraries, a sample library and a standard library, and experiments were evaluated with 7 standard evaluation indexes. The results show that the method of the invention outperforms classical methods. The invention provides an effective preprocessing step for image clarification and for the detection of targets in rain, and has great practical value in fields such as military target localization, recognition and tracking in rain, vehicle driving, post-disaster relief, and outdoor scene monitoring.
The above is only a preferred embodiment of the present invention, but the protection scope of the present invention is not limited thereto. Any person skilled in the art who, within the technical scope disclosed by the present invention, makes equivalent substitutions or changes according to the technical scheme of the present invention and its inventive concept shall be covered by the protection scope of the present invention.
Claims (7)
1. A target saliency detection method for rainy-day images, characterized by comprising the following steps:
S1, extract the visual saliency based on luminance and color features, and compute the luminance saliency map S_lc;
S2, extract the visual saliency based on color-difference features, and compute the color-difference saliency map S_cv;
S3, extract the visual saliency based on the dark channel, and compute the dark-channel saliency map S_d;
S4, combine the luminance saliency map S_lc, the color-difference saliency map S_cv and the dark-channel saliency map S_d to obtain the final saliency map S_final, S_final = S_lc .* S_cv − S_d.
2. The target saliency detection method for rainy-day images according to claim 1, characterized in that in step S1 the luminance saliency map S_lc is obtained as follows:
S11, convert the image from the RGB color model to the LAB color model;
S12, establish the luminance saliency map S_lc from the LAB image, S_lc = (L*, A*, B*), where L*, A*, B* are the values of the three channels of the LAB image.
3. The target saliency detection method for rainy-day images according to claim 2, characterized in that in step S11 the image is converted from the RGB color model to the LAB color model as follows:
First, a gamma function is applied to linearize the RGB values:
gamma(t) = ((t + 0.055)/1.055)^2.4, if t > 0.04045; t/12.92, otherwise,
where t = r/255, g/255, b/255, and r, g, b are the three channels of a pixel with value range [0, 255];
Then, an intermediate variable XYZ is introduced and the linearized RGB values of the image are converted to XYZ values, [X, Y, Z]^T = M × [R, G, B]^T, where M is a coefficient matrix;
Finally, XYZ is converted to LAB according to a preset model;
The preset model is:
L* = 116 f(Y/Yn) − 16, A* = 500 [f(X/Xn) − f(Y/Yn)], B* = 200 [f(Y/Yn) − f(Z/Zn)],
where f(t) = t^(1/3) for t > (6/29)^3 and f(t) = t/(3(6/29)^2) + 4/29 otherwise, and Xn, Yn, Zn are preset constants.
4. The target saliency detection method for rainy-day images according to claim 3, characterized in that Xn = 95.047, Yn = 100, Zn = 108.883.
5. The target saliency detection method for rainy-day images according to claim 1, characterized in that in step S2 the color-difference saliency of the image is computed on the color gamut using the absorbed time of an absorbing Markov chain.
6. The target saliency detection method for rainy-day images according to claim 5, characterized in that extracting the visual saliency based on color-difference features in step S2 and computing the color-difference saliency map S_cv specifically comprises the following steps:
S21, construct a graph based on a Markov chain from the image; the background region of the image is defined as the absorbing states, and the mean number of steps needed for each node to move from a transient state to absorption is defined as the saliency of that transient state;
S22, divide the image into superpixels with image-segmentation software, and construct a single-layer graph based on the superpixels;
S23, define the background absorbing-state nodes as the pixels of the background edge region, and define the transient nodes on the image border as nodes connected to each other;
S24, store the edges as weight information in the node incidence matrix, the edge weights between mutually adjacent nodes being much larger than those between non-adjacent nodes;
The weight w_ij on the edge e_ij between adjacent nodes i and j is expressed as:
w_ij = exp(−||x_i − x_j|| / σ²),
where x_i and x_j denote the mean values of nodes i and j in the color space, and the factor σ is a preset constant controlling the strength of the weights;
S25, set up the incidence matrix A from the weights w_ij, derive the degree matrix D from A, and then compute the transfer matrix P from A and D;
The incidence matrix A of the correlation between nodes is as follows:
a_ij = w_ij, if j ∈ N(i); 1, if i = j; 0, otherwise,
where N(i) denotes the set of all nodes connected to node i;
The degree matrix D is as follows:
D = diag(∑_j a_ij)
The transfer matrix P is as follows:
P = D⁻¹ × A
where A is the un-normalized matrix and P is a sparse matrix;
S26, derive the fundamental matrix N from the number of absorbing states r of the graph, the number of transient states t of the absorbing chain, and the transfer matrix P; with the transient states numbered first, P takes the canonical form P = [ Q R ; 0 I ], and the fundamental matrix is inferred as
N = (I_t − Q)⁻¹,
where Q ∈ [0,1]^{t×t} contains the transition probabilities between any pair of transient states; R ∈ [0,1]^{t×r} contains the probabilities of moving from any transient state to any absorbing state; 0 is the r×t zero matrix; I is the r×r identity matrix; and I_t is the t×t identity matrix;
S27, compute the absorbed time of each transient state, y = N × c, so that y_i = ∑_j n_ij, where n_ij is the (i, j) element of the matrix N and c is the t-dimensional all-ones column vector;
S28, obtain the saliency map by normalizing the absorbed time y: S(i) = ȳ(i), i = 1, 2, 3, …, t, where i denotes the index of a transient node and ȳ denotes the normalized absorbed-time vector;
S29, transfer the saliency values from the superpixels back to individual pixels to refine the saliency map S(i), obtaining the color-difference saliency map S_cv as follows:
S_cv(p_x) = S(i), p_x ∈ R_i.
7. The target saliency detection method for rainy-day images according to claim 1, characterized in that in step S3 the dark-channel saliency map S_d is computed as follows: each pixel in the image is represented by the 3×3 image block centered on it, and the inverse of the dark-channel value of that block is computed and taken as the prior value s_d(p) of the pixel, giving the dark-channel saliency map; the calculation formula is as follows:
s_d(p) = 1 − min_{q ∈ Ω(p)} ( min_{ch ∈ {r,g,b}} I_ch(q) ),
where Ω(p) is the 3×3 block centered on p and I_ch(q) denotes the color value of point q in the corresponding channel ch.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811073630.6A CN109410171B (en) | 2018-09-14 | 2018-09-14 | Target significance detection method for rainy image |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811073630.6A CN109410171B (en) | 2018-09-14 | 2018-09-14 | Target significance detection method for rainy image |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109410171A (en) | 2019-03-01 |
CN109410171B CN109410171B (en) | 2022-02-18 |
Family
ID=65464945
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811073630.6A Active CN109410171B (en) | 2018-09-14 | 2018-09-14 | Target significance detection method for rainy image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109410171B (en) |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111080722A (en) * | 2019-12-11 | 2020-04-28 | 中山大学 | Color migration method and system based on significance detection |
CN111310768A (en) * | 2020-01-20 | 2020-06-19 | 安徽大学 | Saliency target detection method based on robustness background prior and global information |
WO2020211522A1 (en) * | 2019-04-15 | 2020-10-22 | 京东方科技集团股份有限公司 | Method and device for detecting salient area of image |
CN112381076A (en) * | 2021-01-18 | 2021-02-19 | 西南石油大学 | Method for preprocessing picture in video significance detection task |
CN112465746A (en) * | 2020-11-02 | 2021-03-09 | 新疆天维无损检测有限公司 | Method for detecting small defects in radiographic film |
CN112861880A (en) * | 2021-03-05 | 2021-05-28 | 江苏实达迪美数据处理有限公司 | Weak supervision RGBD image saliency detection method and system based on image classification |
CN113158715A (en) * | 2020-11-05 | 2021-07-23 | 西安天伟电子系统工程有限公司 | Ship detection method and device |
CN114022747A (en) * | 2022-01-07 | 2022-02-08 | 中国空气动力研究与发展中心低速空气动力研究所 | Salient object extraction method based on feature perception |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101980248A (en) * | 2010-11-09 | 2011-02-23 | Xidian University | Natural-scene object detection method based on an improved visual attention model |
CN102129693A (en) * | 2011-03-15 | 2011-07-20 | Tsinghua University | Image visual saliency computation method based on color histogram and global contrast |
US9025880B2 (en) * | 2012-08-29 | 2015-05-05 | Disney Enterprises, Inc. | Visual saliency estimation for images and video |
CN106780476A (en) * | 2016-12-29 | 2017-05-31 | Hangzhou Dianzi University | Stereo-image saliency detection method based on human binocular vision characteristics |
CN106780430A (en) * | 2016-11-17 | 2017-05-31 | Dalian University of Technology | Image saliency detection method based on surroundedness and a Markov model |
CN107292318A (en) * | 2017-07-21 | 2017-10-24 | Peking University Shenzhen Graduate School | Salient object detection method based on center dark channel prior information |
- 2018-09-14: Application CN201811073630.6A filed in China (CN); granted as CN109410171B; legal status: Active
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2020211522A1 (en) * | 2019-04-15 | 2020-10-22 | BOE Technology Group Co., Ltd. | Method and device for detecting a salient region of an image |
CN111080722A (en) * | 2019-12-11 | 2020-04-28 | Sun Yat-sen University | Color transfer method and system based on saliency detection |
CN111080722B (en) * | 2019-12-11 | 2023-04-21 | Sun Yat-sen University | Color transfer method and system based on saliency detection |
CN111310768A (en) * | 2020-01-20 | 2020-06-19 | Anhui University | Salient object detection method based on robust background prior and global information |
CN112465746A (en) * | 2020-11-02 | 2021-03-09 | Xinjiang Tianwei Nondestructive Testing Co., Ltd. | Method for detecting small defects in radiographic film |
CN112465746B (en) * | 2020-11-02 | 2024-03-05 | Xinjiang Tianwei Nondestructive Testing Co., Ltd. | Method for detecting small defects in radiographic film |
CN113158715A (en) * | 2020-11-05 | 2021-07-23 | Xi'an Tianwei Electronic System Engineering Co., Ltd. | Ship detection method and device |
CN112381076A (en) * | 2021-01-18 | 2021-02-19 | Southwest Petroleum University | Method for preprocessing images in a video saliency detection task |
CN112861880A (en) * | 2021-03-05 | 2021-05-28 | Jiangsu Shida Dimei Data Processing Co., Ltd. | Weakly supervised RGB-D image saliency detection method and system based on image classification |
CN112861880B (en) * | 2021-03-05 | 2021-12-07 | Jiangsu Shida Dimei Data Processing Co., Ltd. | Weakly supervised RGB-D image saliency detection method and system based on image classification |
CN114022747A (en) * | 2022-01-07 | 2022-02-08 | Low Speed Aerodynamics Institute, China Aerodynamics Research and Development Center | Salient object extraction method based on feature perception |
Also Published As
Publication number | Publication date |
---|---|
CN109410171B (en) | 2022-02-18 |
Similar Documents
Publication | Title |
---|---|
CN109410171A (en) | Target saliency detection method for rainy-day images |
CN111784602B (en) | Image restoration method based on a generative adversarial network |
CN111709902A (en) | Infrared and visible-light image fusion method based on a self-attention mechanism |
CN111625608B (en) | Method and system for generating an electronic map from remote-sensing images based on a GAN model |
CN110956094A (en) | RGB-D multi-modal fusion person detection method based on an asymmetric two-stream network |
CN101271578B (en) | Depth sequence generation method for converting 2D video into stereo video |
CN110555465B (en) | Weather image recognition method based on CNN and multi-feature fusion |
CN111738064B (en) | Haze concentration recognition method for hazy images |
CN112434796A (en) | Cross-modal pedestrian re-identification method based on local information learning |
RU2476825C2 (en) | Method of controlling a moving object and apparatus for realising said method |
CN110288550B (en) | Single-image defogging method using a prior-knowledge-guided conditional generative adversarial network |
CN109741285B (en) | Method and system for constructing an underwater image dataset |
CN112819096B (en) | Construction method of a fossil image classification model based on a composite convolutional neural network |
CN110378848A (en) | Image defogging method based on a derivative-map fusion strategy |
CN105139385A (en) | Image visual saliency region detection method based on deep autoencoder reconstruction |
CN109919246A (en) | Pedestrian re-identification method based on adaptive feature clustering and multi-loss fusion |
CN111666852A (en) | Micro-expression recognition method using a two-stream convolutional neural network |
CN112686276A (en) | Flame detection method based on an improved RetinaNet network |
CN104751111A (en) | Method and system for recognizing human action in video |
CN110458208A (en) | Hyperspectral image classification method based on information measure |
CN114841846A (en) | Robust self-encoding color-image watermarking method based on visual perception |
CN108460794A (en) | Binocular stereo infrared salient target detection method and system |
CN107451975A (en) | Vision-weighted similar-image quality sharpening method |
CN110334628A (en) | Outdoor monocular image depth estimation method based on structured random forests |
CN112070691A (en) | Image defogging method based on U-Net |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||