CN109410171B - Target significance detection method for rainy image - Google Patents
- Publication number
- CN109410171B (application CN201811073630.6A)
- Authority
- CN
- China
- Prior art keywords
- image
- matrix
- saliency
- nodes
- color
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/12—Edge-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/194—Segmentation; Edge detection involving foreground-background segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/90—Determination of colour characteristics
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10024—Color image
Abstract
The invention provides a target saliency detection method for rainy-day images, comprising the following steps: S1, extract visual saliency based on brightness and color features and calculate a brightness saliency map Slc; S2, extract visual saliency based on color-difference features and calculate a color-difference saliency map Scv; S3, extract visual saliency based on the dark channel and calculate a dark-channel saliency map Sd; S4, combine the brightness saliency map Slc, the color-difference saliency map Scv and the dark-channel saliency map Sd by the mixed operation Sfinal = Slc .* Scv - Sd to obtain the final saliency map Sfinal. By exploiting the brightness/color, color-difference and dark-channel characteristics of targets in rainy-day images, the invention extracts target saliency features, constructs a target saliency detection model, and thus provides an effective preprocessing means for image clarification and for detecting targets in rain.
Description
Technical Field
The invention relates to the technical field of image processing, in particular to a target saliency detection method for rainy-day images.
Background
Visual saliency models have wide application in fields such as target detection and segmentation and video analysis, and the quality of the saliency detection result plays a crucial role in these applications. Existing methods generally fall into three types: (1) methods grounded in biology; (2) methods based on purely mathematical modeling; (3) methods combining the former two. All of these are studied bottom-up. Most research still focuses on the detection and segmentation of salient objects within the classical model framework, and the innovations are limited to improvements in modeling or differences in feature selection. Existing models mainly have the following shortcomings: (1) they do not analyze the features of salient targets in depth; (2) they do not capture the mechanisms of biological visual cognition in depth; (3) many models cannot be applied directly, or only under restrictive conditions.
Although considerable progress has been made at home and abroad and many practical methods have been proposed, few researchers have specifically studied the following problem: the visual perception characteristics of the human eye toward targets under rainy conditions. Many mature, practical saliency detection methods and visual attention models exist, but because of the randomness and particularity of rain, the change of target saliency in rainy weather and the corresponding human visual perception characteristics still require deeper study. In addition, there is no quantitative, effective standard for grading rain, and building an image database containing different grades of rain for saliency detection remains a difficult problem.
Disclosure of Invention
Based on the technical problems in the background art, the invention provides a target saliency detection method for rainy-day images.
The method provided by the invention comprises the following steps:
S1, extracting visual saliency based on brightness and color features, and calculating a brightness saliency map Slc;
S2, extracting visual saliency based on color-difference features, and calculating a color-difference saliency map Scv;
S3, extracting visual saliency based on the dark channel, and calculating a dark-channel saliency map Sd;
S4, combining the brightness saliency map Slc, the color-difference saliency map Scv and the dark-channel saliency map Sd by the mixed operation Sfinal = Slc .* Scv - Sd to obtain the final saliency map Sfinal.
Preferably, in step S1 the brightness saliency map Slc is obtained by the following steps:
S11, converting the image from the RGB color model to the LAB color model;
S12, establishing the brightness saliency map Slc from the LAB image, Slc = (L, A, B), wherein L, A, B are the values of the three channels of the LAB color model.
Preferably, the image is converted from the RGB color model to the LAB color model in step S11 as follows (the formulas below are the standard sRGB/CIELAB conversions, consistent with the default constants given next):
First, a gamma function is applied to linearize the image of the RGB color model:
gamma(c) = ((c/255 + 0.055)/1.055)^2.4 if c/255 > 0.04045, otherwise (c/255)/12.92,
wherein r, g and b are the three channels of a pixel, each with value range [0, 255].
Then, intermediate variables XYZ are set and the RGB values of the image are converted into XYZ values:
X = (0.4124 r + 0.3576 g + 0.1805 b) x 100, Y = (0.2126 r + 0.7152 g + 0.0722 b) x 100, Z = (0.0193 r + 0.1192 g + 0.9505 b) x 100.
Finally, XYZ is converted into LAB according to the preset model:
L = 116 f(Y/Yn) - 16, A = 500 (f(X/Xn) - f(Y/Yn)), B = 200 (f(Y/Yn) - f(Z/Zn)),
with f(t) = t^(1/3) for t > (6/29)^3 and f(t) = t/(3 (6/29)^2) + 4/29 otherwise,
wherein Xn, Yn, Zn are default constants.
Preferably, Xn is 95.047, Yn is 100, and Zn is 108.883.
Preferably, in step S2, the color difference saliency of the image is calculated using the absorption time of the absorption markov chain over the color field.
Preferably, in step S2, the visual saliency based on the color difference features is extracted, and the color difference saliency map S is calculatedcvThe method specifically comprises the following steps:
s21, constructing a graph based on the Markov chain according to the image, defining a background area of the image as an absorption state, and defining the average value of times required for transferring each node from the non-absorption state to the absorption state as the significance of the non-absorption state;
s22, segmenting the image into different super pixel points by using image segmentation software, and constructing a single-layer image based on the super pixel points;
s23, defining background absorption state nodes as pixel points of a background edge region, and defining non-absorption state nodes of an image edge as interconnected nodes;
s24, storing the edges as weight information in the node incidence matrix, wherein the edge weight between the nodes which are connected with each other is larger than the edge weight between the nodes which are not adjacent;
The edge eij between adjacent nodes i and j has a weight wij expressed as:
wij = exp(-||xi - xj|| / σ^2),
wherein xi, xj respectively represent the mean values of nodes i and j in the color space, and the coefficient σ is a preset constant controlling the weight strength.
S25, setting an affinity matrix A through the weights wij, deriving the degree matrix D from A, and then calculating the transition matrix P by combining A and D;
the affinity matrix A of the relations between nodes is as follows:
aij = wij if j ∈ N(i), aij = 1 if i = j, and aij = 0 otherwise,
wherein N(i) represents the set of all nodes connected to node i.
the degree matrix D is as follows:
D = diag(Σj aij)
the transition matrix P is as follows:
P = D^(-1) × A
wherein A is an unnormalized matrix and P is a sparse matrix;
S26, deriving the fundamental matrix N from the number of absorbing states r, the number of transient states t and the transition matrix P; writing P in canonical form P = [Q R; 0 I], the fundamental matrix is N = (I - Q)^(-1),
wherein Q ∈ [0,1]^(t×t) contains the transition probabilities between any pair of transient states; R ∈ [0,1]^(t×r) contains the probabilities of transferring from any transient state to any absorbing state; 0 is an r×t zero matrix, and I is an identity matrix.
S27, calculating the absorption time of each transient state as y = N × c, i.e. yi = Σj nij, wherein nij is an element of the matrix N and c is the t-dimensional all-ones column vector.
S28, obtaining the saliency map S(i) = ȳ(i) by normalizing the absorption time y, wherein i represents the sequence of non-absorbing state nodes and ȳ represents the normalized absorption-time vector.
S29, transferring the saliency value of each superpixel to its pixels to optimize the saliency map S(i), obtaining the color-difference saliency map Scv as follows:
Scv(px) = S(i), px ∈ Ri,
wherein Ri is the set of pixels belonging to superpixel i.
Preferably, in step S3 the dark-channel saliency map Sd is calculated as follows: each pixel p in the image is represented by the 3 × 3 image block centered on it, and the opposite of the dark-channel value of the block is taken as the prior value sd(p) of the pixel:
sd(p) = - min(q ∈ Ω(p)) min(ch ∈ {r,g,b}) Ich(q),
wherein Ω(p) is the 3 × 3 block centered at p and Ich(q) represents the color value of point q in channel ch.
According to the invention, the brightness/color information and the color-difference information in the three feature saliency maps are complementary, so the masking effect of the dot-product operation enhances the saliency of the target during calculation; the dark-channel saliency is related to the amount of rainfall, so a subtraction is used in the final calculation to eliminate the influence of rain on saliency.
Therefore, the invention extracts target saliency features from rainy-day images using the brightness/color, color-difference and dark-channel features of the target and constructs a target saliency detection model, thereby providing an effective preprocessing means for image clarification and for detecting targets in rain. The invention has great practical value in rainy-day military target positioning, identification and tracking, vehicle driving, post-disaster rescue, outdoor scene monitoring and other fields.
Drawings
FIG. 1 is a flowchart of a target saliency detection method for a rainy image according to the present invention;
FIG. 2 is a flow chart of luminance saliency map acquisition in one embodiment of the present invention;
fig. 3 is a flow chart of acquiring visual saliency based on color difference features in an embodiment of the present invention.
Detailed Description
Referring to fig. 1, the method for detecting the target saliency of an image in rainy days, provided by the invention, comprises the following steps:
S1, extracting visual saliency based on brightness and color features, and calculating a brightness saliency map Slc.
S2, extracting visual saliency based on color-difference features, and calculating a color-difference saliency map Scv.
In the prior art, contour saliency is obtained by detecting differences along contours, but for a color image such differences are limited. In a rainy-day image some color information is annihilated by raindrops, yet the remaining color information can still drive the saliency detection. For such an image, the color-difference saliency can be calculated using the absorption time of an absorbing Markov chain over the color domain.
S3, extracting visual saliency based on dark channel, and calculating a dark channel saliency map Sd;
S4, combining the brightness saliency map Slc, the color-difference saliency map Scv and the dark-channel saliency map Sd by the mixed operation Sfinal = Slc .* Scv - Sd to obtain the final saliency map Sfinal.
In the present embodiment, the brightness/color information and the color-difference information in the three feature saliency maps are complementary, so the masking effect of the dot-product operation enhances target saliency during calculation; the dark-channel saliency is related to the amount of rainfall, so a subtraction is used in the final calculation to eliminate the influence of rain on saliency.
In this way, the embodiment extracts target saliency features from rainy-day images using the brightness/color, color-difference and dark-channel features of the target, constructs the target saliency detection model, and provides an effective preprocessing means for image clarification and the detection of targets in rain. The method has great practical value in rainy-day military target positioning, identification and tracking, vehicle driving, post-disaster rescue, outdoor scene monitoring and other fields.
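As a concrete illustration, the mixed operation of step S4 can be sketched in a few lines. This is a minimal sketch: clipping negative values back into [0, 1] is an assumption, since the text does not state how negatives produced by the subtraction are handled.

```python
import numpy as np

def fuse_saliency(s_lc, s_cv, s_d):
    """Mixed operation of step S4: Sfinal = Slc .* Scv - Sd
    (element-wise product followed by subtraction)."""
    s_final = s_lc * s_cv - s_d
    # Clipping to [0, 1] is an assumption; the text does not say how
    # negative values produced by the subtraction are handled.
    return np.clip(s_final, 0.0, 1.0)
```

The dot product acts as a mask (a pixel must be salient in both Slc and Scv to survive), while the subtraction suppresses rain-related dark-channel responses.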
A large part of the image recognition by the human eye comes from visual recognition of brightness and color, and thus it is necessary to extract brightness and color information of the image at the same time. The LAB mode consists of three channels, the L channel is luminance, the a channel is red to dark green, and the B channel is blue to yellow. Therefore, in this embodiment, the visual saliency based on luminance and color features can be extracted by converting the image from an RGB color model to an LAB color model.
Specifically, in a further embodiment of the present invention, in step S1 the brightness saliency map Slc is obtained by the following steps:
S11, converting the image from the RGB color model to the LAB color model;
S12, establishing the brightness saliency map Slc from the LAB image, Slc = (L, A, B), wherein L, A, B are the values of the three channels of the LAB color model.
In specific implementation, because RGB cannot be directly converted into LAB, an intermediate variable XYZ needs to be set for transition, that is, RGB-XYZ-LAB is realized.
In a further embodiment of the present invention, the conversion of an image from the RGB color model to the LAB color model is implemented as follows (the formulas are the standard sRGB/CIELAB conversions, consistent with the default constants given below):
First, a gamma function is applied to linearize the image of the RGB color model:
gamma(c) = ((c/255 + 0.055)/1.055)^2.4 if c/255 > 0.04045, otherwise (c/255)/12.92,
wherein r, g and b are the three channels of a pixel, each with value range [0, 255].
This gamma correction of the RGB values lays the foundation for the subsequent conversion model.
Then, intermediate variables XYZ are set and the RGB values of the image are converted into XYZ values:
X = (0.4124 r + 0.3576 g + 0.1805 b) x 100, Y = (0.2126 r + 0.7152 g + 0.0722 b) x 100, Z = (0.0193 r + 0.1192 g + 0.9505 b) x 100.
Finally, XYZ is converted into LAB according to the preset model:
L = 116 f(Y/Yn) - 16, A = 500 (f(X/Xn) - f(Y/Yn)), B = 200 (f(Y/Yn) - f(Z/Zn)),
with f(t) = t^(1/3) for t > (6/29)^3 and f(t) = t/(3 (6/29)^2) + 4/29 otherwise,
where Xn, Yn and Zn are default constants; a specific implementation may choose Xn = 95.047, Yn = 100, Zn = 108.883.
As described above, in the present embodiment L, A and B are calculated from RGB via the XYZ conversion, giving the brightness saliency map Slc = (L*, A*, B*).
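The RGB → XYZ → LAB chain above can be sketched per pixel as follows. This sketch uses the standard sRGB/D65 formulas, which match the constants Xn = 95.047, Yn = 100, Zn = 108.883 given in the text; the patent's own formula images are not reproduced here.

```python
import math

# D65 reference white, matching the constants given in the text
XN, YN, ZN = 95.047, 100.0, 108.883

def gamma(c):
    """Inverse sRGB companding of one channel value in [0, 255]."""
    c = c / 255.0
    return ((c + 0.055) / 1.055) ** 2.4 if c > 0.04045 else c / 12.92

def rgb_to_lab(r, g, b):
    """RGB -> XYZ -> LAB for a single pixel (standard sRGB/D65 formulas)."""
    r, g, b = gamma(r), gamma(g), gamma(b)
    # Linear RGB to XYZ (sRGB matrix), scaled by 100 to match Xn, Yn, Zn
    x = (0.4124 * r + 0.3576 * g + 0.1805 * b) * 100.0
    y = (0.2126 * r + 0.7152 * g + 0.0722 * b) * 100.0
    z = (0.0193 * r + 0.1192 * g + 0.9505 * b) * 100.0

    def f(t):
        return t ** (1.0 / 3.0) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29

    fx, fy, fz = f(x / XN), f(y / YN), f(z / ZN)
    L = 116.0 * fy - 16.0
    A = 500.0 * (fx - fy)
    B = 200.0 * (fy - fz)
    return L, A, B
```

Pure white (255, 255, 255) maps to L ≈ 100 with A and B near 0, and pure black maps to (0, 0, 0), which is a quick sanity check for the conversion.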
In a further embodiment of the present invention, step S2 extracts the visual saliency based on color-difference features and calculates the color-difference saliency map Scv through the following steps:
s21, constructing a graph based on the Markov chain according to the image, defining the background area of the image as an absorption state, and defining the average value of the times required by each node to be transferred from the non-absorption state to the absorption state as the significance of the non-absorption state. The constructed incidence matrix generally depends on a sparse incidence graph, so that each node in the constructed graph of the Markov chain corresponds to one state in the Markov chain.
And S22, segmenting the image into different super pixel points by using image segmentation software, and constructing a single-layer image based on the super pixel points.
S23, defining the background absorbing-state nodes as pixel points of the background edge region, and defining the non-absorbing state nodes on the image edges as interconnected nodes. Since the four edges of the single-layer graph cannot be occupied by salient objects at the same time, the background absorbing-state nodes can be defined as pixels in the background edge region. Each node in the single-layer graph is connected to adjacent non-absorbing nodes or shares the same edge with peripheral nodes. No pair of absorbing-state nodes is directly connected. In addition, the non-absorbing state nodes on the image edges are defined here as interconnected nodes, which reduces the distance between similar nodes.
S24, storing the edges as weight information in the node affinity matrix, wherein the edge weight between connected nodes is larger than that between non-adjacent nodes. The edge eij between adjacent nodes i and j has a weight wij expressed as:
wij = exp(-||xi - xj|| / σ^2),
wherein xi, xj respectively represent the mean values of nodes i and j in the color space, and the coefficient σ is a preset constant controlling the weight strength. In a specific implementation σ is a constant, but it is not fixed and can be adjusted freely according to the behavior of the algorithm.
S25 passing weight value wijSetting an incidence matrix A, deducing a time matrix D according to the incidence matrix A, and then calculating a transfer matrix P by combining the incidence matrix A and the time matrix D;
the correlation matrix a of the correlations between nodes is as follows:
wherein N (i) represents the union of all nodes connected to node i;
the degree matrix D is as follows:
D = diag(Σj aij)
the transition matrix P is as follows:
P = D^(-1) × A
where A is the unnormalized matrix and P is the sparse matrix.
In this step, since only one transfer happens at a time, the average time needed to move from a transient node vt to an absorbing node va mainly depends on two factors: first, the spatial distance between the two nodes (the larger the distance, the longer the average time); second, the transition probabilities along the path from vt to va (the higher the probabilities, the shorter the transfer time).
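Assuming superpixel mean colors and an adjacency structure as inputs (both names are hypothetical), step S25 can be sketched as below. The weight form wij = exp(-||xi - xj|| / σ²) is the one commonly used in absorbing-Markov-chain saliency; the patent's own formula image is not reproduced, so it is an assumption here.

```python
import numpy as np

def build_transition_matrix(features, neighbors, sigma=0.1):
    """Build the affinity matrix A, degree matrix D and transition matrix
    P = D^-1 A from superpixel mean colors.

    features  : (n, 3) array of mean LAB colors per node (assumed input).
    neighbors : dict mapping node i to the set N(i) of connected nodes.
    """
    n = len(features)
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] = 1.0                      # a_ij = 1 when i == j
        for j in neighbors.get(i, ()):     # connected nodes get a color-based weight
            w = np.exp(-np.linalg.norm(features[i] - features[j]) / sigma ** 2)
            A[i, j] = A[j, i] = w          # unconnected pairs stay at 0
    D = np.diag(A.sum(axis=1))             # degree matrix D = diag(sum_j a_ij)
    P = np.linalg.inv(D) @ A               # transition matrix P = D^-1 A
    return A, D, P
```

By construction P is row-stochastic (each row sums to 1), as a transition matrix must be, and non-adjacent node pairs keep weight 0, so P stays sparse in practice.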
S26, deducing a basic matrix N according to the absorption state number r, the absorption chain number t of the transition state and the transition matrix P, wherein the deduction formula is as follows:
wherein Q ∈ [0,1 ]]t×tTransition probabilities between any pair of transition states; r is an element of [0,1 ]]t×rIncluding the probability of transitioning from any state to any absorbing state; 0 is an r multiplied by t zero matrix, and I is an r multiplied by r unit matrix;
Since the sparse matrix P is known from step S25, the block Q can be extracted from it, and the fundamental matrix N = (I - Q)^(-1) can then be derived.
S27, absorbing time y ═ Σ according to each transition statejnijX c, wherein nijC is a unit column vector of dimension t, which is an element of the matrix N.
Specifically, in this step, nijIt can be considered that the chain starts from the transition state i and stays in the transition state j for the desired time, and therefore according to the formula y ∑jnijXc can calculate the absorbed time for each transition state.
S28, obtaining a saliency map by normalizing the absorption time yWherein i represents a sequence of non-absorbing state nodes,represents a normalized absorption time vector;
s29, converting the significant value on the super pixel point to each pixel point, optimizing the significant graph S (i) to obtain the color difference significant degree graph ScvThe following were used:
Scv=S(i) px∈Ri。
Therefore, the transfer from superpixels to pixels in step S29 facilitates the subsequent multi-feature linear fusion.
In step S3 of a further embodiment of the present invention, the dark-channel saliency map Sd is calculated as follows: each pixel p in the image is represented by the 3 × 3 image block centered on it, and the opposite of the dark-channel value of the block is taken as the prior value sd(p) of the pixel:
sd(p) = - min(q ∈ Ω(p)) min(ch ∈ {r,g,b}) Ich(q),
wherein Ω(p) is the 3 × 3 block centered at p and Ich(q) represents the color value of point q in channel ch.
The so-called dark channel prior states that, for non-sky image patches (such as squares, rivers, buildings, etc.), there is always at least one pixel in some channel of the RGB color space with a very low intensity value (0 or close to 0). Such dark pixels are mainly produced by dark or strongly colored objects and by shadows, which are exactly the characteristics of the salient objects studied here. When sky appears in an image, however, it is usually displayed as background with large intensity values, so it cannot be said to contain a dark channel. Therefore, this embodiment exploits the dark-channel property in the field of salient object detection, using this prior information as a feature of the algorithm to improve the accuracy of the saliency analysis.
In this embodiment, the normalization calculation brings the algorithm's accuracy down to the pixel level and yields more accurate results. However, not all pictures suit the dark-channel prior: pictures with a darker background or a brighter foreground may affect the saliency detection result in the opposite direction. This adverse effect can be eliminated or reduced as much as possible by calculating the mean gray level of the pixels along the contour edge of the dark-channel prior map.
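A sketch of the dark-channel prior of step S3 for an image scaled to [0, 1]. Taking 1 - min as the "opposite value" of the dark channel is an assumption, since the text does not fix the exact form:

```python
import numpy as np

def dark_channel_saliency(img, patch=3):
    """For each pixel, take the minimum over the RGB channels inside a
    patch x patch block centered on it, and use its opposite (here 1 - min,
    assuming img is scaled to [0, 1]) as the prior value s_d(p)."""
    h, w, _ = img.shape
    pad = patch // 2
    min_ch = img.min(axis=2)                    # per-pixel minimum over channels
    padded = np.pad(min_ch, pad, mode='edge')   # replicate borders for edge pixels
    s_d = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            # minimum over the patch x patch block centered at (i, j)
            s_d[i, j] = 1.0 - padded[i:i + patch, j:j + patch].min()
    return s_d
```

Dark regions (low minimum channel value) thus receive high prior values, matching the observation that salient objects tend to be dark, strongly colored, or shadowed.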
In a specific embodiment of the invention, 146 original rainy-day images were collected following the construction approach of classical image databases; the rainy-day image database was built from two sub-libraries, a sample library and a standard library, and experimental evaluation was finally carried out with 7 standard evaluation indexes. The results show that the method of the invention outperforms classical methods. The invention thus provides an effective preprocessing means for image clarification and the detection of targets in rain, and has great practical value in rainy-day military target positioning, identification and tracking, vehicle driving, post-disaster rescue, outdoor scene monitoring and other fields.
The above description is only a preferred embodiment of the present invention, but the scope of protection of the present invention is not limited thereto; any equivalent substitution or change made by a person skilled in the art according to the technical solutions and inventive concept of the present invention, within the technical scope disclosed by the present invention, shall fall within the scope of protection of the present invention.
Claims (2)
1. A target saliency detection method for a rainy image, characterized by comprising the steps of:
S1, extracting visual saliency based on brightness and color features, and calculating a brightness saliency map Slc;
S2, extracting visual saliency based on color-difference features, and calculating a color-difference saliency map Scv;
S3, extracting visual saliency based on the dark channel, and calculating a dark-channel saliency map Sd;
S4, combining the brightness saliency map Slc, the color-difference saliency map Scv and the dark-channel saliency map Sd by the mixed operation Sfinal = Slc .* Scv - Sd to obtain the final saliency map Sfinal;
In the method, in step S1 the brightness saliency map Slc is obtained by the following steps:
S11, converting the image from the RGB color model to the LAB color model;
S12, establishing the brightness saliency map Slc from the LAB image, Slc = (L, A, B), wherein L, A, B are the values of the three channels of the LAB color model;
In the method, the image is converted from the RGB color model to the LAB color model in step S11 as follows (the formulas below are the standard sRGB/CIELAB conversions, consistent with the default constants of claim 2):
first, a gamma function is applied to linearize the image of the RGB color model:
gamma(c) = ((c/255 + 0.055)/1.055)^2.4 if c/255 > 0.04045, otherwise (c/255)/12.92,
wherein r, g and b are the three channels of a pixel, each with value range [0, 255];
then, intermediate variables XYZ are set and the RGB values of the image are converted into XYZ values:
X = (0.4124 r + 0.3576 g + 0.1805 b) x 100, Y = (0.2126 r + 0.7152 g + 0.0722 b) x 100, Z = (0.0193 r + 0.1192 g + 0.9505 b) x 100;
finally, XYZ is converted into LAB according to the preset model:
L = 116 f(Y/Yn) - 16, A = 500 (f(X/Xn) - f(Y/Yn)), B = 200 (f(Y/Yn) - f(Z/Zn)),
with f(t) = t^(1/3) for t > (6/29)^3 and f(t) = t/(3 (6/29)^2) + 4/29 otherwise,
wherein Xn, Yn, Zn are default constants;
In the method, in step S2 the color-difference saliency of the image is calculated using the absorption time of an absorbing Markov chain over the color domain;
in step S2, the visual saliency based on color-difference features is extracted and the color-difference saliency map Scv is calculated through the following steps:
s21, constructing a graph based on the Markov chain according to the image, defining a background area of the image as an absorption state, and defining the average value of times required for transferring each node from the non-absorption state to the absorption state as the significance of the non-absorption state;
s22, segmenting the image into different super pixel points by using image segmentation software, and constructing a single-layer image based on the super pixel points;
s23, defining background absorption state nodes as pixel points of a background edge region, and defining non-absorption state nodes of an image edge as interconnected nodes;
s24, storing the edges as weight information in the node incidence matrix, wherein the edge weight between the nodes which are connected with each other is larger than the edge weight between the nodes which are not adjacent;
the edge eij between adjacent nodes i and j has a weight wij expressed as:
wij = exp(-||xi - xj|| / σ^2),
wherein xi, xj respectively represent the mean values of nodes i and j in the color space, and the coefficient σ is a preset constant controlling the weight strength;
S25, setting an affinity matrix A through the weights wij, deriving the degree matrix D from A, and then calculating the transition matrix P by combining A and D;
the affinity matrix A of the relations between nodes is as follows:
aij = wij if j ∈ N(i), aij = 1 if i = j, and aij = 0 otherwise,
wherein N(i) represents the set of all nodes connected to node i;
the degree matrix D is as follows:
D = diag(Σj aij)
the transition matrix P is as follows:
P = D^(-1) × A
wherein A is an unnormalized matrix and P is a sparse matrix;
S26, deriving the fundamental matrix N from the number of absorbing states r, the number of transient states t and the transition matrix P; writing P in canonical form P = [Q R; 0 I], the fundamental matrix is N = (I - Q)^(-1),
wherein Q ∈ [0,1]^(t×t) contains the transition probabilities between any pair of transient states; R ∈ [0,1]^(t×r) contains the probabilities of transferring from any transient state to any absorbing state; 0 is an r×t zero matrix, and I is an identity matrix;
S27, calculating the absorption time of each transient state as y = N × c, i.e. yi = Σj nij, wherein nij is an element of the matrix N and c is the t-dimensional all-ones column vector;
S28, obtaining the saliency map S(i) = ȳ(i) by normalizing the absorption time y, wherein i represents the sequence of non-absorbing state nodes and ȳ represents the normalized absorption-time vector;
S29, transferring the saliency value of each superpixel to its pixels to optimize the saliency map S(i), obtaining the color-difference saliency map Scv as follows:
Scv(px) = S(i), px ∈ Ri;
In the method, in step S3 the dark-channel saliency map Sd is calculated as follows: each pixel p in the image is represented by the 3 × 3 image block centered on it, and the opposite of the dark-channel value of the block is taken as the prior value sd(p) of the pixel:
sd(p) = - min(q ∈ Ω(p)) min(ch ∈ {r,g,b}) Ich(q),
wherein Ω(p) is the 3 × 3 block centered at p and Ich(q) represents the color value of point q in channel ch.
2. The method of claim 1, wherein Xn = 95.047, Yn = 100, and Zn = 108.883.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811073630.6A CN109410171B (en) | 2018-09-14 | 2018-09-14 | Target significance detection method for rainy image |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811073630.6A CN109410171B (en) | 2018-09-14 | 2018-09-14 | Target significance detection method for rainy image |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109410171A CN109410171A (en) | 2019-03-01 |
CN109410171B true CN109410171B (en) | 2022-02-18 |
Family
ID=65464945
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811073630.6A Active CN109410171B (en) | 2018-09-14 | 2018-09-14 | Target significance detection method for rainy image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109410171B (en) |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110008969B (en) * | 2019-04-15 | 2021-05-14 | 京东方科技集团股份有限公司 | Method and device for detecting image saliency region |
CN111080722B (en) * | 2019-12-11 | 2023-04-21 | 中山大学 | Color migration method and system based on significance detection |
CN111310768B (en) * | 2020-01-20 | 2023-04-18 | 安徽大学 | Saliency target detection method based on robustness background prior and global information |
CN112465746B (en) * | 2020-11-02 | 2024-03-05 | 新疆天维无损检测有限公司 | Method for detecting small defects in ray film |
CN113158715A (en) * | 2020-11-05 | 2021-07-23 | 西安天伟电子系统工程有限公司 | Ship detection method and device |
CN112381076B (en) * | 2021-01-18 | 2021-03-23 | 西南石油大学 | Method for preprocessing picture in video significance detection task |
CN112861880B (en) * | 2021-03-05 | 2021-12-07 | 江苏实达迪美数据处理有限公司 | Weak supervision RGBD image saliency detection method and system based on image classification |
CN114022747B (en) * | 2022-01-07 | 2022-03-15 | 中国空气动力研究与发展中心低速空气动力研究所 | Salient object extraction method based on feature perception |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101980248B * | 2010-11-09 | 2012-12-05 | Xidian University | Improved visual attention model-based method of natural scene object detection |
CN102129693B * | 2011-03-15 | 2012-07-25 | Tsinghua University | Image visual saliency calculation method based on color histogram and global contrast |
US9025880B2 * | 2012-08-29 | 2015-05-05 | Disney Enterprises, Inc. | Visual saliency estimation for images and video |
CN106780430B * | 2016-11-17 | 2019-08-09 | Dalian University of Technology | Image saliency detection method based on surroundedness and Markov model |
CN106780476A * | 2016-12-29 | 2017-05-31 | Hangzhou Dianzi University | Stereo-image saliency detection method based on human-eye stereoscopic vision characteristics |
CN107292318B * | 2017-07-21 | 2019-08-09 | Peking University Shenzhen Graduate School | Image saliency object detection method based on center dark channel prior information |
- 2018-09-14: Application CN201811073630.6A filed in China (CN); granted as patent CN109410171B, legal status Active
Also Published As
Publication number | Publication date |
---|---|
CN109410171A (en) | 2019-03-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109410171B (en) | Target significance detection method for rainy image | |
CN108985238B (en) | Impervious surface extraction method and system combining deep learning and semantic probability | |
CN108399362B (en) | Rapid pedestrian detection method and device | |
CN111986099B (en) | Tillage monitoring method and system based on a convolutional neural network fused with residual correction | |
CN113065558A (en) | Lightweight small target detection method combined with attention mechanism | |
CN108596108B (en) | Aerial remote sensing image change detection method based on triple semantic relation learning | |
CN101828201B (en) | Image processing device and method, and learning device and method | |
CN113569724B (en) | Road extraction method and system based on attention mechanism and dilation convolution | |
CN107610118B | Image segmentation quality evaluation method based on d_M | |
CN106960182A (en) | Pedestrian re-identification method integrating multiple features | |
CN101710418A (en) | Interactive image segmentation method based on geodesic distance | |
CN111563408B (en) | High-resolution image landslide automatic detection method with multi-level perception characteristics and progressive self-learning | |
CN110705634A (en) | Heel model identification method and device and storage medium | |
CN113033385A (en) | Deep learning-based violation building remote sensing identification method and system | |
CN113516771A (en) | Building change feature extraction method based on live-action three-dimensional model | |
CN116524189A (en) | High-resolution remote sensing image semantic segmentation method based on coding and decoding indexing edge characterization | |
Wang et al. | Haze removal algorithm based on single-images with chromatic properties | |
CN112927252B (en) | Newly-added construction land monitoring method and device | |
CN104637060A (en) | Image segmentation method based on neighborhood PCA (Principal Component Analysis)-Laplace | |
Singh et al. | Visibility enhancement and dehazing: Research contribution challenges and direction | |
Pal et al. | Visibility enhancement techniques for fog degraded images: a comparative analysis with performance evaluation | |
CN115497006B (en) | Urban remote sensing image change depth monitoring method and system based on dynamic mixing strategy | |
CN111079807A (en) | Ground object classification method and device | |
Miah | A real time road sign recognition using neural network | |
CN114596562A (en) | Rice field weed identification method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||