CN101944178A - Significant region extraction method for intelligent monitoring - Google Patents

Significant region extraction method for intelligent monitoring

Info

Publication number
CN101944178A
Authority
CN
China
Prior art keywords
contrast
brightness
value
color
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN 201010292932
Other languages
Chinese (zh)
Other versions
CN101944178B (en)
Inventor
孙建德
刘琚
张杰
杨彩霞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong University
Original Assignee
Shandong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong University filed Critical Shandong University
Priority to CN201010292932A priority Critical patent/CN101944178B/en
Publication of CN101944178A publication Critical patent/CN101944178A/en
Application granted granted Critical
Publication of CN101944178B publication Critical patent/CN101944178B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Abstract

The invention provides a significant region extraction method for intelligent monitoring, which comprises the following steps: (1) multi-scale transform; (2) local contrast feature extraction; (3) feature map formation; (4) global normalization; and (5) linear combination. Saliency in the saliency map is represented by the gray level of the image: the larger the gray value, the more attention that location attracts and the greater its saliency; conversely, the smaller the gray value, the lower the saliency. Taking into account the visual characteristic that the human eye is sensitive to strong contrast, the invention makes full use of the three most basic and important low-level features characterizing an image, namely brightness, texture and color, performs multi-scale local contrast extraction on the image, and extracts the salient region of the image through a series of operations including iterative interpolation summation, normalization and linear combination; the accuracy and completeness of salient region extraction are thereby significantly improved.

Description

Salient region extraction method for intelligent monitoring
Technical field
The present invention relates to a salient region extraction technique applicable to intelligent monitoring, and belongs to the technical field of image and multimedia signal processing.
Background art
With the development of society, intelligent monitoring systems are becoming more and more widespread. For example, important public places such as customs offices, banks, stations and shops are now all equipped with surveillance cameras. In most cases, however, the resulting video data are captured by monitoring the scene at a uniform resolution; because the available resolution cannot be concentrated on the important areas, those areas or objects often cannot be distinguished effectively. This mode of processing is cumbersome and causes trouble in applications such as criminal investigation and case detection. There is therefore an urgent need for intelligent monitoring that can detect important areas automatically.
With the development of the biological sciences, many research results have shown that the human eye does not attend to all regions of a visual image equally. The eye always devotes more visual attention to image regions with stronger stimuli, and less attention to flat regions; this is the main reason why human vision achieves highly efficient information processing and target acquisition. If an intelligent monitoring system could more accurately emulate this attention mechanism of biological vision, attending only to the salient regions of an image that contain targets of interest while ignoring flat regions, then redundant information in the image could be discarded in advance while potentially important areas, i.e. target areas, received focused attention. Research into visual attention models applicable to intelligent monitoring therefore extracts the salient regions of an image to serve as the important-area detection and acquisition stage of monitoring.
Extracting salient regions with visual attention models has become a research focus in recent years, and several visual attention models have been proposed and applied in many areas. The classical Itti attention model extracts low-level features of the input image such as color, brightness and orientation, then analyzes and fuses them into a saliency map. Its basic idea is to define the contrast between a pixel and its surrounding region in aspects such as color, brightness and orientation as the saliency value of that point: the stronger the contrast, the larger the saliency value. Although this model can find the focus of attention, the salient regions it obtains are quite inaccurate and do not match the object boundaries in the image. To apply visual attention models better in practice, the Itti model must be improved. Since brightness, texture and color are the essential features characterizing an image, the present invention considers these three features and, instead of the previous center-surround difference computation, adopts a new local contrast characterization method to extract the salient regions of the image, making the salient region extraction method better suited to application in intelligent monitoring.
Summary of the invention
Addressing the inaccuracy and incompleteness of existing salient region extraction algorithms based on visual attention models, the present invention proposes a salient region extraction method for intelligent monitoring based on the local contrast of multi-scale low-level features. The method significantly improves the accuracy and completeness of salient region extraction and conforms to the visual characteristics of the human eye.
The salient region extraction method for intelligent monitoring of the present invention comprises the following steps:
(1) Multi-scale transform: filter the input image to obtain images of the input image at m different scales, m ∈ {4, 6, 8};
(2) Local contrast feature extraction: at each scale, compute local contrast maps for the three low-level features of brightness, texture and color respectively. Each local contrast map is computed with a sliding pixel window that yields the local contrast value at each position: for brightness, the local contrast map is computed by a method based on the Weber-Fechner law; for texture, by the gray variance; and for color, by a color-difference method in the HSI color space based on visual perception;
(3) Feature map formation: for each feature, apply an iterative interpolation summation operation to the local contrast maps at all scales to form that feature's feature map;
(4) Global normalization: normalize the feature maps to compute the saliency value of each pixel in the three feature maps, obtaining three saliency maps;
(5) Linear combination: linearly combine the three saliency maps to obtain the final saliency map.
Saliency in the saliency map is represented by the gray level of the image: the larger the gray value at a location, the more attention it attracts and the greater its saliency; conversely, the smaller the gray value, the lower the saliency.
The filtering in step (1) uses a Gaussian pyramid model to perform multi-stage filtering of the input image.
Computing a local contrast map in step (2) uses a sliding local pixel window to compute the local contrast value at each position. When computing the contrast value for the pixel at a given position, that position corresponds to the center pixel of the window; the contrast between that point and the surrounding window-sized region is computed, and the resulting value serves as the local contrast value at that point. Brightness, texture and color are computed separately, yielding three local contrast maps.
The brightness local contrast map based on the Weber-Fechner law is computed as follows:

$$I_{CM}(x,y) = c\,\lg\frac{I_j^{\max}}{I_j^{\mathrm{avg}}} = c\,\lg\frac{\max\{I_1, I_2, \ldots, I_n, \ldots, I_N\}}{\frac{1}{N}\sum_{n=1}^{N} I_n}$$

where $I_{CM}(x,y)$ is the brightness contrast value at pixel $(x,y)$, $c$ is a constant, $I_j^{\max}$ and $I_j^{\mathrm{avg}}$ are respectively the maximum and mean brightness in the $j$-th window, $N = (2k+1)\times(2k+1)$ is the number of pixels in the window, and $I_n$ $(n \in \{1, 2, \ldots, N\})$ is the brightness value of any pixel in the window.
The texture local contrast map based on gray variance is computed as follows:

$$T_{CM}(x,y) = \left[\frac{1}{N-1}\sum_{n=1}^{N}\left(I_n - \frac{1}{N}\sum_{n=1}^{N} I_n\right)^2\right]^{\frac{1}{2}}$$

where $I_n$ $(n \in \{1, 2, \ldots, N\})$ is the brightness value of any pixel in the window.
The local contrast map in the HSI color space based on visual perception is computed as follows. First, for two color values in the HSI color space, $Y_1 = (H_1, S_1, I_1)^T$ and $Y_2 = (H_2, S_2, I_2)^T$, the color difference is defined as:

$$\Delta_{HSI}(Y_1, Y_2) = \sqrt{(\Delta I)^2 + (\Delta C)^2}$$

where $\Delta I = |I_1 - I_2|$ and $\Delta C = \sqrt{S_1^2 + S_2^2 - 2 S_1 S_2 \cos\theta}$, with

$$\theta = \begin{cases} |H_1 - H_2|, & \text{if } |H_1 - H_2| \le \pi \\ 2\pi - |H_1 - H_2|, & \text{if } |H_1 - H_2| > \pi \end{cases}$$

The local contrast map of color is therefore computed by the following formula:

$$C_{CM}(x,y) = \frac{1}{N-1}\sum_{n=1}^{N-1}\Delta_{HSI}\big(Y(x,y), Y_n\big)$$
In step (2), the local contrast maps of each feature are computed separately at the different scales; that is, brightness, texture and color each yield several local contrast maps, one per scale.
Step (3) forms the feature map of each feature as follows:
a. Starting from the scale with the lowest resolution, interpolate the local contrast map at the lower-resolution scale up to the size of the adjacent higher-resolution scale;
b. Add the interpolated local contrast map to the local contrast map at the corresponding scale;
c. Repeat steps a and b iteratively until the accumulation ends at the highest-resolution scale, obtaining the feature map.
The present invention takes into account the visual characteristic that the human eye is sensitive to strong contrast, makes full use of the three most basic and important low-level features characterizing an image, namely brightness, texture and color, performs multi-scale local contrast extraction on the image, and extracts the salient region of the image through a series of operations including iterative interpolation summation, normalization and linear combination; the accuracy and completeness of salient region extraction are thereby significantly improved.
Description of drawings
Fig. 1 is the framework diagram of the method of the invention.
Fig. 2 shows saliency maps obtained by simulation of the method of the invention.
Fig. 3 compares the visual attention location map obtained by eye-movement experimental verification with the saliency map obtained by the method of the invention.
Embodiment
The present invention extracts salient regions using local contrast based on multi-scale low-level features: the local contrast of pixel features is extracted as the measure of saliency. Combined with a multi-scale transform algorithm, contrast maps are obtained at different scales, and an iterative interpolation summation algorithm combines the contrast maps of the different scales into a single feature map of the same size as the original image. The saliency map of the image is then obtained by normalization and linear combination.
Fig. 1 shows the overall framework of the method of the invention. Following the flow shown in Fig. 1, the method of the present invention comprises the following concrete steps:
1. Formation of multi-scale local contrast maps
The input image is filtered in multiple stages with a Gaussian pyramid and downsampled to obtain images of the original at m different scales, where scale 1 is the input image. Scales 2 through m are respectively 1/2 to $1/2^{m-1}$ of the input image size; as the sampling level increases, the image resolution decreases gradually. At each scale, the local contrast maps of the brightness, texture and color features are computed separately, so each feature yields m local contrast maps at the different scales. Here m ∈ {4, 6, 8}.
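As an illustration only, a minimal Python sketch of this multi-scale decomposition, assuming OpenCV's pyrDown as the Gaussian filter-and-downsample step (the invention does not prescribe a particular library, and the function name here is a hypothetical helper):

    import cv2

    def gaussian_pyramid(image, m=6):
        # Level 1 is the input image; each further level is Gaussian-filtered
        # and downsampled by 2, so scale s is 1/2**(s-1) of the input size.
        levels = [image]
        for _ in range(m - 1):
            levels.append(cv2.pyrDown(levels[-1]))
        return levels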
The specific algorithms for the local contrast maps of the three features, brightness, texture and color, are as follows.
The present invention uses a local pixel window of size (2k+1)×(2k+1) (k a nonnegative integer) to compute the local contrast value at each position. When computing the contrast value for the pixel at a given position, that position corresponds to the center pixel of the window; the contrast between that point and the surrounding window-sized region is computed, and the resulting value serves as the local contrast value at that point.
Computation of the brightness contrast map. According to the Weber-Fechner law, subjective brightness is proportional to the logarithm of the light stimulus intensity. The subjective brightness of each pixel is therefore computed with the following formula as the brightness contrast of that point:

$$I_{CM}(x,y) = c\,\lg\frac{I_j^{\max}}{I_j^{\mathrm{avg}}} = c\,\lg\frac{\max\{I_1, I_2, \ldots, I_n, \ldots, I_N\}}{\frac{1}{N}\sum_{n=1}^{N} I_n}$$

where $I_{CM}(x,y)$ is the brightness contrast at pixel $(x,y)$, $c$ is a constant, $I_j^{\max}$ and $I_j^{\mathrm{avg}}$ are respectively the maximum and mean brightness in the $j$-th window, $N = (2k+1)\times(2k+1)$ is the number of pixels in the window, and $I_n$ $(n \in \{1, 2, \ldots, N\})$ is the brightness value of any pixel in the window.
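A sketch of this brightness contrast computation under the above definitions, using sliding-window filters from scipy.ndimage; the window parameter k, the constant c, and the small eps guarding against a logarithm of zero are illustrative assumptions, not part of the formula:

    import numpy as np
    from scipy.ndimage import maximum_filter, uniform_filter

    def brightness_contrast_map(I, k=2, c=1.0, eps=1e-6):
        # Weber-Fechner contrast: c * lg(local max / local mean) over a
        # (2k+1) x (2k+1) window centered on each pixel.
        I = I.astype(np.float64)
        w = 2 * k + 1
        local_max = maximum_filter(I, size=w)
        local_mean = uniform_filter(I, size=w)
        return c * np.log10((local_max + eps) / (local_mean + eps))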
Computation of the texture contrast map. The simplest measure, gray variance, is chosen to describe the texture roughness of an image block. The computing formula is as follows:

$$T_{CM}(x,y) = \left[\frac{1}{N-1}\sum_{n=1}^{N}\left(I_n - \frac{1}{N}\sum_{n=1}^{N} I_n\right)^2\right]^{\frac{1}{2}}$$

where $I_n$ $(n \in \{1, 2, \ldots, N\})$ is the brightness value of any pixel in the window.
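A corresponding sketch for the texture contrast; since uniform_filter yields population moments, the 1/(N-1) sample correction in the formula is applied explicitly (parameter names are assumptions):

    import numpy as np
    from scipy.ndimage import uniform_filter

    def texture_contrast_map(I, k=2):
        # Sample standard deviation of brightness in each (2k+1) x (2k+1) window.
        I = I.astype(np.float64)
        w = 2 * k + 1
        N = w * w
        mean = uniform_filter(I, size=w)
        mean_sq = uniform_filter(I * I, size=w)
        var_pop = np.maximum(mean_sq - mean * mean, 0.0)  # population variance
        return np.sqrt(var_pop * N / (N - 1))             # 1/(N-1) correction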
Computation of the color contrast map. The comparison considers the HSI color-space color difference based on visual perception. For two color values in the HSI color space, $Y_1 = (H_1, S_1, I_1)^T$ and $Y_2 = (H_2, S_2, I_2)^T$, the color difference is defined as:

$$\Delta_{HSI}(Y_1, Y_2) = \sqrt{(\Delta I)^2 + (\Delta C)^2}$$

where $\Delta I = |I_1 - I_2|$ and $\Delta C = \sqrt{S_1^2 + S_2^2 - 2 S_1 S_2 \cos\theta}$, with

$$\theta = \begin{cases} |H_1 - H_2|, & \text{if } |H_1 - H_2| \le \pi \\ 2\pi - |H_1 - H_2|, & \text{if } |H_1 - H_2| > \pi \end{cases}$$

The color contrast is computed by the following formula:

$$C_{CM}(x,y) = \frac{1}{N-1}\sum_{n=1}^{N-1}\Delta_{HSI}\big(Y(x,y), Y_n\big)$$
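A direct, unoptimized sketch of the color contrast map from these definitions, assuming the image has already been converted to HSI with H in radians (the RGB-to-HSI conversion is outside the formulas above); clipping border windows is one possible treatment the text does not specify:

    import numpy as np

    def hsi_color_difference(h1, s1, i1, h2, s2, i2):
        # Delta_HSI between two HSI color values, per the definition above.
        dh = np.abs(h1 - h2)
        theta = np.where(dh <= np.pi, dh, 2 * np.pi - dh)
        d_i = np.abs(i1 - i2)
        d_c = np.sqrt(s1**2 + s2**2 - 2 * s1 * s2 * np.cos(theta))
        return np.sqrt(d_i**2 + d_c**2)

    def color_contrast_map(H, S, I, k=2):
        # Mean Delta_HSI between each pixel and the other pixels in its window.
        rows, cols = H.shape
        N = (2 * k + 1) ** 2
        C = np.zeros((rows, cols))
        for x in range(rows):
            for y in range(cols):
                x0, x1 = max(0, x - k), min(rows, x + k + 1)
                y0, y1 = max(0, y - k), min(cols, y + k + 1)
                d = hsi_color_difference(H[x, y], S[x, y], I[x, y],
                                         H[x0:x1, y0:y1],
                                         S[x0:x1, y0:y1],
                                         I[x0:x1, y0:y1])
                C[x, y] = d.sum() / (N - 1)  # center's self-difference is zero
        return C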
2. Formation of the feature maps
In forming the feature maps, the present invention adopts an iterative interpolation summation algorithm. Starting from the lowest-resolution scale m, the algorithm interpolates upward and sums scale by scale, finally obtaining the feature map at the highest-resolution scale 1. The process of this algorithm is as follows:
for σ = 1 : m-1
% interpolate the coarser map up to the size of the next finer scale
I_CM(m+1-σ) = im_interpolation(I_CM(m+1-σ), size(I_CM(m-σ)))
% sum on the same scale
I_CM(m-σ) = im_integrate(I_CM(m+1-σ), I_CM(m-σ))
end
Here σ indexes the scale of the contrast map. This procedure is applied to the brightness, texture and color contrast maps, yielding the three corresponding feature maps.
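A runnable Python counterpart of this pseudocode, assuming cv2.resize for the interpolation step and a list maps ordered from scale 1 (finest) to scale m (coarsest); the helpers im_interpolation and im_integrate above are placeholders, realized here as resizing and addition:

    import cv2

    def iterative_interpolation_sum(maps):
        # maps[0] is scale 1 (full resolution), maps[-1] is scale m (coarsest).
        acc = maps[-1]
        for finer in reversed(maps[:-1]):
            h, w = finer.shape[:2]
            acc = cv2.resize(acc, (w, h))  # interpolate up to the next finer scale
            acc = acc + finer              # sum on the same scale
        return acc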
3. Global normalization
For the brightness, texture and color feature maps obtained in step 2, the global saliency of each pixel is computed by normalization as follows:

$$S_{SM}(x,y) = \frac{F(x,y) - \min(F_1, F_2, \ldots, F_{M\times N})}{\max(F_1, F_2, \ldots, F_{M\times N}) - \min(F_1, F_2, \ldots, F_{M\times N})} \times 255$$

In the formula, $S_{SM} \in \{I_{SM}, T_{SM}, C_{SM}\}$ denotes a saliency map, $F \in \{I_{FM}, T_{FM}, C_{FM}\}$ denotes the corresponding feature map, and $M$ and $N$ denote the width and height of $S_{SM}$.
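A minimal sketch of this min-max normalization to the 0-255 gray range (the flat-map guard is an added assumption for the degenerate case the formula leaves undefined):

    import numpy as np

    def global_normalize(F):
        # Min-max normalize a feature map to [0, 255].
        fmin, fmax = float(F.min()), float(F.max())
        if fmax == fmin:  # constant map: contrast is zero everywhere
            return np.zeros_like(F, dtype=np.float64)
        return (F - fmin) / (fmax - fmin) * 255.0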
4. Linear combination
After normalization, the three saliency maps $I_{SM}$, $T_{SM}$, $C_{SM}$ are each weighted and combined to form the final saliency map:

$$S = w_1 \times I_{SM} + w_2 \times T_{SM} + w_3 \times C_{SM}$$

where the $w_i$ denote the weights and satisfy $\sum_{i=1}^{3} w_i = 1$.
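The combination itself is then a one-liner; the equal weights w_i = 1/3 below are purely an illustrative choice satisfying the constraint, since the text leaves the weights free:

    def combine_saliency_maps(I_SM, T_SM, C_SM, w=(1/3, 1/3, 1/3)):
        # S = w1*I_SM + w2*T_SM + w3*C_SM, with sum(w) == 1.
        assert abs(sum(w) - 1.0) < 1e-9
        return w[0] * I_SM + w[1] * T_SM + w[2] * C_SM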
To demonstrate the performance of this vision model, two experiments were carried out. Simulation experiment 1 selected a large number of images and ran the salient region extraction algorithm proposed by the present invention; the experimental results are shown in Fig. 2. Fig. 2(a) is an input image, and Fig. 2(b) is the saliency map obtained with multi-scale factor m = 6. As can be seen from Fig. 2, the local contrast vision model algorithm based on multi-scale low-level features proposed by the present invention can extract accurate and complete salient regions, and the saliency of salient positions is more prominent. The multi-scale factor is not limited to a fixed range: different filtering scales can be selected according to different image sizes, and the optimal filtering scale chosen according to the experimental results. To verify whether the visual attention model proposed by the present invention conforms to human visual characteristics, an eye-movement experiment was carried out under existing vision-tracking hardware conditions; the experimental results are shown in Fig. 3. As can be seen from Fig. 3, the salient regions extracted by the visual attention model proposed by the present invention conform to the visual characteristics of the human eye.
In summary, the simulation and eye-movement experimental results show that the present invention significantly improves the accuracy and completeness of salient region extraction and conforms to the visual characteristics of the human eye. It can therefore be used for important-area detection and acquisition in intelligent monitoring.

Claims (5)

1. A salient region extraction method for intelligent monitoring, comprising the following steps:
(1) multi-scale transform: filtering the input image to obtain images of the input image at m different scales, m ∈ {4, 6, 8};
(2) local contrast feature extraction: at each scale, computing local contrast maps for the three low-level features of brightness, texture and color respectively, each local contrast map being computed with a sliding pixel window that yields the local contrast value at each position; for brightness, computing its local contrast map by a method based on the Weber-Fechner law; for texture, computing its local contrast map by the gray variance; and for color, computing its local contrast map by a color-difference method in the HSI color space based on visual perception;
(3) feature map formation: for each feature, applying an iterative interpolation summation operation to the local contrast maps at all scales to form that feature's feature map;
(4) global normalization: normalizing to compute the saliency value of each pixel in the three feature maps, obtaining three saliency maps;
(5) linear combination: linearly combining the three saliency maps to obtain the final saliency map.
2. The salient region extraction method for intelligent monitoring according to claim 1, characterized in that: saliency in the saliency map is represented by the gray level of the image; the larger the gray value at a location, the more attention it attracts and the greater its saliency; conversely, the smaller the gray value, the lower the saliency.
3. The salient region extraction method for intelligent monitoring according to claim 1, characterized in that: the filtering in step (1) uses a Gaussian pyramid model to perform multi-stage filtering of the input image.
4. The salient region extraction method for intelligent monitoring according to claim 1, characterized in that: in step (2), when computing the contrast value for the pixel at a given position, that position corresponds to the center pixel of the window; the contrast between that point and the surrounding window-sized region is computed, and the resulting value serves as the local contrast value at that point; brightness, texture and color are computed separately, yielding three local contrast maps;
the brightness local contrast map based on the Weber-Fechner law is computed as follows:

$$I_{CM}(x,y) = c\,\lg\frac{I_j^{\max}}{I_j^{\mathrm{avg}}} = c\,\lg\frac{\max\{I_1, I_2, \ldots, I_n, \ldots, I_N\}}{\frac{1}{N}\sum_{n=1}^{N} I_n}$$

where $I_{CM}(x,y)$ is the brightness contrast value at pixel $(x,y)$, $c$ is a constant, $I_j^{\max}$ and $I_j^{\mathrm{avg}}$ are respectively the maximum and mean brightness in the $j$-th window, $N = (2k+1)\times(2k+1)$ is the number of pixels in the window, and $I_n$ $(n \in \{1, 2, \ldots, N\})$ is the brightness value of any pixel in the window;
the texture local contrast map based on gray variance is computed as follows:

$$T_{CM}(x,y) = \left[\frac{1}{N-1}\sum_{n=1}^{N}\left(I_n - \frac{1}{N}\sum_{n=1}^{N} I_n\right)^2\right]^{\frac{1}{2}}$$

where $I_n$ $(n \in \{1, 2, \ldots, N\})$ is the brightness value of any pixel in the window;
the local contrast map in the HSI color space based on visual perception is computed as follows: first, for two color values in the HSI color space, $Y_1 = (H_1, S_1, I_1)^T$ and $Y_2 = (H_2, S_2, I_2)^T$, the color difference is defined as:

$$\Delta_{HSI}(Y_1, Y_2) = \sqrt{(\Delta I)^2 + (\Delta C)^2}$$

where $\Delta I = |I_1 - I_2|$ and $\Delta C = \sqrt{S_1^2 + S_2^2 - 2 S_1 S_2 \cos\theta}$, with

$$\theta = \begin{cases} |H_1 - H_2|, & \text{if } |H_1 - H_2| \le \pi \\ 2\pi - |H_1 - H_2|, & \text{if } |H_1 - H_2| > \pi \end{cases}$$

therefore the local contrast map of color is computed by the following formula:

$$C_{CM}(x,y) = \frac{1}{N-1}\sum_{n=1}^{N-1}\Delta_{HSI}\big(Y(x,y), Y_n\big).$$
5. The salient region extraction method for intelligent monitoring according to claim 1, characterized in that the feature map of each feature in step (3) is formed by:
a. starting from the scale with the lowest resolution, interpolating the local contrast map at the lower-resolution scale up to the size of the adjacent higher-resolution scale;
b. adding the interpolated local contrast map to the local contrast map at the corresponding scale;
c. repeating steps a and b iteratively until the accumulation ends at the highest-resolution scale, obtaining the feature map.
CN201010292932A 2010-09-27 2010-09-27 Significant region extraction method for intelligent monitoring Expired - Fee Related CN101944178B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201010292932A CN101944178B (en) 2010-09-27 2010-09-27 Significant region extraction method for intelligent monitoring

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201010292932A CN101944178B (en) 2010-09-27 2010-09-27 Significant region extraction method for intelligent monitoring

Publications (2)

Publication Number Publication Date
CN101944178A true CN101944178A (en) 2011-01-12
CN101944178B CN101944178B (en) 2012-10-24

Family

ID=43436164

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201010292932A Expired - Fee Related CN101944178B (en) 2010-09-27 2010-09-27 Significant region extraction method for intelligent monitoring

Country Status (1)

Country Link
CN (1) CN101944178B (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102184557A (en) * 2011-06-17 2011-09-14 电子科技大学 Salient region detection method for complex scene
CN102496024A (en) * 2011-11-25 2012-06-13 山东大学 Method for detecting incident triggered by characteristic frame in intelligent monitor
CN102521592A (en) * 2011-11-30 2012-06-27 苏州大学 Multi-feature fusion salient region extracting method based on non-clear region inhibition
CN103679716A (en) * 2013-12-05 2014-03-26 河海大学 Salient region layered extracting method based on HLS color space
CN104574366A (en) * 2014-12-18 2015-04-29 华南理工大学 Extraction method of visual saliency area based on monocular depth map
CN106021610A (en) * 2016-06-28 2016-10-12 电子科技大学 Video fingerprint extracting method based on salient region
CN108797723A (en) * 2018-03-27 2018-11-13 曹典 Intelligent flusher based on image detection
CN110633708A (en) * 2019-06-28 2019-12-31 中国人民解放军军事科学院国防科技创新研究院 Deep network significance detection method based on global model and local optimization
CN111914850A (en) * 2019-05-07 2020-11-10 百度在线网络技术(北京)有限公司 Picture feature extraction method, device, server and medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101211356A (en) * 2006-12-30 2008-07-02 中国科学院计算技术研究所 Image inquiry method based on marking area
CN101271525A (en) * 2008-04-10 2008-09-24 复旦大学 Fast image sequence characteristic remarkable picture capturing method
US20100195883A1 (en) * 2007-06-28 2010-08-05 Patriarche Julia W System and method for automatically generating sample points from a series of medical images and identifying a significant region

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101211356A (en) * 2006-12-30 2008-07-02 中国科学院计算技术研究所 Image inquiry method based on marking area
US20100195883A1 (en) * 2007-06-28 2010-08-05 Patriarche Julia W System and method for automatically generating sample points from a series of medical images and identifying a significant region
CN101271525A (en) * 2008-04-10 2008-09-24 复旦大学 Fast image sequence characteristic remarkable picture capturing method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Wang Luo et al., "Natural scene recognition based on local salient regions," Journal of Image and Graphics (《中国图象图形学报》), Vol. 13, No. 8, 2008-08-31. 2 *

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102184557A (en) * 2011-06-17 2011-09-14 电子科技大学 Salient region detection method for complex scene
CN102184557B (en) * 2011-06-17 2012-09-12 电子科技大学 Salient region detection method for complex scene
CN102496024A (en) * 2011-11-25 2012-06-13 山东大学 Method for detecting incident triggered by characteristic frame in intelligent monitor
CN102496024B (en) * 2011-11-25 2014-03-12 山东大学 Method for detecting incident triggered by characteristic frame in intelligent monitor
CN102521592A (en) * 2011-11-30 2012-06-27 苏州大学 Multi-feature fusion salient region extracting method based on non-clear region inhibition
CN102521592B (en) * 2011-11-30 2013-06-12 苏州大学 Multi-feature fusion salient region extracting method based on non-clear region inhibition
CN103679716A (en) * 2013-12-05 2014-03-26 河海大学 Salient region layered extracting method based on HLS color space
CN104574366A (en) * 2014-12-18 2015-04-29 华南理工大学 Extraction method of visual saliency area based on monocular depth map
CN104574366B (en) * 2014-12-18 2017-08-25 华南理工大学 A kind of extracting method in the vision significance region based on monocular depth figure
CN106021610A (en) * 2016-06-28 2016-10-12 电子科技大学 Video fingerprint extracting method based on salient region
CN106021610B (en) * 2016-06-28 2019-09-24 电子科技大学 A kind of method for extracting video fingerprints based on marking area
CN108797723A (en) * 2018-03-27 2018-11-13 曹典 Intelligent flusher based on image detection
CN111914850A (en) * 2019-05-07 2020-11-10 百度在线网络技术(北京)有限公司 Picture feature extraction method, device, server and medium
CN111914850B (en) * 2019-05-07 2023-09-19 百度在线网络技术(北京)有限公司 Picture feature extraction method, device, server and medium
CN110633708A (en) * 2019-06-28 2019-12-31 中国人民解放军军事科学院国防科技创新研究院 Deep network significance detection method based on global model and local optimization

Also Published As

Publication number Publication date
CN101944178B (en) 2012-10-24

Similar Documents

Publication Publication Date Title
CN101944178B (en) Significant region extraction method for intelligent monitoring
CN111080629B (en) Method for detecting image splicing tampering
CN106874894B (en) Human body target detection method based on regional full convolution neural network
Wang et al. Improved human detection and classification in thermal images
Sirmacek et al. Urban-area and building detection using SIFT keypoints and graph theory
Zhang et al. Region of interest extraction in remote sensing images by saliency analysis with the normal directional lifting wavelet transform
Tso et al. A contextual classification scheme based on MRF model with improved parameter estimation and multiscale fuzzy line process
CN110163188B (en) Video processing and method, device and equipment for embedding target object in video
CN104616032A (en) Multi-camera system target matching method based on deep-convolution neural network
CN102495998B (en) Static object detection method based on visual selective attention computation module
CN104299009B (en) License plate character recognition method based on multi-feature fusion
CN104598933A (en) Multi-feature fusion based image copying detection method
CN110929593A (en) Real-time significance pedestrian detection method based on detail distinguishing and distinguishing
Wang et al. Combined use of FCN and Harris corner detection for counting wheat ears in field conditions
Seo et al. Visual saliency for automatic target detection, boundary detection, and image quality assessment
Yang et al. DSG-fusion: Infrared and visible image fusion via generative adversarial networks and guided filter
CN105512622A (en) Visible remote-sensing image sea-land segmentation method based on image segmentation and supervised learning
CN104008404B (en) Pedestrian detection method and system based on significant histogram features
CN106056078A (en) Crowd density estimation method based on multi-feature regression ensemble learning
CN109977862A (en) A kind of recognition methods of parking stall limiter
Soni et al. Road network extraction using multi-layered filtering and tensor voting from aerial images
CN116310868A (en) Multi-level attention interaction cloud and snow identification method, equipment and storage medium
CN104615985B (en) A kind of recognition methods of human face similarity degree
Chen A feature preserving adaptive smoothing method for early vision
CN116935200B (en) Audit-oriented image tampering detection method, system, equipment and storage medium

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20121024

Termination date: 20150927

EXPY Termination of patent right or utility model