CN102800086A - Offshore scene significance detection method - Google Patents

Offshore scene significance detection method

Info

Publication number
CN102800086A
CN102800086A
Authority
CN
China
Prior art keywords
saliency map
frame
saliency
significance
color
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2012102072715A
Other languages
Chinese (zh)
Other versions
CN102800086B (en
Inventor
任蕾 (Ren Lei)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Maritime University
Original Assignee
Shanghai Maritime University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Maritime University filed Critical Shanghai Maritime University
Priority to CN201210207271.5A priority Critical patent/CN102800086B/en
Publication of CN102800086A publication Critical patent/CN102800086A/en
Application granted granted Critical
Publication of CN102800086B publication Critical patent/CN102800086B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses an offshore scene saliency detection method. The method comprises the following steps: 1, extracting an offshore scene image sequence; 2, converting each frame to the CIELab color space and extracting feature maps of the luminance and color channels; 3, taking the absolute value of the difference between each extracted feature and its global median as the global saliency map; 4, taking the absolute value of the difference between each feature and its local-mean-filtered version as the local saliency map; 5, combining the global and local saliency maps of each feature to obtain an overall saliency map for that feature; 6, linearly combining the saliency maps of the color channels of each frame and fusing the result with the luminance saliency map to form the total saliency map; 7, accumulating the saliency maps of a window of frames centered on each frame to correct the saliency map of the current frame; and 8, converting the total saliency map into a binary image to obtain the salient target regions of the offshore scene. The method rapidly extracts salient regions in offshore scenes and effectively suppresses sea-clutter interference. It is simple to implement and suitable for real-time application.

Description

Marine scene significance detection method
Technical Field
The invention relates to detection technology in the fields of machine vision and image processing, and in particular to a method for detecting salient regions of an offshore scene using image processing and machine vision techniques.
Background
At present, most visual attention computation models at home and abroad perform saliency detection directly in the spatial domain of the image to extract a saliency map. Compared with approaches based on the image spectrum, such methods require no Fourier transform, discrete cosine transform, or other orthogonal transform of the image.
The core of spatial-domain saliency detection is how saliency is defined. The methods proposed so far mainly measure saliency by feature differences between pixels in the spatial domain; commonly used features include brightness, color, orientation, and texture. Many scholars have also introduced information theory, graph theory, Bayesian theory, and the like into the saliency computation, achieving good results on natural scenes. To address the low resolution of the saliency maps produced by frequency-domain saliency detection, Achanta et al. proposed the frequency-tuned saliency detection method (FrT), computed at the original image size, which defines saliency as the difference between each feature's mean value over the image in the CIELab color space and its Gaussian-filtered value. The method is simple to implement, can extract complete salient targets, and has good internal consistency.
Some domestic scholars have made preliminary studies of ship detection in marine scenes from visible-light images using a visual attention mechanism. One study proposes a ship-detection visual attention model based on the HSI (Hue, Saturation, Intensity) color space: a feature map of each of the three components is obtained by multi-scale difference computation, and the feature maps are then fused linearly into a saliency map. Wuqi et al. introduce a visual attention mechanism into a real-time monitoring and tracking system for moving targets at sea and propose an iterative linear low-pass filtering method based on a small inverted-triangle template, which quickly smooths and denoises the coarse-resolution image and highlights the target. Wuqi et al. also propose a fast detection method for moving targets at sea based on visible-light image sequences, which first segments regions of interest (ROIs) in the static image with a visual attention model and then applies an improved temporal-difference method only within those regions to detect moving targets.
However, these methods have limitations. First, they are aimed at detecting salient targets in natural or onshore scenes, where targets are mostly large; moreover, because of their computational complexity, the original image usually has to be down-sampled before the saliency computation. For large salient targets, down-sampling does not lose much target information, so the detection results are good. Marine scenes are different: many maritime targets are small, even point targets, and they are scattered across the scene. Applying existing spatial-domain saliency detection methods directly therefore gives unsatisfactory results, especially for small targets; the essential reason is that down-sampling loses too much small-target information. The frequency-tuned saliency detection method is simple to implement, but its model treats only global contrast as saliency; applied directly to marine scenes, where the features of a large amount of sea clutter lie far above the feature mean (i.e., the clutter's global contrast is close to the target's), it highlights the target but also retains a large amount of clutter. In addition, the existing visual attention models for marine scenes borrow Itti's visual attention computation model, which is relatively complex to implement.
In summary, related work has addressed saliency detection in onshore or natural scenes, but because marine scenes contain a large amount of sea clutter and maritime targets are mostly small, the existing methods perform poorly there. In view of these shortcomings of the prior art, a new marine scene saliency detection method is provided to solve the above problems.
Disclosure of Invention
The invention provides a method for detecting the significance of an offshore scene, which utilizes the characteristics of an offshore scene image to extract a significant region in a CIELab color space.
In order to achieve the purpose, the invention provides a method for detecting the significance of an offshore scene, which is characterized by comprising the following steps:
step 1, extracting an image sequence of a marine scene;
step 2, converting each frame of image of the marine scene image from RGB color space to CIELab color space, and extracting the brightness and two color channels as basic features to obtain a feature map of the brightness and the two color channels;
step 3, for every frame of the marine scene image sequence, taking the absolute value of the difference between each feature (the luminance and the two color features) and its global median as the global saliency map of that feature;
setting the i-th frame of the input image sequence as F_i, where L_i is the luminance feature and a_i and b_i are the two color features, and the i-th frame is any one frame of the marine scene image sequence;
for the luminance and the two color features, the global medians are calculated respectively:
L_im = median(L_i) (1)
a_im = median(a_i) (2)
b_im = median(b_i) (3)
wherein L_im is the global median of the luminance feature, and a_im and b_im are the global medians of the two color features;
then, the global saliency map of each feature is computed:
S_Lg = |L_i - L_im| (4)
S_ag = |a_i - a_im| (5)
S_bg = |b_i - b_im| (6)
wherein |·| denotes the absolute value, S_Lg is the global saliency map of the luminance feature, and S_ag and S_bg are the global saliency maps of the two color features;
step 4, for every frame, taking the absolute value of the difference between each feature (the luminance and the two color features) and its local-mean-filtered version as the local saliency map:
S_Ll = |L_i - f * L_i| (7)
S_al = |a_i - f * a_i| (8)
S_bl = |b_i - f * b_i| (9)
wherein f is a local mean (averaging) template, the symbol * denotes the spatial-domain convolution operation, S_Ll is the local saliency map of the luminance feature, and S_al and S_bl are the local saliency maps of the two color features;
step 5, for every frame, combining the global saliency map and the local saliency map of the luminance and of each of the two color features to obtain an overall saliency map for each of the three features:
S_L = S_Lg · S_Ll (10)
S_a = S_ag · S_al (11)
S_b = S_bg · S_bl (12)
wherein the products are taken point-wise, S_L is the overall saliency map of the luminance feature, and S_a and S_b are the overall saliency maps of the two color features;
step 6, for every frame, linearly combining the saliency maps of the two color channels, and combining the result with the luminance saliency map to form the total saliency map;
in each frame, the color-channel saliency map obtained by linearly merging the saliency maps of the two color channels is:
S_c = (S_a + S_b) / 2 (13)
wherein S_c is the color-channel saliency map;
the color-channel saliency map is fused with the luminance saliency map to obtain the total saliency map of this frame of the marine scene:
S_i = S_L + S_c (14)
wherein S_i is the total saliency map of the i-th frame of the input image sequence;
and 7, taking the detection result of each frame as the center of a time window of fixed length, accumulating the saliency maps of the n frames within the window, and correcting the saliency map of the current frame:
S~_i = N( sum_{j = i-(n-1)/2}^{i+(n-1)/2} S_j ) (15)
wherein n is the length of the time window and N(·) is the normalization operator;
and 8, converting the total saliency map into a binary image according to a set threshold value to obtain a sea scene saliency target area.
In the step 7, n may be 5, 7, 9 or 11.
The threshold value in the step 8 is a normalized threshold value, and the value range thereof is 0.2 to 0.5.
Compared with the prior art, the marine scene saliency detection method of the invention rapidly extracts the salient regions in a marine scene, facilitates target detection in marine scenes, and suppresses sea-clutter interference well; it can still detect small salient maritime targets when the small targets in some frames are submerged by waves or when strong sea-clutter interference occurs. The method is simple to implement, suitable for real-time application, and can provide a machine-vision aid for maritime monitoring personnel.
Drawings
Fig. 1 is a flowchart of the marine scene saliency detection method.
Detailed Description
The following further describes specific embodiments of the present invention with reference to the drawings.
The invention discloses a marine scene saliency detection method, implemented in the CIELab color space using the spatial-domain characteristics of marine scene images. The method fuses global and local saliency maps of the luminance and color channels of the marine scene image to obtain the salient regions and highlight the target area. In addition, to enhance the salient regions and better remove sea clutter, a simple inter-frame saliency accumulation method is adopted, accumulating the saliency maps of multiple frames.
The method mainly comprises two parts: the saliency computation for each frame of the marine scene image, and the inter-frame saliency map accumulation.
The invention can be applied to the fields of maritime disaster search and rescue, maritime patrol, video-based ship collision avoidance, anti-pirate monitoring, on-duty observation and the like, and can provide comprehensive visual information for maritime traffic safety and the like by combining with infrared, remote sensing and radar imaging technologies.
As shown in fig. 1, the method for detecting the significance of the offshore scene comprises the following steps:
step 1, acquiring an original marine image sequence by using video acquisition equipment (such as a camera and the like), and extracting a marine scene image sequence.
And 2, converting each frame of image of the marine scene image from an RGB color space to a CIELab color space, extracting a brightness channel L and two color channels a and b of the image as basic features, and obtaining a feature map of the brightness and the two color channels.
The CIELab color space is used because the channels of the RGB color space are highly correlated; in CIELab, the luminance and color features of the visual scene are better separated. The CIELab color space contains only two color channels, a and b.
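For concreteness, the step-2 conversion can be sketched in pure NumPy. Libraries such as OpenCV or scikit-image provide equivalent conversions; the function below (standard sRGB gamma, D65 white point) is only an illustrative implementation, not the patent's code:

```python
import numpy as np

def rgb_to_lab(rgb):
    """Convert an sRGB image (floats in [0, 1], shape HxWx3) to CIELab."""
    rgb = np.asarray(rgb, dtype=np.float64)
    # sRGB -> linear RGB (undo gamma)
    lin = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    # linear RGB -> XYZ (sRGB primaries, D65 reference white)
    m = np.array([[0.4124564, 0.3575761, 0.1804375],
                  [0.2126729, 0.7151522, 0.0721750],
                  [0.0193339, 0.1191920, 0.9503041]])
    xyz = lin @ m.T
    xyz /= np.array([0.95047, 1.0, 1.08883])  # normalize by the white point
    # XYZ -> Lab
    eps = 216 / 24389
    kappa = 24389 / 27
    f = np.where(xyz > eps, np.cbrt(xyz), (kappa * xyz + 16) / 116)
    L = 116 * f[..., 1] - 16
    a = 500 * (f[..., 0] - f[..., 1])
    b = 200 * (f[..., 1] - f[..., 2])
    return np.stack([L, a, b], axis=-1)
```

The L, a, and b planes returned here are the three feature maps used in the following steps.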
And 3, for every frame of the marine scene image sequence, taking the absolute value of the difference between each feature (the luminance and the two color features) and its global median as the global saliency map.
Taking one frame as an example, let the i-th frame of the input image sequence have luminance feature L_i and color features a_i and b_i.
For the luminance and the two color features, the global medians are calculated respectively:
L_im = median(L_i) (1)
a_im = median(a_i) (2)
b_im = median(b_i) (3)
wherein L_im is the global median of the luminance feature, and a_im and b_im are the global medians of the two color features.
The global saliency map of each feature is then calculated using the following formulas:
S_Lg = |L_i - L_im| (4)
S_ag = |a_i - a_im| (5)
S_bg = |b_i - b_im| (6)
wherein |·| denotes the absolute value, S_Lg is the global saliency map of the luminance feature, and S_ag and S_bg are the global saliency maps of the two color features.
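The global saliency of Eqs. (1)-(6) amounts to one line per feature channel; a minimal NumPy sketch (the function name is ours):

```python
import numpy as np

def global_saliency(feature):
    """Global saliency map of one feature channel: |feature - global median|.

    Implements Eqs. (1)-(6): the global median plays the role of L_im,
    a_im, or b_im depending on which channel (L, a, or b) is passed in.
    """
    return np.abs(feature - np.median(feature))
```

Applied to each of the three channels in turn, this yields S_Lg, S_ag, and S_bg.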
And 4, for every frame, taking the absolute value of the difference between each feature (the luminance and the two color features) and its local-mean-filtered version as the local saliency map.
Taking the i-th frame as an example, the local saliency maps are:
S_Ll = |L_i - f * L_i| (7)
S_al = |a_i - f * a_i| (8)
S_bl = |b_i - f * b_i| (9)
wherein f is a local mean (averaging) template. The symbol * denotes the spatial-domain convolution operation. S_Ll is the local saliency map of the luminance feature, and S_al and S_bl are the local saliency maps of the two color features.
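The local saliency of Eqs. (7)-(9) can be sketched as follows. The exact averaging template f is not unambiguous here, so the k x k box filter with edge padding is an assumption:

```python
import numpy as np

def local_saliency(feature, k=3):
    """Local saliency map: |feature - local-mean-filtered feature|.

    Implements Eqs. (7)-(9) under the assumption that f is a k x k
    box (averaging) filter applied with edge padding.
    """
    pad = k // 2
    padded = np.pad(feature, pad, mode='edge')
    h, w = feature.shape
    # k x k box filter realized as a sum of shifted views of the padded image
    smoothed = sum(padded[dy:dy + h, dx:dx + w]
                   for dy in range(k) for dx in range(k)) / (k * k)
    return np.abs(feature - smoothed)
```

A flat region yields zero local saliency, while an isolated bright pixel retains most of its amplitude, which is what makes this map responsive to small targets.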
And step 5, for every frame, combining the global saliency map and the local saliency map of the luminance and of each of the two color features to obtain an overall saliency map for each of the three features.
Taking the i-th frame as an example, the overall saliency maps of the luminance and the two color features are:
S_L = S_Lg · S_Ll (10)
S_a = S_ag · S_al (11)
S_b = S_bg · S_bl (12)
wherein the products are taken point-wise, S_L is the overall saliency map of the luminance feature, and S_a and S_b are the overall saliency maps of the two color features.
And 6, for every frame, linearly combining the saliency maps of the two color channels, and combining the result with the luminance saliency map to form the total saliency map.
Taking the i-th frame as an example, the color-channel saliency map obtained by linearly merging the saliency maps of the two color channels is:
S_c = (S_a + S_b) / 2 (13)
wherein S_c is the color-channel saliency map.
The result of formula (13) is fused with the luminance saliency map to finally obtain the total saliency map of this frame of the marine scene:
S_i = S_L + S_c (14)
wherein S_i is the total saliency map of the i-th frame of the input image sequence.
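Steps 3-6 for a single frame can be gathered into one routine. The exact fusion operators of Eqs. (10)-(14) are not unambiguous here, so the choices below (point-wise product of global and local maps, mean of the two color maps, sum of the luminance and color maps) are assumptions:

```python
import numpy as np

def frame_saliency(L, a, b):
    """Total saliency map of one frame (steps 3-6, Eqs. 10-14), as a sketch."""
    def glob(f):
        # global saliency, Eqs. (4)-(6): |feature - global median|
        return np.abs(f - np.median(f))

    def loc(f, k=3):
        # local saliency, Eqs. (7)-(9): |feature - k x k box-filtered feature|
        pad = k // 2
        p = np.pad(f, pad, mode='edge')
        h, w = f.shape
        s = sum(p[dy:dy + h, dx:dx + w]
                for dy in range(k) for dx in range(k)) / (k * k)
        return np.abs(f - s)

    S_L = glob(L) * loc(L)   # Eq. (10), point-wise product assumed
    S_a = glob(a) * loc(a)   # Eq. (11)
    S_b = glob(b) * loc(b)   # Eq. (12)
    S_c = (S_a + S_b) / 2    # Eq. (13), linear merge of the color maps
    return S_L + S_c         # Eq. (14), fuse color and luminance
```

The product in Eqs. (10)-(12) keeps only regions that stand out both globally and locally, which fits the patent's goal of suppressing sea clutter; a weighted sum would be an equally plausible reading.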
And 7, correcting the saliency map of each frame by accumulating the saliency maps of the frames before and after it, with the detection result of each frame as the center.
The invention uses a time window of fixed length: all saliency maps of the frames within the window are accumulated to correct the saliency map of the current frame, so as to enhance the target and suppress the sea clutter.
That is, the corrected saliency map of the i-th frame is:
S~_i = N( sum_{j = i-(n-1)/2}^{i+(n-1)/2} S_j ) (15)
wherein n is the length of the time window. The value of n is typically an odd number, for example 5, 7, 9 or 11; it should be neither too large nor too small, and is related to the frame rate of the video acquisition. In the present embodiment n is taken as 7. N(·) is the normalization operator, whose purpose is to unify the gray values of the saliency maps for the subsequent accumulation.
Specifically, for the first 3 frames and the last 3 frames of the image sequence, the 7 consecutive saliency maps that include the frame are accumulated.
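Eq. (15), together with the boundary handling for the first and last frames, can be sketched as a clamped sliding window. The normalization operator N is taken here as division by the maximum, which is an assumption:

```python
import numpy as np

def accumulate(saliency_maps, n=7):
    """Inter-frame saliency accumulation (step 7, Eq. 15), as a sketch.

    Each frame's map is replaced by the normalized sum of the n maps of a
    window centered on it; near the ends of the sequence the window is
    clamped so that n consecutive frames including the frame are used.
    """
    def norm(m):
        peak = m.max()
        return m / peak if peak > 0 else m

    maps = [norm(m) for m in saliency_maps]  # unify gray levels first
    half = n // 2
    num = len(maps)
    out = []
    for i in range(num):
        # clamp the window start so the window stays inside the sequence
        start = min(max(i - half, 0), max(num - n, 0))
        out.append(norm(sum(maps[start:start + n])))
    return out
```

With n = 7 this reproduces the embodiment's behavior: the first 3 and last 3 frames simply reuse the nearest full 7-frame window.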
And 8, converting the total saliency map into a binary image according to a set threshold to obtain the salient target regions of the marine scene. The threshold is a normalized threshold whose value lies between 0.2 and 0.5; it is an empirical value.
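Step 8 reduces to a normalized threshold; 0.3 below is just one illustrative value inside the stated 0.2-0.5 range:

```python
import numpy as np

def binarize(saliency, threshold=0.3):
    """Step 8: threshold the normalized total saliency map.

    The threshold is a normalized value; the patent states the empirical
    range 0.2-0.5.
    """
    s = saliency.astype(np.float64)
    if s.max() > 0:
        s = s / s.max()   # normalize to [0, 1]
    return (s >= threshold).astype(np.uint8)
```

The nonzero pixels of the returned mask are the salient target regions of the marine scene.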
While the present invention has been described in detail with reference to the preferred embodiments, it should be understood that the above description should not be taken as limiting the invention. Various modifications and alterations to this invention will become apparent to those skilled in the art upon reading the foregoing description. Accordingly, the scope of the invention should be determined from the following claims.

Claims (3)

1. A sea scene significance detection method is characterized by comprising the following steps:
step 1, extracting an image sequence of a marine scene;
step 2, converting each frame of image of the marine scene image from RGB color space to CIELab color space, and extracting the brightness and two color channels as basic features to obtain a feature map of the brightness and the two color channels;
step 3, for every frame of the marine scene image sequence, taking the absolute value of the difference between each feature (the luminance and the two color features) and its global median as the global saliency map of that feature;
setting the i-th frame of the input image sequence as F_i, where L_i is the luminance feature and a_i and b_i are the two color features, and the i-th frame is any one frame of the marine scene image sequence;
for the luminance and the two color features, the global medians are calculated respectively:
L_im = median(L_i) (1)
a_im = median(a_i) (2)
b_im = median(b_i) (3)
wherein L_im is the global median of the luminance feature, and a_im and b_im are the global medians of the two color features;
then, the global saliency map of each feature is computed:
S_Lg = |L_i - L_im| (4)
S_ag = |a_i - a_im| (5)
S_bg = |b_i - b_im| (6)
wherein |·| denotes the absolute value, S_Lg is the global saliency map of the luminance feature, and S_ag and S_bg are the global saliency maps of the two color features;
step 4, for every frame, taking the absolute value of the difference between each feature (the luminance and the two color features) and its local-mean-filtered version as the local saliency map:
S_Ll = |L_i - f * L_i| (7)
S_al = |a_i - f * a_i| (8)
S_bl = |b_i - f * b_i| (9)
wherein f is a local mean (averaging) template, the symbol * denotes the spatial-domain convolution operation, S_Ll is the local saliency map of the luminance feature, and S_al and S_bl are the local saliency maps of the two color features;
step 5, for every frame, combining the global saliency map and the local saliency map of the luminance and of each of the two color features to obtain an overall saliency map for each of the three features:
S_L = S_Lg · S_Ll (10)
S_a = S_ag · S_al (11)
S_b = S_bg · S_bl (12)
wherein the products are taken point-wise, S_L is the overall saliency map of the luminance feature, and S_a and S_b are the overall saliency maps of the two color features;
step 6, for every frame, linearly combining the saliency maps of the two color channels, and combining the result with the luminance saliency map to form the total saliency map;
in each frame, the color-channel saliency map obtained by linearly merging the saliency maps of the two color channels is:
S_c = (S_a + S_b) / 2 (13)
wherein S_c is the color-channel saliency map;
the color-channel saliency map is fused with the luminance saliency map to obtain the total saliency map of this frame of the marine scene:
S_i = S_L + S_c (14)
wherein S_i is the total saliency map of the i-th frame of the input image sequence;
and 7, taking the detection result of each frame as the center of a time window of fixed length, accumulating the saliency maps of the n frames within the window, and correcting the saliency map of the current frame:
S~_i = N( sum_{j = i-(n-1)/2}^{i+(n-1)/2} S_j ) (15)
wherein n is the length of the time window and N(·) is the normalization operator;
and 8, converting the total saliency map into a binary image according to a set threshold value to obtain a sea scene saliency target area.
2. A method for detecting the significance of an offshore scene as recited in claim 1, wherein in step 7, n can be 5, 7, 9 or 11.
3. The method for detecting the significance of the offshore scene according to claim 1, wherein the threshold in the step 8 is a normalized threshold, and the value range of the normalized threshold is 0.2 to 0.5.
CN201210207271.5A 2012-06-21 2012-06-21 Offshore scene significance detection method Expired - Fee Related CN102800086B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210207271.5A CN102800086B (en) 2012-06-21 2012-06-21 Offshore scene significance detection method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201210207271.5A CN102800086B (en) 2012-06-21 2012-06-21 Offshore scene significance detection method

Publications (2)

Publication Number Publication Date
CN102800086A true CN102800086A (en) 2012-11-28
CN102800086B CN102800086B (en) 2015-02-04

Family

ID=47199184

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210207271.5A Expired - Fee Related CN102800086B (en) 2012-06-21 2012-06-21 Offshore scene significance detection method

Country Status (1)

Country Link
CN (1) CN102800086B (en)

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103020965A (en) * 2012-11-29 2013-04-03 奇瑞汽车股份有限公司 Foreground segmentation method based on significance detection
CN103268586A (en) * 2013-04-27 2013-08-28 电子科技大学 Window fusion method based on heat diffusion theory
CN103337075A (en) * 2013-06-20 2013-10-02 浙江大学 Image significant degree calculation method based on isolux curve
CN103413320A (en) * 2013-08-30 2013-11-27 上海海事大学 Port contaminant saliency detection method
CN103413127A (en) * 2013-09-10 2013-11-27 上海海事大学 Marine target significance detection method based on spectrum singular value decomposition
CN105893957A (en) * 2016-03-30 2016-08-24 上海交通大学 Method for recognizing and tracking ships on lake surface on the basis of vision
CN104050674B (en) * 2014-06-27 2017-01-25 中国科学院自动化研究所 Salient region detection method and device
CN106991682A (en) * 2016-01-21 2017-07-28 深圳中兴力维技术有限公司 The extracting method and device of automatic harbour freighter
CN107169516A (en) * 2017-05-11 2017-09-15 上海海事大学 The marine Small object conspicuousness detection method converted based on K L
CN107862262A (en) * 2017-10-27 2018-03-30 中国航空无线电电子研究所 A kind of quick visible images Ship Detection suitable for high altitude surveillance
CN107886533A (en) * 2017-10-26 2018-04-06 深圳大学 Vision significance detection method, device, equipment and the storage medium of stereo-picture
CN107967474A (en) * 2017-11-24 2018-04-27 上海海事大学 A kind of sea-surface target conspicuousness detection method based on convolutional neural networks
CN108009984A (en) * 2017-11-21 2018-05-08 中国地质大学(武汉) A kind of detection method of water surface salient region towards monitoring water environment
CN108229342A (en) * 2017-12-18 2018-06-29 西南技术物理研究所 A kind of surface vessel target automatic testing method
CN108334865A (en) * 2018-03-12 2018-07-27 王艳 big data analysis platform based on image
CN109188421A (en) * 2018-07-25 2019-01-11 江苏科技大学 A kind of maritime search and rescue system and method for unmanned rescue boat

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101976439A (en) * 2010-11-02 2011-02-16 上海海事大学 Visual attention model with combination of motion information in visual system of maritime search and rescue machine
CN102087744A (en) * 2010-02-25 2011-06-08 上海海事大学 Structure tensor method for quick detection of small video target under dynamic ocean background
CN102156881A (en) * 2011-04-13 2011-08-17 上海海事大学 Method for detecting salvage target based on multi-scale image phase information

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102087744A (en) * 2010-02-25 2011-06-08 上海海事大学 Structure tensor method for quick detection of small video target under dynamic ocean background
CN101976439A (en) * 2010-11-02 2011-02-16 上海海事大学 Visual attention model with combination of motion information in visual system of maritime search and rescue machine
CN102156881A (en) * 2011-04-13 2011-08-17 上海海事大学 Method for detecting salvage target based on multi-scale image phase information

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LEI REN: "TARGET DETECTION IN MARITIME SEARCH AND RESCUE USING SVD AND FREQUENCY DOMAIN CHARACTERISTICS", 《PROCEEDINGS OF THE 2011 INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND CYBERNETICS》, 13 July 2011 (2011-07-13), pages 556 - 560, XP031966282, DOI: doi:10.1109/ICMLC.2011.6016763 *
REN Lei (任蕾): "Maritime small-target detection method using improved frequency tuning", Journal of Image and Graphics (《中国图象图形学报》), vol. 17, no. 3, 31 March 2012 (2012-03-31), pages 365 - 369 *

Cited By (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103020965A (en) * 2012-11-29 2013-04-03 奇瑞汽车股份有限公司 Foreground segmentation method based on significance detection
CN103020965B (en) * 2012-11-29 2016-12-21 奇瑞汽车股份有限公司 Foreground segmentation method based on significance detection
CN103268586A (en) * 2013-04-27 2013-08-28 电子科技大学 Window fusion method based on heat diffusion theory
CN103268586B (en) * 2013-04-27 2015-11-18 电子科技大学 Window fusion method based on heat diffusion theory
CN103337075A (en) * 2013-06-20 2013-10-02 浙江大学 Image significance calculation method based on isolux curves
CN103337075B (en) * 2013-06-20 2016-04-27 浙江大学 Image significance calculation method based on isophotes
CN103413320A (en) * 2013-08-30 2013-11-27 上海海事大学 Port contaminant saliency detection method
CN103413320B (en) * 2013-08-30 2016-12-28 上海海事大学 Port contaminant saliency detection method
CN103413127A (en) * 2013-09-10 2013-11-27 上海海事大学 Marine target significance detection method based on spectrum singular value decomposition
CN103413127B (en) * 2013-09-10 2016-06-08 上海海事大学 Marine target significance detection method based on spectrum singular value decomposition
CN104050674B (en) * 2014-06-27 2017-01-25 中国科学院自动化研究所 Salient region detection method and device
CN106991682A (en) * 2016-01-21 2017-07-28 深圳中兴力维技术有限公司 Automatic port cargo ship extraction method and device
CN106991682B (en) * 2016-01-21 2019-12-20 深圳力维智联技术有限公司 Automatic port cargo ship extraction method and device
CN105893957B (en) * 2016-03-30 2019-03-22 上海交通大学 Vision-based lake surface ship detection, recognition and tracking method
CN105893957A (en) * 2016-03-30 2016-08-24 上海交通大学 Vision-based lake surface ship detection, recognition and tracking method
CN107169516A (en) * 2017-05-11 2017-09-15 上海海事大学 Marine small target significance detection method based on K-L transformation
CN107169516B (en) * 2017-05-11 2020-10-23 上海海事大学 Marine small target significance detection method based on K-L transformation
CN107886533A (en) * 2017-10-26 2018-04-06 深圳大学 Visual significance detection method, device, equipment and storage medium for stereoscopic images
CN107862262A (en) * 2017-10-27 2018-03-30 中国航空无线电电子研究所 Fast visible-light image ship detection method suitable for high-altitude surveillance
CN108009984A (en) * 2017-11-21 2018-05-08 中国地质大学(武汉) Water surface saliency area detection method for water environment monitoring
CN108009984B (en) * 2017-11-21 2020-07-07 中国地质大学(武汉) Water environment monitoring-oriented water surface saliency area detection method
CN107967474A (en) * 2017-11-24 2018-04-27 上海海事大学 Sea-surface target significance detection method based on convolutional neural networks
CN108229342A (en) * 2017-12-18 2018-06-29 西南技术物理研究所 Automatic sea surface ship target detection method
CN108229342B (en) * 2017-12-18 2021-10-26 西南技术物理研究所 Automatic sea surface ship target detection method
CN108334865A (en) * 2018-03-12 2018-07-27 王艳 Image-based big data analysis platform
CN109188421A (en) * 2018-07-25 2019-01-11 江苏科技大学 Maritime search and rescue system and method for an unmanned search and rescue boat
CN109188421B (en) * 2018-07-25 2023-07-04 江苏科技大学 Maritime search and rescue system and method for unmanned search and rescue boat

Also Published As

Publication number Publication date
CN102800086B (en) 2015-02-04

Similar Documents

Publication Publication Date Title
CN102800086B (en) Offshore scene significance detection method
Guo et al. Lightweight deep network-enabled real-time low-visibility enhancement for promoting vessel detection in maritime video surveillance
CN104683767B (en) Fog-penetrating image generation method and device
CN103369209B (en) Video noise reduction device and method
CN102098440B (en) Electronic image stabilizing method and electronic image stabilizing system aiming at moving object detection under camera shake
CN108122213A (en) Soft-image enhancement method based on YCrCb
CN102271254B (en) Depth image preprocessing method
Song et al. An adaptive pansharpening method by using weighted least squares filter
CN106846289A (en) Infrared light-intensity and polarization image fusion method based on saliency migration and detail classification
CN107403134B (en) Local gradient trilateral-based image domain multi-scale infrared dim target detection method
CN104021532A (en) Image detail enhancement method for infrared image
CN110246088B (en) Image brightness noise reduction method based on wavelet transformation and image noise reduction system thereof
Liu et al. PCA-based sea-ice image fusion of optical data by HIS transform and SAR data by wavelet transform
CN104504652A (en) Image denoising method capable of quickly and effectively retaining edge and directional characteristics
CN102005037A (en) Multimodality image fusion method combining multi-scale bilateral filtering and direction filtering
Xue et al. Motion robust rain detection and removal from videos
Zhang et al. Single image dehazing based on fast wavelet transform with weighted image fusion
Huang et al. Recognition and detection technology of ice-covered insulators under complex environment
Lawgaly et al. Image sharpening for efficient source camera identification based on sensor pattern noise estimation
Liu et al. Infrared and visible image fusion based on region of interest detection and nonsubsampled contourlet transform
Zhang et al. EV-fusion: A novel infrared and low-light color visible image fusion network integrating unsupervised visible image enhancement
Liu et al. Removing rain from single image based on details preservation and background enhancement
Zhang et al. Multisensor Infrared and Visible Image Fusion via Double Joint Edge Preservation Filter and Nonglobally Saliency Gradient Operator
CN111147815A (en) Video monitoring system
Ma et al. Video image clarity algorithm research of USV visual system under the sea fog

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20150204

Termination date: 20180621