CN109460764B - Satellite video ship monitoring method combining brightness characteristics and improved interframe difference method - Google Patents


Info

Publication number
CN109460764B
Authority
CN
China
Prior art keywords
target
ship
frame
image
video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811324612.0A
Other languages
Chinese (zh)
Other versions
CN109460764A (en)
Inventor
尹芝勇
汤玉奇
朱紫薇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Central South University
Original Assignee
Central South University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Central South University filed Critical Central South University
Priority to CN201811324612.0A priority Critical patent/CN109460764B/en
Publication of CN109460764A publication Critical patent/CN109460764A/en
Application granted granted Critical
Publication of CN109460764B publication Critical patent/CN109460764B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/255 Detecting or recognising potential candidate objects based on visual cues, e.g. shapes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/207 Analysis of motion for motion estimation over a hierarchy of resolutions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30232 Surveillance

Abstract

The invention belongs to the field of satellite remote sensing and discloses a satellite video ship monitoring method combining brightness features with an improved interframe difference method, comprising the following steps: (1) acquiring single-frame potential targets of the satellite video: extracting bright targets from a video frame by differential morphological profile reconstruction, removing vegetation noise on that basis, and obtaining the potential ship targets in the video frame; (2) distinguishing ship motion states at frame intervals: performing difference operations on different video frames by the improved interframe difference method, and extracting the dynamic ship targets from the potential ship targets; (3) tracking dynamic ship trajectories in the satellite video: tracking the trajectory of each dynamic ship target with an adaptive color model. The invention improves the interframe difference algorithm, can identify ship targets and their motion states with a small computational load even when the background changes, and tracks their motion trajectories with the adaptive color model.

Description

Satellite video ship monitoring method combining brightness characteristics and improved interframe difference method
Technical Field
The invention relates to a satellite video ship monitoring method combining brightness characteristics and an improved interframe difference method.
Background
With the development of remote sensing satellite technology, new high-resolution satellites are moving from image acquisition to video acquisition, and the application of satellite video imagery has become a popular research topic in the remote sensing field. Because video satellite technology is still young, existing research focuses mainly on moving-vehicle detection and land-cover classification. Current moving-object detection algorithms mainly comprise the background subtraction method, the optical flow method, and the interframe difference method, but all three have shortcomings when applied to video satellite data.
Although background subtraction is simple, it produces large errors when the scene contains background changes such as shadows or illumination, or when noise is strong; in particular, when the background itself moves, background subtraction detects a large number of false targets. The optical flow method suffers from heavy computation, which makes it ill-suited to real-time target detection on remote sensing video satellite data. The interframe difference method is insensitive to slow targets yet very sensitive to environmental noise. To better detect slow-moving targets, researchers have improved the interframe difference method and proposed the accumulated frame difference (AFD) algorithm to raise detection accuracy for slow targets. However, the AFD algorithm still has shortcomings for slow targets, and its moving-target detections can suffer from holes, false targets, and similar problems.
Disclosure of Invention
The invention aims to provide a satellite video ship monitoring method combining brightness features with an improved interframe difference method, so that the ship targets to be detected and their motion states can be identified with a small computational load even when the background changes.
In order to achieve the above object, the present invention provides a satellite video ship monitoring method combining brightness characteristics with an improved interframe difference method, comprising the following steps:
(1) acquiring single-frame potential targets of the satellite video: extracting bright targets from a video frame by differential morphological profile reconstruction, removing vegetation noise on that basis, and obtaining the potential ship targets in the video frame;
(11) extracting bright targets from the satellite video images based on differential morphological contour reconstruction;
(12) extracting vegetation from the satellite video image by utilizing the vegetation index;
(13) superposing the extraction results of the step (11) and the step (12), and acquiring a potential ship target from the satellite video image by morphologically processing the extraction results;
(2) distinguishing the motion state of the ship at intervals: performing difference operation on different video frames by an improved interframe difference method, and extracting a dynamic target from potential targets;
(3) satellite video dynamic ship track tracking: tracking the dynamic target by using a self-adaptive color model;
the algorithm for extracting bright targets in step (11) is as follows:
extracting, for each pixel of the multispectral image, the maximum value over the bands and taking it as the brightness image:
B(x,y) = max_{1≤k≤K} band_k(x,y)
wherein B(x,y) represents the brightness value of pixel (x,y), band_k(x,y) represents the spectral value of the pixel in the k-th band, and K is the total number of bands of the multispectral image;
performing differential morphological profile reconstruction on the result of the white top-hat transform of the brightness image:
DMP_{W_TH}(d,s) = |MP_{W_TH}(d, s+Δs) - MP_{W_TH}(d, s)|
wherein the profile MP_{W_TH}(d,s) is obtained by morphological reconstruction (white top-hat by reconstruction) of the brightness image B; d and s respectively represent the direction and the scale of the selected linear structuring element, and Δs is the scale-increase step of the linear structuring element, satisfying s_min ≤ s ≤ s_max; since buildings are more diverse in scale and direction than other land-cover types, the mean of the differential morphological profiles of the white top-hat result over different scales and directions is the brightness target index:
BTI = (1/(D×S)) Σ_{d=1}^{D} Σ_{s=1}^{S} DMP_{W_TH}(d,s)
wherein D and S respectively represent the number of directions and the number of scales of the structuring elements in the differential morphological profile reconstruction;
taking the top 20% of the BTI result as bright targets;
the algorithm for extracting vegetation in the step (12) is as follows:
GBVI=G(x,y)-B(x,y);
then binarizing the GBVI result with a threshold of 10, namely, after GBVI is computed, marking a pixel 0 if its value is less than 10 and 1 if its value is greater than or equal to 10, so as to obtain the vegetation extraction result;
wherein, G (x, y) is the brightness value corresponding to the green band of the pixel point (x, y), B (x, y) is the brightness value corresponding to the blue band of the pixel point (x, y), and GBVI is the vegetation band difference index;
the morphological treatment in the step (13) is a morphological closing operation; screening the size of a white connected region in the image after the morphological closing operation;
the process of extracting the dynamic ship targets in step (2) is as follows:
applying dimensionality reduction to the frame count of the video by taking the first frame of each second to form a new continuous video sequence; then extracting potential targets frame by frame from the newly formed video by repeating steps (11) to (13), recording and comparing the number of potential ship targets extracted from each frame of the reduced video data, finding the frames that share the same, minimum count, and taking that count as the number of potential ship targets in the study area;
so that the targets can move a certain distance, performing a difference calculation on the two frames that have the minimum selected target count and are farthest apart in time, generating a new target position map, and computing the centroids of all potential ship targets;
comparing the target centroids recorded in the newly generated target position map with the target centroids in the earlier of the two selected frames (the frames farthest apart in time with the same target count), and determining whether a target has displaced by a distance threshold: if the centroid distance is less than 6 pixels, the target corresponding to that centroid is judged to be in motion;
then performing centroid matching between the earlier and later frames and distinguishing the motion states of the remaining targets by thresholds: slow-moving targets are judged first; if the minimum centroid-matching distance is smaller than the diagonal length of the connected region's bounding box and greater than 6 pixels, the target is judged to be moving; if the minimum distance is less than or equal to 6 pixels, the target is judged to be static, and the rest are noise points; at this point the motion states of all ship targets can be marked on the earlier frame;
and performing a centroid search-and-match against the first frame of the reduced video, computing the centroid distances between the two frames; if the minimum distance is less than 300 pixels, the centroids are taken as the same target; the ship target motion states in the first frame of the reduced video are marked in this way, giving the final motion state of each ship target on the first frame of the reduced video.
Further, during screening, connected regions of 300-5000 pixels are marked as potential ship target regions.
Further, the trajectory tracking algorithm in step (3) separates the target object from the background in the first frame using an RGB-based joint probability density function of the selected target object region and the RGB joint probability density function of the neighborhood around the target object region;
the color feature of the selected target object is then modeled with the object's color, i.e., a pixel-color-based quantized feature corresponding to a value in the quantized RGB color space; the object color model is then used to separate the target object from the background in the other frames while the Mean-Shift algorithm tracks the position of the target.
Through the technical scheme, the following beneficial technical effects can be realized:
the invention improves the inter-frame difference algorithm, and can identify the target to be detected and the motion state thereof on the premise of smaller calculation amount and change of the background.
The invention is researched based on a satellite video ship monitoring method combining brightness characteristics and an improved interframe difference method. And performing difference operation on different video frames by an improved interframe difference method to extract a moving ship target, and performing track tracking on the moving ship target by using a self-adaptive color model. To validate the proposed model, the present invention performed experimental validation using Vancouver, Canada (49 ° 17 'N123 ° 7' W) in its harbor area, video data of International Space Station (ISS) on 7.7.2.2015 and satellite data Jilin No. I in the harbor area of san Diego, USA (32 ° 42 'N117 ° 10' W). Experimental results show that the ship sailing track extracted by the model provided by the invention is basically consistent with the ship motion track interpreted by visual observation.
Additional features and advantages of embodiments of the invention will be set forth in the detailed description which follows.
Drawings
The accompanying drawings, which are included to provide a further understanding of the embodiments of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the embodiments of the invention without limiting the embodiments of the invention. In the drawings:
FIG. 1 is a flow chart of one embodiment of the present invention;
FIG. 2 is a graph illustrating the result of extracting bright objects from a first frame of image of International space station data in accordance with an embodiment of the present invention;
FIG. 3 is a diagram illustrating a vegetation extraction result of a first frame of image of international space station data in accordance with an embodiment of the present invention;
FIG. 4 is a diagram of potential ship target, land, ocean extraction results for a first frame image of international space station data in accordance with an embodiment of the present invention;
FIG. 5 is a diagram illustrating a potential ship target extraction result from a first frame image of international space station data in accordance with an embodiment of the present invention;
FIG. 6 is a diagram illustrating the ship motion state determination result for the international space station data in accordance with an embodiment of the present invention;
FIG. 7 is a diagram illustrating the ship motion state determination result for the Jilin-1 data in accordance with an embodiment of the present invention;
FIG. 8 is a diagram of a quantized RGB color space in one embodiment of the invention;
FIG. 9 is a diagram of the ship motion trajectory tracking for the international space station data in accordance with an embodiment of the present invention;
FIG. 10 is a diagram of the ship motion trajectory tracking for the Jilin-1 data in accordance with an embodiment of the present invention;
fig. 11 is a diagram illustrating an example of an improved interframe difference method according to an embodiment of the present invention.
Detailed Description
The following detailed description of embodiments of the invention refers to the accompanying drawings. It should be understood that the detailed description and specific examples, while indicating embodiments of the invention, are given by way of illustration and explanation only, not limitation.
In an embodiment of the present invention, the target is a ship. As shown in fig. 1, targets are extracted according to brightness features and spectral vegetation features, potential ship targets are then extracted through morphological processing and related operations, the motion state of each ship is determined with the improved interframe difference algorithm, and finally each ship's trajectory is tracked with the adaptive color model to obtain its motion trajectory map.
The first step is as follows: extraction of bright target by differential morphological contour reconstruction
Because a building usually shows bright spectral features within its neighborhood, a differential morphological profile reconstruction algorithm is adopted to extract bright targets. Its characteristic is that multidirectional, multiscale morphological operations express the structural and spectral features of bright targets: a series of linear structuring elements are used for the morphological operations, and differential morphological profile reconstruction is applied to the top-hat transform result. Since ships also show highlight spectral features, the method can be used to extract potential targets, after which ship targets are distinguished through morphological operations and threshold selection. The key morphological operations used for building extraction in this experiment are summarized below:
(1) Reconstruction: reconstruction filters are important morphological filters and are very useful for image processing because they do not introduce discontinuities and therefore preserve the shapes in the input image.
(2) Granulometry: describes the size and scale of objects in an image. Granulometries have been introduced for image classification of remote sensing urban areas. The multiscale morphological features are based on operators with structuring elements of increasing size.
(3) Directionality: most existing morphological operations use disk-shaped structuring elements. However, disk-shaped elements ignore the directional information that is crucial for distinguishing objects with similar spectral characteristics, such as buildings and roads: their spectra are close, but buildings are isotropic whereas roads are anisotropic.
The specific algorithm adopted in the invention is as follows:
extracting the maximum value of each pixel in the multispectral image on different wave bands, and taking the maximum value as a brightness image of the image:
B(x,y) = max_{1≤k≤K} band_k(x,y)
64 69 73 76 90
58 66 72 77 86
54 61 71 76 83
51 56 65 72 80
45 51 59 67 80
band1(x,y)
64 68 73 75 89
58 66 72 77 86
53 60 71 75 82
51 56 65 72 80
45 51 59 67 80
band2(x,y)
66 71 75 78 92
60 68 74 79 88
56 63 73 78 85
53 58 67 74 82
48 53 61 70 82
band3(x,y) (rendered as an image in the original; recovered from B(x,y) below, since B is the per-pixel band maximum and exceeds band1 and band2 at every pixel)
66 71 75 78 92
60 68 74 79 88
56 63 73 78 85
53 58 67 74 82
48 53 61 70 82
B(x,y)
wherein B(x,y) represents the brightness value of pixel (x,y), band_k(x,y) represents the spectral value of the pixel in the k-th band, and K is the total number of bands of the multispectral image.
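As a minimal sketch of this band-maximum step (illustrative, not the patent's code), assuming the K bands are stacked into one NumPy array:

import numpy as np

# 5x5 values of band1 from the worked example above; the other bands
# would be stacked the same way, giving an array of shape (K, H, W).
band1 = np.array([[64, 69, 73, 76, 90],
                  [58, 66, 72, 77, 86],
                  [54, 61, 71, 76, 83],
                  [51, 56, 65, 72, 80],
                  [45, 51, 59, 67, 80]])

def brightness_image(bands: np.ndarray) -> np.ndarray:
    """B(x, y) = max over k of band_k(x, y), for bands of shape (K, H, W)."""
    return bands.max(axis=0)

B = brightness_image(np.stack([band1, band1, band1]))  # demonstrates the call shape only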
Differential morphological profile (DMP) reconstruction is performed on the result of the white top-hat transform of the brightness image:
DMP_{W_TH}(d,s) = |MP_{W_TH}(d, s+Δs) - MP_{W_TH}(d, s)|
wherein the profile MP_{W_TH}(d,s) is obtained by morphological reconstruction (white top-hat by reconstruction) of the brightness image B; d and s respectively represent the direction and the scale of the selected linear structuring element, and Δs is the scale-increase step of the linear structuring element, satisfying s_min ≤ s ≤ s_max. Since buildings are more diverse in scale and direction than other land-cover types, the mean of the differential morphological profiles of the white top-hat result over different scales and directions is the brightness target index:
BTI = (1/(D×S)) Σ_{d=1}^{D} Σ_{s=1}^{S} DMP_{W_TH}(d,s)
[worked BTI matrices rendered as images in the original]
wherein D and S respectively represent the number of directions and the number of scales of the structuring elements in the differential morphological profile reconstruction. It was found that increasing the value of D does not improve the accuracy of building extraction.
Important parameter settings in the above calculation:
d: 0, 30, 60, 90, 120, 150 (in sequence)
initial s: 3, 7, 9, 11 (in sequence)
Δs: 8
D: 6
S: 4
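Reading the steps above as code, a sketch follows (not the patent's implementation: the exact reconstruction operator appears only as an image in the original, so the white top-hat by reconstruction is an assumed interpretation; scipy and scikit-image are assumed available):

import numpy as np
from scipy.ndimage import grey_erosion
from skimage.morphology import reconstruction

def linear_se(length: int, angle_deg: float) -> np.ndarray:
    """Rasterize a line-shaped structuring element of given length and direction."""
    t = np.arange(length) - (length - 1) / 2.0
    theta = np.deg2rad(angle_deg)
    rows = np.round(t * np.sin(theta)).astype(int)
    cols = np.round(t * np.cos(theta)).astype(int)
    se = np.zeros((rows.max() - rows.min() + 1, cols.max() - cols.min() + 1), dtype=bool)
    se[rows - rows.min(), cols - cols.min()] = True
    return se

def white_tophat_rec(B: np.ndarray, se: np.ndarray) -> np.ndarray:
    """White top-hat by reconstruction: B minus its opening-by-reconstruction."""
    seed = grey_erosion(B, footprint=se)            # seed <= B everywhere
    opened = reconstruction(seed, B, method='dilation')
    return B - opened

def bti(B, directions=(0, 30, 60, 90, 120, 150), scales=(3, 7, 9, 11), delta_s=8):
    """BTI = mean over (d, s) of |MP(d, s + delta_s) - MP(d, s)|."""
    dmps = []
    for d in directions:
        for s in scales:
            mp_s = white_tophat_rec(B, linear_se(s, d))
            mp_s2 = white_tophat_rec(B, linear_se(s + delta_s, d))
            dmps.append(np.abs(mp_s2 - mp_s))
    return np.mean(dmps, axis=0)

Applying bti to the brightness image and keeping its top 20% reproduces the bright-target mask described next.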
The top 20% of the BTI results are taken as bright targets; the result is shown in fig. 2, where the white areas are the bright targets. For the worked example, the binary mask is:
0 0 0 0 0
0 0 0 0 0
0 0 0 1 1
0 0 1 1 1
0 0 0 0 0
The object labeled 1 is a bright object and the object labeled 0 is a non-bright object.
As can be seen from fig. 2, the bright target extraction result obtained by this algorithm is substantially the same as the result of visual interpretation.
The second step is that: vegetation extraction based on vegetation index features
According to previous studies, vegetation regions are extracted well from the spectral characteristics of vegetation; however, this model processes true-color video satellite data, whose brightness is low, and the color signature of vegetation is particularly weak, so the texture features of vegetation are unclear, and traditional extraction based on vegetation spectral or texture features does not respond well to video data. Given this characteristic of video satellite data, a simpler vegetation extraction scheme is adopted: by the spectral characteristics of vegetation, there is a comparatively large difference between the G band and the B band of visible light, and owing to the band properties of visible light only land-cover classes that appear green show this feature; for the characteristics of the study area, extracting vegetation by this band difference proved feasible.
GBVI=G(x,y)-B(x,y)
Wherein G (x, y) is the brightness value corresponding to the green band of the pixel point (x, y), B (x, y) is the brightness value corresponding to the blue band of the pixel point (x, y), and GBVI is the defined vegetation band difference index.
114 108 120 132 136
147 161 155 146 125
211 211 198 159 122
228 219 199 155 111
229 207 170 126 87
G(x,y)
115 109 121 133 137
148 162 156 147 126
200 200 187 149 116
218 208 188 144 105
217 194 157 113 79
B(x,y)
-1 -1 -1 -1 -1
-1 -1 -1 -1 -1
11 11 11 10 6
10 11 11 11 6
12 13 13 13 8
GBVI (rendered as an image in the original; recovered as G(x,y) - B(x,y) from the two matrices above)
The GBVI result is then binarized with a threshold of 10: after GBVI is computed, a pixel is marked 0 if its value is less than 10 and 1 if it is greater than or equal to 10, so as to reduce noise interference. The result is shown in fig. 3, where the white areas are vegetation and the black areas are non-vegetation.
0 0 0 0 0
0 0 0 0 0
1 1 1 1 0
1 1 1 1 0
1 1 1 1 1
The area marked 1 is a vegetation area and the area marked 0 is a non-vegetation area.
As can be seen from fig. 3, the vegetation extraction result obtained by this algorithm is substantially the same as the result of visual interpretation.
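Read as code, the GBVI step is a signed band subtraction plus the 10-level threshold from the text; a minimal sketch (names are illustrative):

import numpy as np

def vegetation_mask(green: np.ndarray, blue: np.ndarray, thresh: int = 10) -> np.ndarray:
    """GBVI = G - B, binarized at thresh: 1 marks vegetation, 0 non-vegetation."""
    gbvi = green.astype(np.int16) - blue.astype(np.int16)  # signed: G may be below B
    return (gbvi >= thresh).astype(np.uint8)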
The third step: capturing potential vessel targets
Through the two preceding steps, the two results for the same frame are superimposed, and a morphological closing with a disk-shaped structuring element of radius 25 is applied; the result is shown in fig. 4, where the smaller white connected regions are potential ship targets, the larger white connected region is land, and the large black connected region is sea.
The white connected regions are then screened by size: regions of 300-5000 pixels are marked as potential ship regions; the marked result is shown as the white regions in fig. 5.
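A sketch of this closing-and-screening step with OpenCV (the radius-25 disk and the 300-5000 pixel band are from the text; function and variable names are illustrative):

import cv2
import numpy as np

def potential_ships(candidate_mask: np.ndarray,
                    min_px: int = 300, max_px: int = 5000) -> np.ndarray:
    """Close the binary candidate mask with a radius-25 disk, then keep
    connected components whose area lies within [min_px, max_px]."""
    disk = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (51, 51))  # radius 25
    closed = cv2.morphologyEx(candidate_mask.astype(np.uint8), cv2.MORPH_CLOSE, disk)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(closed, connectivity=8)
    out = np.zeros_like(closed)
    for i in range(1, n):  # label 0 is the background
        if min_px <= stats[i, cv2.CC_STAT_AREA] <= max_px:
            out[labels == i] = 1
    return out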
The fourth step: determining a moving vessel based on an improved interframe difference algorithm
After the bright targets and vegetation are extracted, the centroid of each ship can be accurately located through threshold selection, mask processing, and related operations. However, moving and stationary ships cannot yet be distinguished, so, following previous studies, the invention determines the ship motion state with an improved interframe difference method. Because the temporal resolution of the video satellite data is high and ship speeds are low, differencing two adjacent frames produces no obvious displacement of a ship target between the two images, and even sensor noise during staring can affect accuracy. For these reasons, the invention applies dimensionality reduction to the frame count of the video, taking the first frame of each second over the time interval to form new "continuous" video data. Potential ship targets are then extracted frame by frame from the newly formed video data through the first, second, and third steps above. Since the number of ships is not known in advance and the potential ship targets extracted from each frame may contain noise points, noise interference is reduced as follows: the ship count extracted from each frame of the reduced video data is recorded and compared, the frames sharing the same, minimum count are found, and that count is taken as the number of potential ship targets in the study area.
So that the ships move as large a distance as possible, the two frames with the minimum screened target count that are farthest apart in time are selected and differenced, generating a new ship position map, and the centroids of all potential ship regions are computed. The ship centroids recorded in the newly generated map are compared and matched with the ship centroids in the earlier of the two selected frames; whether a ship has displaced is determined by a distance threshold: if a centroid distance is less than 6 pixels, the ship corresponding to that centroid is judged to be in motion. This operation can only determine the motion states of ships that separate completely between the two frames.
Centroid matching is then performed between the earlier and later frames, and the motion states of the remaining ships are distinguished by thresholds. Slow-moving ships are judged first: if the minimum centroid-matching distance is smaller than the diagonal length of the connected region's bounding box and greater than 6 pixels, the ship is judged to be moving; if the minimum distance is less than or equal to 6 pixels, the ship is judged to be stationary, and the rest are noise points. At this point, the motion states of all ship targets can be marked on the earlier frame.
The centroids whose motion states are marked on that earlier frame are not necessarily the ship centroids of the first frame of the original video data, so a centroid search-and-match is performed against the first frame of the reduced video: the centroid distances between the two frames are computed, and if the minimum distance is less than 300 pixels the centroids are taken as the same ship. The ship motion states in the first frame of the reduced video are marked in this way; since the original video was reduced by taking the first frame of each second, these steps yield the final ship motion states on the first frame of the original video data. The verification data were finally judged as follows, with a ship judged to be moving marked 1 and a ship judged to be stationary marked 2. See figs. 6-7.
As shown in fig. 11, take the extraction of moving ships from among three ships as an example (reference numerals 1, 2, and 3 denote the three corresponding ships). To let the targets move a certain distance, the two frames farthest apart in time are selected, namely the T_min frame (panel a) and the T_max frame (panel b), and a difference calculation is performed to generate a new target position map, the D_value map (panel c); the centroids of all potential targets are then computed.
The target centroids recorded in the generated D_value map (panel c) are compared with the target centroids in the earlier of the two selected frames, the T_min frame (panel a); whether a target has displaced is determined by a distance threshold: if the Euclidean centroid distance is less than 6 pixels, the target corresponding to that centroid is judged to be in motion. For example, the ship labeled 1 in panel a survives the difference between the T_min frame (panel a) and the T_max frame (panel b) completely intact in the D_value map (panel c), so its centroid barely moves between the two maps; the centroid of ship 1 in the T_min frame (panel a) is matched against the ship centroids in the D_value map (panel c), and since a connected region exists whose centroid distance is less than 6 pixels, ship 1 is judged to be a moving ship.
Next, centroid matching is performed between the T_min frame and the T_max frame, and the motion states of the remaining targets are distinguished by thresholds. Slow-moving targets are judged first: if the minimum centroid-matching distance is smaller than the diagonal length of the connected region's bounding box and greater than 6 pixels, the target is judged to be moving. For example, the ship labeled 2 in the T_min frame (panel a) moves slowly, so after differencing with the T_max frame (panel b) it cannot be fully separated from the ship labeled 2 there; a merged region forms in the D_value map (panel c) and produces a new centroid, which is difficult to match with the corresponding centroids of the T_min frame (panel a) and the D_value map (panel c). Matching the centroids of the T_min and T_max frames instead, the Euclidean distance between the centroid of ship 2 in the T_min frame (panel a) and that of ship 2 in the T_max frame (panel b) is smaller than the diagonal of the minimum bounding rectangle of ship 2's connected region in the T_min frame (panel a) and larger than 6 pixels, so ship 2 is a moving ship.
If a ship is stationary, its position and centroid in the T_min frame and the T_max frame are essentially unchanged. To improve recognition accuracy, targets whose minimum centroid distance between the T_min frame and the T_max frame is less than or equal to 6 pixels are judged to be stationary, and the rest are noise points. For example, the ship labeled 3 in the T_min frame (panel a) is stationary, so a like ship region sits at the same position in the T_max frame (panel b); after differencing, morphological operations, and connected-region size screening, no corresponding region remains in the D_value map (panel c). The centroid of ship 3 in the T_min frame (panel a) is then matched against the centroids in the T_max frame (panel b); the computed Euclidean distance between the two ship-3 centroids is less than 6 pixels, so ship 3 is judged to be a stationary ship.
At this point, the motion states of all targets can be marked on the T_min frame.
Since the T_min frame is not necessarily the first frame after dimensionality reduction, the centroids of the regions in the T_min frame are search-matched against the target centroids of the first frame T_1 of the reduced video; the Euclidean distances between the connected-region centroids of the T_min frame and the T_1 frame are computed, and a minimum distance below 300 pixels identifies the same target. The target motion states in the first frame T_1 of the reduced video are marked in this way; since dimensionality reduction takes the first frame of each second, the reduced first frame is the first frame of the original data, giving the final target motion states on the first frame of the original video.
Comparing the ship motion states judged by the invention with those judged by visual interpretation of the video data: for the International Space Station (ISS) video of July 2, 2015 over the harbor area of Vancouver, Canada (49°17′N, 123°7′W), the potential ship targets and the ship motion states were judged completely correctly; in the judgment results for the Jilin-1 satellite data over the harbor area of San Diego, USA (32°42′N, 117°10′W), two land areas were erroneously judged as a dynamic ship and a static ship respectively, and one dynamic ship was erroneously judged as static. The judgment results and accuracy statistics are given in the tables below.
[table rendered as an image in the original]
Statistics of ship motion states
[table rendered as an image in the original]
Accuracy statistics of potential ship target judgment results
[table rendered as an image in the original]
Statistics of moving ship target judgment results
Accuracy ratio = TP/(TP + FP)   (1)
Integrity ratio = TP/(TP + FN)   (2)
Precision = TP/(TP + FP + FN)   (3)
TP: targets correctly detected by the algorithm
FN: targets missed by the algorithm
FP: false targets detected by the algorithm
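Read as code, equations (1)-(3) are straightforward; a sketch:

def detection_metrics(tp: int, fp: int, fn: int) -> dict:
    """Accuracy ratio, integrity ratio, and precision per equations (1)-(3)."""
    return {
        "accuracy_ratio": tp / (tp + fp),   # eq. (1)
        "integrity_ratio": tp / (tp + fn),  # eq. (2)
        "precision": tp / (tp + fp + fn),   # eq. (3)
    }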
Fifthly, ship track tracking based on adaptive color model
The ship trajectory tracking algorithm of the adaptive color model separates the object from the background in the first frame using an RGB-based joint probability density function of the selected object region and that of the neighborhood around the target object. The color feature of the selected target object is then modeled with the object's color; an object color model separates the target object from the background in the other frames while the Mean-Shift algorithm tracks the object's position, and an adaptive color model is developed to handle changes in the object's appearance during tracking. The specific algorithm is as follows:
5.1 object selection
Initially, the user manually selects an object of interest by drawing a rectangle around it. To detect the target object accurately using the background color near it, the outer rectangle must be chosen so that the number of background pixels in the region surrounding the object is approximately the same as the number of pixels inside the object rectangle. Equation (5-1) defines the width of the rectangular region surrounding the target object:
[equation (5-1) rendered as an image in the original]
where w and h are the width and height of the object window and d is the width of the area around the object rectangle.
5.2 feature extraction
In the cited tracker, the features used to model the object are quantized features based on pixel color, which correspond to values in the quantized RGB color space. Pixel-based quantized features are extracted for the object pixels and the surrounding background pixels. Fig. 8 shows the quantized R, G, B color space. Since the target vessel appears as a highlight in the image with a large color difference from the low-light marine background, 4-bit codes for the R, G, and B channels are selected in the model, so the total histogram size is 16 × 16 × 16 = 4096; this reduction of color depth and histogram size improves computational efficiency and reduces dimensionality. To represent the target appearance, the quantized R, G, B pixel values of the object separated in the next subsection are used.
5.3 object background separation
The target-background separation method is used to detect objects. Quantized R, G, B histograms of the region within the inner rectangle are used to obtain the quantized-RGB joint probability density function (pdf) of the object region, and quantized R, G, B histograms of the region between the outer and inner rectangles are used to obtain the joint pdf of the surrounding background. Object pixels are determined from the log-likelihood ratio (LLR) of the object region versus the surrounding background region. The log-likelihood of a pixel within the object's bounding rectangle is obtained by:
L(i) = log( max(H_o(i), ε) / max(H_b(i), ε) )   (5-2)
where H_o(i) is the histogram of feature values of the pixels in the target object rectangle and H_b(i) is the histogram of the pixels from the region around the object, the index i running over the 1 to 4096 histogram bins; ε is a small non-zero value used to avoid numerical instability, i.e., dividing by zero or taking the logarithm of zero, and is set here to 0.01.
If the log-likelihood function from the previous step is used to detect object pixels, the result can be viewed as a map of the image in which object colors take positive values, background colors take negative values, and colors shared by the object and the background tend toward zero. The binary mask of the object is obtained by:
M(x,y) = 1 if L(i(x,y)) ≥ τ_0, and 0 otherwise   (5-3)
where τ_0 is the threshold for determining the most reliable object pixels; here τ_0 is set to 0.8. Once the model selects the target object region in the first frame, the log-likelihood map and binary mask of the object are obtained in turn using equations (5-2) and (5-3).
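A sketch of the quantization and separation steps (the 4-bit quantization, ε = 0.01, and τ0 = 0.8 are from the text; the max-based form of equation (5-2) follows the reconstruction given above):

import numpy as np

def quantize_rgb(img: np.ndarray) -> np.ndarray:
    """Map 8-bit RGB pixels to 4-bit-per-channel bins: 16*16*16 = 4096 values."""
    r, g, b = img[..., 0] >> 4, img[..., 1] >> 4, img[..., 2] >> 4
    return (r * 256 + g * 16 + b).astype(np.int32)

def llr_map(obj_pix: np.ndarray, bg_pix: np.ndarray, window: np.ndarray,
            eps: float = 0.01, tau0: float = 0.8):
    """LLR over a quantized-index window and its binary object mask."""
    ho = np.bincount(obj_pix.ravel(), minlength=4096).astype(float)
    hb = np.bincount(bg_pix.ravel(), minlength=4096).astype(float)
    ho /= ho.sum(); hb /= hb.sum()                       # normalize to pdfs
    llr = np.log(np.maximum(ho, eps) / np.maximum(hb, eps))
    lmap = llr[window]                                   # per-pixel lookup, eq. (5-2)
    return lmap, (lmap >= tau0).astype(np.uint8)         # eq. (5-3)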
5.4 object color modeling and updating
In the first frame, the object color model is automatically developed from the quantized RGB values of the separated object obtained via equation (5-3). The quantized color space tolerates fairly small changes in target object color and illumination, but it still has shortcomings under wide changes in object color or scene illumination, so the object color model must be updated for reliable tracking. Modeling the object color in every frame is computationally complex and time-consuming, so a criterion is defined to indicate the frames that need color adaptation. Let S0 be the average R-G-B color of the pixels within the separated object; a change in S0 signals the need for object color adaptation. After the object is detected in each input frame, the average RGB color of the pixels within the separated object is computed. When S0 in the current frame deviates from that of the last frame by more than 0.05 × 256 (the deviation threshold is set here to 15), the object color is deemed to be changing and object color adaptation is performed.
5.5 object locator
Object localization begins at the centroid of the binary object detected in the previously tracked frame. To find the target object pixels, features are extracted from the object rectangle and tested against the object color model. The mean-shift algorithm is adopted to track the target object; its main idea is to treat the points in a space as samples of a probability density function, where the densest region of the space corresponds to a local maximum that is taken as the object position. The displacement of the object is given by the displacement of the centroid of its pixels: in each iteration the center of the target object rectangle is moved to the centroid of the detected binary object, and the rectangle is iteratively shifted and tested until the object lies fully within it (mean shift converges). Using equations (5-4) and (5-5), the object centroid is repositioned in each iteration:
X_new = (1/n) Σ_{i=1..n} x_i   (5-4)
Y_new = (1/n) Σ_{i=1..n} y_i   (5-5)
where x_i and y_i represent the position of each detected object pixel within the object rectangle, in video-frame image coordinates; X_new and Y_new reposition the centroid of the target object in each iteration; and n is the number of detected object pixels. In the experiments, a centroid motion of less than 5 pixels was considered full convergence.
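A sketch of the locator loop built on equations (5-4) and (5-5) (the 5-pixel convergence rule is from the text; detect_mask is a hypothetical callback standing in for the color-model test of section 5.3):

import numpy as np

def mean_shift_locate(frame, rect, detect_mask, max_iter=20, conv_px=5.0):
    """Shift rect (x, y, w, h) to the centroid of detected object pixels
    until the centroid moves less than conv_px pixels (eqs. (5-4)/(5-5))."""
    x, y, w, h = rect
    for _ in range(max_iter):
        mask = detect_mask(frame[y:y + h, x:x + w])   # binary object pixels
        ys, xs = np.nonzero(mask)
        if len(xs) == 0:
            break
        cx, cy = x + xs.mean(), y + ys.mean()         # X_new, Y_new
        nx, ny = int(round(cx - w / 2)), int(round(cy - h / 2))
        moved = np.hypot(nx - x, ny - y)
        x, y = nx, ny
        if moved < conv_px:                           # converged
            break
    return x, y, w, h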
The method cannot automatically select ship target regions in the first frame, so the ship regions extracted from the first frame of the video data through the second, third, and fourth steps serve as the initial target regions of the moving ships. The adaptive color model algorithm then tracks the trajectory of each moving ship, yielding its centroid in every frame of the video data, and the centroids of the same ship are connected to form its trajectory. The ship trajectory tracking results are shown in figs. 9 and 10.
From the comparison of the automatically detected ship track of the present invention with the results of the visual interpretation of the video data in fig. 9 and 10, it can be seen that the ship track results obtained by the present invention are consistent with the actual ship track.
Information extraction is an important link in remote sensing applications; remote sensing mapping, disaster response, urban planning, change detection, military security, and other fields all depend on it. With the development of remote sensing technology, video satellite technology is gradually maturing, and video satellite data are slowly entering research and production. The invention offers a new application direction for video satellite data: potential targets are extracted through differential morphological profile reconstruction, target motion states are determined with an improved interframe difference method, and the trajectories of dynamic ship targets are tracked with an adaptive color model.
Potential dynamic ship acquisition and ship trajectory extraction were carried out on International Space Station (ISS) video data of July 2, 2015 over the harbor area of Vancouver, Canada (49°17′N, 123°7′W) and on Jilin-1 satellite data over the harbor area of San Diego, USA (32°42′N, 117°10′W). The results show that the extracted satellite video ship trajectories are essentially consistent with the visually interpreted ship motion trajectories, demonstrating the feasibility of the invention.
Although the embodiments of the present invention have been described in detail with reference to the accompanying drawings, the embodiments of the present invention are not limited to the details of the above embodiments, and various simple modifications can be made to the technical solutions of the embodiments of the present invention within the technical idea of the embodiments of the present invention, and the simple modifications all belong to the protection scope of the embodiments of the present invention.
It should be noted that the various features described in the above embodiments may be combined in any suitable manner without departing from the scope of the invention. In order to avoid unnecessary repetition, the embodiments of the present invention do not describe every possible combination.
In addition, any combination of various different implementation manners of the embodiments of the present invention is also possible, and the embodiments of the present invention should be considered as disclosed in the embodiments of the present invention as long as the combination does not depart from the spirit of the embodiments of the present invention.

Claims (3)

1. A satellite video ship monitoring method combining brightness characteristics and an improved interframe difference method is characterized by comprising the following steps:
(1) acquiring single-frame potential targets of the satellite video: extracting bright targets from a video frame by differential morphological profile reconstruction, removing vegetation noise on that basis, and obtaining the potential ship targets in the video frame;
(11) extracting bright targets from the satellite video images based on differential morphological contour reconstruction;
(12) extracting vegetation from the satellite video image by utilizing the vegetation index;
(13) superposing the extraction results of the step (11) and the step (12), and acquiring a potential ship target from the satellite video image by morphologically processing the extraction results;
(2) distinguishing the motion state of the ship at intervals: performing difference operation on different video frames by an improved interframe difference method, and extracting a dynamic target from potential targets;
(3) satellite video dynamic ship track tracking: tracking the dynamic target by using a self-adaptive color model;
the algorithm for extracting bright targets in step (11) is as follows:
extracting, for each pixel of the multispectral image, the maximum value over the bands and taking it as the brightness image:
B(x,y) = max_{1≤k≤K} band_k(x,y)
wherein B(x,y) represents the brightness value of pixel (x,y), band_k(x,y) represents the spectral value of the pixel in the k-th band, and K is the total number of bands of the multispectral image;
performing differential morphological profile reconstruction on the result of the white top-hat transform of the brightness image:
DMP_{W_TH}(d,s) = |MP_{W_TH}(d, s+Δs) - MP_{W_TH}(d, s)|
wherein the profile MP_{W_TH}(d,s) is obtained by morphological reconstruction (white top-hat by reconstruction) of the brightness image B; d and s respectively represent the direction and the scale of the selected linear structuring element, and Δs is the scale-increase step of the linear structuring element, satisfying s_min ≤ s ≤ s_max; since buildings are more diverse in scale and direction than other land-cover types, the mean of the differential morphological profiles of the white top-hat result over different scales and directions is the brightness target index:
BTI = (1/(D×S)) Σ_{d=1}^{D} Σ_{s=1}^{S} DMP_{W_TH}(d,s)
wherein D and S respectively represent the number of directions and the number of scales of the structuring elements in the differential morphological profile reconstruction; taking the top 20% of the BTI result as bright targets;
the algorithm for extracting vegetation in the step (12) is as follows:
GBVI=G(x,y)-B(x,y);
then binarizing the GBVI result with a threshold of 10, namely, after GBVI is computed, marking a pixel 0 if its value is less than 10 and 1 if its value is greater than or equal to 10, so as to obtain the vegetation extraction result;
wherein, G (x, y) is the brightness value corresponding to the green band of the pixel point (x, y), B (x, y) is the brightness value corresponding to the blue band of the pixel point (x, y), and GBVI is the vegetation band difference index;
the morphological treatment in the step (13) is a morphological closing operation; screening the size of a white connected region in the image after the morphological closing operation;
the process of extracting the dynamic ship targets in step (2) is as follows:
applying dimensionality reduction to the frame count of the video by taking the first frame of each second to form a new continuous video sequence; then extracting potential targets frame by frame from the newly formed video by repeating steps (11) to (13), recording and comparing the number of potential ship targets extracted from each frame of the reduced video data, finding the frames that share the same, minimum count, and taking that count as the number of potential ship targets in the study area;
so that the targets can move a certain distance, performing a difference calculation on the two frames that have the minimum selected target count and are farthest apart in time, generating a new target position map, and computing the centroids of all potential ship targets;
comparing the target centroids recorded in the newly generated target position map with the target centroids in the earlier of the two selected frames (the frames farthest apart in time with the same target count), and determining whether a target has displaced by a distance threshold: if the centroid distance is less than 6 pixels, the target corresponding to that centroid is judged to be in motion;
then performing centroid matching between the earlier and later frames and distinguishing the motion states of the remaining targets by thresholds: slow-moving targets are judged first; if the minimum centroid-matching distance is smaller than the diagonal length of the connected region's bounding box and greater than 6 pixels, the target is judged to be moving; if the minimum distance is less than or equal to 6 pixels, the target is judged to be static, and the rest are noise points; at this point the motion states of all ship targets can be marked on the earlier frame;
and performing a centroid search-and-match against the first frame of the reduced video, computing the centroid distances between the two frames; if the minimum distance is less than 300 pixels, the centroids are taken as the same target; the ship target motion states in the first frame of the reduced video are marked in this way, giving the final motion state of each ship target on the first frame of the reduced video.
2. The method as claimed in claim 1, wherein, during screening, connected regions of 300-5000 pixels are marked as potential ship target regions.
3. The satellite video ship monitoring method combining brightness features and an improved interframe difference method as claimed in claim 2, wherein the trajectory tracking algorithm in step (3) separates the target object from the background in the first frame using an RGB-based joint probability density function of the selected target object region and the RGB joint probability density function of the neighborhood around the target object region;
the color feature of the selected target object is then modeled with the object's color, i.e., a pixel-color-based quantized feature corresponding to a value in the quantized RGB color space; the object color model is then used to separate the target object from the background in the other frames while the Mean-Shift algorithm tracks the position of the target.
CN201811324612.0A 2018-11-08 2018-11-08 Satellite video ship monitoring method combining brightness characteristics and improved interframe difference method Active CN109460764B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811324612.0A CN109460764B (en) 2018-11-08 2018-11-08 Satellite video ship monitoring method combining brightness characteristics and improved interframe difference method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811324612.0A CN109460764B (en) 2018-11-08 2018-11-08 Satellite video ship monitoring method combining brightness characteristics and improved interframe difference method

Publications (2)

Publication Number Publication Date
CN109460764A CN109460764A (en) 2019-03-12
CN109460764B true CN109460764B (en) 2022-02-18

Family

ID=65609721

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811324612.0A Active CN109460764B (en) 2018-11-08 2018-11-08 Satellite video ship monitoring method combining brightness characteristics and improved interframe difference method

Country Status (1)

Country Link
CN (1) CN109460764B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110111364B (en) * 2019-04-30 2022-12-27 腾讯科技(深圳)有限公司 Motion detection method and device, electronic equipment and storage medium
CN110702869A (en) * 2019-11-01 2020-01-17 无锡中科水质环境技术有限公司 Fish stress avoidance behavior water quality monitoring method based on video image analysis
CN111387966B (en) * 2020-03-20 2022-08-26 中国科学院深圳先进技术研究院 Signal wave reconstruction method and heart rate variability information detection device
CN111553928B (en) * 2020-04-10 2023-10-31 中国资源卫星应用中心 Urban road high-resolution remote sensing self-adaptive extraction method assisted with Openstreetmap information
CN111739059A (en) * 2020-06-20 2020-10-02 马鞍山职业技术学院 Moving object detection method and track tracking method based on frame difference method
CN112270661A (en) * 2020-10-19 2021-01-26 北京宇航系统工程研究所 Space environment monitoring method based on rocket telemetry video
CN112489055B (en) * 2020-11-30 2023-04-07 中南大学 Satellite video dynamic vehicle target extraction method fusing brightness-time sequence characteristics
CN115294486B (en) * 2022-10-08 2023-01-13 彼图科技(青岛)有限公司 Method for identifying and judging illegal garbage based on unmanned aerial vehicle and artificial intelligence
CN115760613B (en) * 2022-11-15 2024-01-05 江苏省气候中心 Blue algae bloom short-time prediction method combining satellite image and optical flow method

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090093959A1 (en) * 2007-10-04 2009-04-09 Trimble Navigation Limited Real-time high accuracy position and orientation system
US10706551B2 (en) * 2017-03-30 2020-07-07 4DM Inc. Object motion mapping using panchromatic and multispectral imagery from single pass electro-optical satellite imaging sensors

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102081801A (en) * 2011-01-26 2011-06-01 Shanghai Jiao Tong University Multi-feature adaptive fused ship tracking and track detecting method
CN103839267A (en) * 2014-02-27 2014-06-04 Xi'an University of Science and Technology Building extracting method based on morphological building indexes
CN103971127A (en) * 2014-05-16 2014-08-06 Huazhong University of Science and Technology Forward-looking radar imaging sea-surface target key point detection and recognition method
CN107465877A (en) * 2014-11-20 2017-12-12 Guangdong OPPO Mobile Telecommunications Corp., Ltd. Track focusing method and device and related media production
CN104463914A (en) * 2014-12-25 2015-03-25 Tianjin Polytechnic University Improved Camshift target tracking method
CN105096338A (en) * 2014-12-30 2015-11-25 Tianjin Aerospace Zhongwei Data System Technology Co., Ltd. Moving object extracting method and device
CN104751478A (en) * 2015-04-20 2015-07-01 Wuhan University Object-oriented building change detection method based on multi-feature fusion
CN105608458A (en) * 2015-10-20 2016-05-25 Wuhan University High-resolution remote sensing image building extraction method
CN106650663A (en) * 2016-12-21 2017-05-10 Central South University Method for judging true and false building changes and false-change removal method incorporating the same
CN107092890A (en) * 2017-04-24 2017-08-25 Shandong Technology and Business University Naval vessel detection and tracking based on infrared video
CN107609534A (en) * 2017-09-28 2018-01-19 Beijing Institute of Remote Sensing Information Automatic detection method for moored ships in remote sensing imagery based on harbor spectral information
CN108052859A (en) * 2017-10-31 2018-05-18 Shenzhen University Anomaly detection method, system and device based on clustered optical flow features

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Fault-Tolerant Building Change Detection From Urban High-Resolution Remote Sensing Imagery; Yuqi Tang et al.; IEEE Geoscience and Remote Sensing Letters; 2013-01-11; Vol. 10, No. 5; pp. 1060-1064 *
Research on Difference-Image Target Detection Algorithms Based on Mathematical Morphology; Qin Yuping et al.; Ship Electronic Engineering; 2018-04-20; Vol. 38, No. 4; pp. 16-18, 22 *
Research on Ship Anchor-Dragging Recognition Based on Video Image Analysis; Wei Jie; Wanfang Data: https://d.wanfangdata.com.cn/thesis/ChJUaGVzaXNOZXdTMjAyMTA1MTkSCUQwMTMxNTEyORoIcGloc3dzZ2I%3D; 2018-08-03; pp. 1-81 *
Research on Object-Oriented Multi-Feature Change Detection for Urban High-Resolution Imagery; Tang Yuqi; China Doctoral Dissertations Full-Text Database, Information Science and Technology; 2014-07-15; No. 7; I140-23 *

Also Published As

Publication number Publication date
CN109460764A (en) 2019-03-12

Similar Documents

Publication Publication Date Title
CN109460764B (en) Satellite video ship monitoring method combining brightness characteristics and improved interframe difference method
CN109934200B (en) RGB color remote sensing image cloud detection method and system based on improved M-Net
Zhang et al. Object-oriented shadow detection and removal from urban high-resolution remote sensing images
CN107016357B (en) Video pedestrian detection method based on time domain convolutional neural network
CN104598883B (en) Target knows method for distinguishing again in a kind of multiple-camera monitoring network
EP3438929B1 (en) Foreground and background detection method
CN103077539A (en) Moving object tracking method under complicated background and sheltering condition
WO2018076138A1 (en) Target detection method and apparatus based on large-scale high-resolution hyper-spectral image
CN103761731A (en) Small infrared aerial target detection method based on non-downsampling contourlet transformation
CN103020992A (en) Video image significance detection method based on dynamic color association
CN107609571B (en) Adaptive target tracking method based on LARK features
CN113111878B (en) Infrared weak and small target detection method under complex background
CN112288008A (en) Mosaic multispectral image disguised target detection method based on deep learning
Lian et al. A novel method on moving-objects detection based on background subtraction and three frames differencing
CN112308883A (en) Multi-ship fusion tracking method based on visible light and infrared images
CN113822352A (en) Infrared dim target detection method based on multi-feature fusion
CN111274964B (en) Detection method for analyzing water surface pollutants based on visual saliency of unmanned aerial vehicle
CN111209877B (en) Depth map-based infrared small target detection method in complex scene
CN114494342A (en) Method for detecting and tracking marine target of visible light sequence image of synchronous orbit satellite
CN108389219B (en) Weak and small target tracking loss re-detection method based on multi-peak judgment
Yu et al. Haze removal algorithm using color attenuation prior and guided filter
Huang et al. Invasion detection on transmission lines using saliency computation
CN112907616B (en) Pedestrian detection method based on thermal imaging background filtering
Asvadi et al. Object tracking using adaptive object color modeling
Afreen et al. A method of shadow detection and shadow removal for high resolution remote sensing images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant