CN109460764A - Satellite video ship monitoring method combining brightness and an improved frame-difference method - Google Patents

Info

Publication number
CN109460764A
CN109460764A (application CN201811324612.0A)
Authority
CN
China
Prior art keywords
target
ship
frame
video
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811324612.0A
Other languages
Chinese (zh)
Other versions
CN109460764B (en)
Inventor
尹芝勇
汤玉奇
朱紫薇
Current Assignee
Central South University
Original Assignee
Central South University
Priority date
Filing date
Publication date
Application filed by Central South University
Priority to CN201811324612.0A
Publication of CN109460764A
Application granted
Publication of CN109460764B
Legal status: Active

Classifications

    • G06V 10/255 — Detecting or recognising potential candidate objects based on visual cues, e.g. shapes
    • G06T 7/207 — Analysis of motion for motion estimation over a hierarchy of resolutions
    • G06V 10/462 — Salient features, e.g. scale invariant feature transforms [SIFT]
    • G06T 2207/10016 — Video; Image sequence
    • G06T 2207/30232 — Surveillance


Abstract

The invention belongs to the field of satellite remote sensing and discloses a satellite video ship monitoring method combining brightness and an improved frame-difference method, comprising the following steps: (1) single-frame potential-target acquisition: on the basis of bright-target extraction from the video frames by differential morphological profile reconstruction, vegetation noise is removed to obtain the potential ship targets in each frame; (2) motion-state discrimination across interval frames: a difference operation between different video frames, using the improved frame-difference method, extracts the dynamic ship targets from the potential ship targets; (3) dynamic ship trajectory tracking: an adaptive color model tracks the trajectory of each dynamic ship target. The invention improves the inter-frame difference algorithm, so that ship targets and their motion states can be identified with a small computational load and under background changes, and their trajectories can be tracked with an adaptive color model.

Description

Satellite video ship monitoring method combining brightness and an improved frame-difference method
Technical field
The present invention relates to a satellite video ship monitoring method combining brightness and an improved frame-difference method.
Background art
With the development of remote sensing satellite technology, newly launched high-resolution satellites are evolving from image acquisition toward video acquisition, and the application of satellite video imagery has become a popular research topic in the remote sensing field. Because video satellite technology is still young, existing research focuses mainly on moving-vehicle detection and land-cover classification. Current moving-target detection algorithms fall into three main families — background subtraction, optical flow, and frame differencing — but for video satellite data each has certain defects.
Background subtraction is relatively simple, but large background changes such as occlusion, illumination variation, and noise in the scene produce large errors; in particular, when the background itself moves, background subtraction detects many false targets. Optical flow suffers from a heavy computational load, which makes it ill-suited to real-time target detection on remote sensing video satellite data. Frame differencing is insensitive to slow targets yet very sensitive to background noise. To better detect slow-moving targets, scholars have improved the frame-difference method with the accumulative frame-difference (AFD) algorithm, which raises the detection accuracy for slow targets; however, AFD still has certain defects for slow-target detection, such as holes and false targets in the detected moving regions.
Summary of the invention
The object of the present invention is to provide a satellite video ship monitoring method combining brightness and an improved frame-difference method, so that the ship targets to be detected and their motion states can be identified with a small computational load and under background changes.
To achieve this object, the present invention provides a satellite video ship monitoring method combining brightness and an improved frame-difference method, comprising the following steps:
(1) single-frame potential-target acquisition: on the basis of bright-target extraction from the video frames by differential morphological profile reconstruction, vegetation noise is removed to obtain the potential ship targets in each frame;
(11) bright targets are extracted from the satellite video image by differential morphological profile reconstruction;
(12) vegetation is extracted from the satellite video image with a vegetation index;
(13) the extraction results of steps (11) and (12) are superimposed, and the superposition is morphologically processed to obtain the potential ship targets in the satellite video image;
(2) motion-state discrimination across interval frames: dynamic targets are extracted from the potential targets by a difference operation between different video frames using the improved frame-difference method;
(3) dynamic ship trajectory tracking: an adaptive color model is used to track the trajectories of the dynamic targets.
Further, the bright-target extraction algorithm of step (11) is as follows:
For each pixel of the multispectral image, the maximum value over the bands is taken as the luminance image:
B(x, y) = max_{1≤k≤K} band_k(x, y)
where B(x, y) is the brightness value of pixel (x, y), band_k(x, y) is the spectral value of the pixel in band k, and K is the number of bands of the multispectral image;
differential morphological profile reconstruction is applied to the white top-hat result of the luminance image:
DMP_WTH(d, s) = |MP_WTH(d, s+Δs) − MP_WTH(d, s)|
where MP_WTH(d, s) is the white top-hat of the morphological reconstruction of the luminance image B, d and s are the direction and scale of the chosen linear structuring element, and Δs is the scale step of the linear structuring element, with s_min ≤ s ≤ s_max; because buildings are more diverse in scale and direction than other land-cover classes, the white top-hat differences are averaged over the directions and scales to form the bright-target index:
BTI = (1/(D·S)) · Σ_{d=1}^{D} Σ_{s=1}^{S} DMP_WTH(d, s)
where D and S are the number of directions and the number of scales of the structuring element in the differential morphological profile reconstruction;
the top 20% of the BTI values are taken as bright targets.
Further, the vegetation extraction algorithm of step (12) is as follows:
GBVI = G(x, y) − B(x, y)
where G(x, y) is the green-band brightness of pixel (x, y), B(x, y) is its blue-band brightness, and GBVI is the vegetation band-difference index; the GBVI result is then binarized with a threshold of 10, i.e. after computing GBVI, pixels with values below 10 are labeled 0 and pixels with values of 10 or more are labeled 1, giving the vegetation extraction result.
Further, the morphological processing of step (13) is a morphological closing; after the closing, the white connected regions in the image are also screened by size.
Further, during the screening, regions of 300-5000 pixels are labeled as potential ship target regions.
Further, the dynamic ship target extraction of step (2) proceeds as follows:
The frame count of the video is reduced by keeping the first consecutive frame of each second, forming a new video; the potential targets of each frame of the new video are then extracted by repeating steps (11)-(13); the number of potential ship targets extracted from each frame of the reduced video is recorded and compared, the frames whose target counts coincide and are smallest are found, and this count is determined to be the number of potential ship targets in the study area;
so that targets move a certain distance, a difference is computed between the two frames that have the selected minimum target count and are farthest apart in time, generating a new target-position map, and the centroids of all potential ship targets are computed;
the target centroids recorded in the new target-position map are compared with the target centroids of the earlier (in time series) of the two selected frames, and a distance threshold decides whether a target has displaced: if a centroid pair is less than 6 pixels apart, the target corresponding to that centroid is judged to be moving;
the two frames are then centroid-matched against each other, and further thresholds distinguish the motion states of the remaining targets: slow targets are judged first — if the minimum matched centroid distance is smaller than the diagonal of the target's connected-region bounding box and larger than 6 pixels, the target is judged to be moving; if the minimum distance is 6 pixels or less, the target is judged static; the remainder are noise points; at this point the motion states of all ship targets can be marked on the earlier frame;
the marked centroids are then searched and matched against the target centroids of the first frame of the reduced video: if the minimum centroid distance between the two frames is less than 300 pixels, the centroids are deemed the same target; in this way the ship motion states are marked on the first frame of the reduced video, which yields the final motion states of the ships on the first frame of the original video.
Further, the tracking algorithm of step (3) separates the target object from the background in the first frame using the joint RGB probability density function of the selected target object region and the joint RGB probability density function of its surrounding neighborhood;
the color of the chosen target object is then modeled from its color features, i.e. a pixel-color quantization that maps each pixel to a value in a quantized RGB color space; the color model separates the target object from the background in the other frames, while a Mean-Shift algorithm tracks the target position.
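The quantized color model and Mean-Shift tracking described above can be sketched as follows. This is a minimal NumPy-only illustration, not the patent's implementation: it assumes a coarse quantized-RGB histogram (4 bins per channel) as the color model, back-projects it onto a later frame, and runs plain Mean-Shift window iterations; the bin count, window handling, and iteration limit are all illustrative assumptions.

```python
import numpy as np

BINS = 4  # assumed quantization: 4 bins per RGB channel -> 64 quantized colors

def quantize(frame):
    # Map an 8-bit RGB frame (H, W, 3) to one quantized color index per pixel.
    q = frame.astype(np.uint16) * BINS // 256  # each channel in [0, BINS)
    return q[..., 0] * BINS * BINS + q[..., 1] * BINS + q[..., 2]

def color_model(frame, box):
    # Normalized histogram of quantized colors inside the target box (x, y, w, h).
    x, y, w, h = box
    idx = quantize(frame[y:y + h, x:x + w]).ravel()
    hist = np.bincount(idx, minlength=BINS ** 3).astype(float)
    return hist / hist.sum()

def mean_shift(frame, model, box, iters=20):
    # Back-project the model, then repeatedly shift the window to the
    # centroid of the back-projection weights until it stops moving.
    weights = model[quantize(frame)]
    x, y, w, h = box
    H, W = weights.shape
    for _ in range(iters):
        win = weights[y:y + h, x:x + w]
        if win.sum() == 0:
            break
        ys, xs = np.mgrid[0:h, 0:w]
        cx = int(round((win * xs).sum() / win.sum()))
        cy = int(round((win * ys).sum() / win.sum()))
        nx = min(max(x + cx - w // 2, 0), W - w)
        ny = min(max(y + cy - h // 2, 0), H - h)
        if (nx, ny) == (x, y):
            break
        x, y = nx, ny
    return (x, y, w, h)
```

In the patent's scheme the joint RGB densities of target and neighborhood separate foreground from background; here a single target histogram stands in for that separation.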
Through the above technical solution, the following beneficial technical effects can be achieved:
The present invention improves the inter-frame difference algorithm, so that the targets to be detected and their motion states can be identified with a small computational load and under background changes.
The satellite video ship monitoring method combining brightness and an improved frame-difference method was studied as follows: first, bright targets are extracted by differential morphological profile reconstruction, and potential ship targets are obtained through vegetation-index and morphological operations; moving ship targets are then extracted by the improved frame-difference operation between video frames and tracked with an adaptive color model. To validate the proposed model, experiments were carried out on International Space Station (ISS) video of the harbor area of Vancouver, Canada (49°17'N, 123°7'W) acquired on July 2, 2015, and on Jilin-1 satellite data of the port area of San Diego, USA (32°42'N, 117°10'W). The experimental results show that the ship navigation tracks extracted by the proposed ship-motion-trajectory model essentially coincide with visual interpretation.
Other features and advantages of the embodiments of the invention are described in detail in the specific embodiments below.
Detailed description of the invention
The drawings are provided for a further understanding of the embodiments of the invention and form part of the specification; together with the specific embodiments below they explain the embodiments of the invention without limiting them. In the drawings:
Fig. 1 is the flow chart of one embodiment of the invention;
Fig. 2 is the bright-target extraction result for the first frame of the International Space Station data in one embodiment of the invention;
Fig. 3 is the vegetation extraction result for the first frame of the International Space Station data in one embodiment of the invention;
Fig. 4 is the potential ship target, land, and ocean extraction result for the first frame of the International Space Station data in one embodiment of the invention;
Fig. 5 is the potential ship target extraction result for the first frame of the International Space Station data in one embodiment of the invention;
Fig. 6 is the ship motion-state discrimination result for the International Space Station data in one embodiment of the invention;
Fig. 7 is the ship motion-state discrimination result for the Jilin-1 data in one embodiment of the invention;
Fig. 8 is the quantized RGB color space in one embodiment of the invention;
Fig. 9 is the ship trajectory tracking result for the International Space Station data in one embodiment of the invention;
Fig. 10 is the ship trajectory tracking result for the Jilin-1 data in one embodiment of the invention;
Fig. 11 is an example diagram of the improved frame-difference method in one embodiment of the invention.
Specific embodiment
Specific embodiments of the invention are described in detail below with reference to the drawings. It should be understood that the specific embodiments described here only illustrate and explain the embodiments of the invention and are not intended to limit them.
In one embodiment of the invention the targets are ships. As shown in Fig. 1, targets are first extracted according to the spectral features of brightness and vegetation, potential ship targets are then extracted through operations such as morphological processing, the improved inter-frame difference algorithm determines the motion state of each ship, and finally an adaptive color model tracks the ship trajectories to obtain the ship motion-trajectory map.
Step 1: bright-target extraction by differential morphological profile reconstruction
Since buildings usually show brighter spectral responses than their neighborhood, bright targets are extracted with the differential morphological profile reconstruction algorithm: multi-directional, multi-scale morphological operations reveal the structural and spectral features of bright targets, the morphological operations are performed with a series of linear structuring elements, and differential morphological profile reconstruction is applied to the top-hat results. Because ships also show highlighted spectral characteristics, this method can extract the potential targets, from which the ship targets are later distinguished through the choice of morphological operations and thresholds. The key morphological operations used for building extraction in this experiment are summarized as follows:
(1) Reconstruction: reconstruction filters are important morphological filters, highly useful for image processing, because they introduce no discontinuities and therefore preserve the shapes in the input image.
(2) Granulometry: describes the sizes of targets in the image. Granulometry has been introduced into remote sensing urban-area image classification; this multi-scale morphological feature is built on structuring elements of ever-increasing size.
(3) Directionality: most existing morphological operations use disk-shaped structuring elements, but a disk carries no directional information, and direction is crucial for distinguishing objects with similar spectral features — for example, buildings and roads are spectrally close, yet buildings are comparatively isotropic while roads are anisotropic.
The specific algorithm employed in the present invention is as follows:
For each pixel of the multispectral image, the maximum value over the bands is taken as the luminance image:
B(x, y) = max_{1≤k≤K} band_k(x, y)
band1(x, y):
64 69 73 76 90
58 66 72 77 86
54 61 71 76 83
51 56 65 72 80
45 51 59 67 80

band2(x, y): (the band-2 sample values are not reproduced in the source text)

band3(x, y):
66 71 75 78 92
60 68 74 79 88
56 63 73 78 85
53 58 57 74 82
48 53 61 70 82

B(x, y):
66 71 75 78 92
60 68 74 79 88
56 63 73 78 85
53 58 67 74 82
48 53 61 70 82
where B(x, y) is the brightness value of pixel (x, y), band_k(x, y) is the spectral value of the pixel in band k, and K is the number of bands of the multispectral image.
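The band-maximum luminance image is a one-line reduction in array languages. A minimal sketch (NumPy is an assumption of this illustration, not part of the patent):

```python
import numpy as np

def brightness_image(bands):
    # bands: (K, H, W) stack of spectral bands; returns B with B[y, x] equal
    # to the maximum of the K band values at that pixel.
    return np.max(np.asarray(bands), axis=0)
```

Applied to the sample patches above, B(x, y) is simply the element-wise maximum of the band matrices.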
Differential morphological profile (DMP) reconstruction is applied to the white top-hat result of the luminance image:
DMP_WTH(d, s) = |MP_WTH(d, s+Δs) − MP_WTH(d, s)|
where MP_WTH(d, s) is the white top-hat of the opening-by-reconstruction of the luminance image B, d and s are the direction and scale of the chosen linear structuring element, and Δs is the scale step of the linear structuring element, with s_min ≤ s ≤ s_max. Because buildings are more diverse in scale and direction than other land-cover classes, the white top-hat differences over all directions and scales are averaged to form the bright-target index:
BTI = (1/(D·S)) · Σ_{d=1}^{D} Σ_{s=1}^{S} DMP_WTH(d, s)
BTI (sample values):
0.3333 0.8333 1.5000 2.1667 2.1667
0.1667 1.1667 2.0000 2.1667 2.0000
0 1.1667 2.6667 3.0000 3.1667
0 0.6667 4.0000 4.1667 3.8333
0 0 1.3333 1.1667 1.8333
where D and S are the number of directions and the number of scales of the structuring element in the differential morphological profile reconstruction. It has been found that increasing D does not improve building-extraction accuracy.
Some important parameter settings in the above computation:
d values: 0, 30, 60, 90, 120, 150 (degrees)
initial s values: 3, 7, 9, 11
Δs: 8
D: 6
S: 4
The top 20% of the BTI values are taken as bright targets, giving the result in Fig. 2, where the white areas are bright targets: pixels are labeled 1 for bright targets and 0 for non-bright targets. As can be seen from Fig. 2, the bright-target extraction result obtained with the existing algorithm is essentially the same as the result of visual interpretation.
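The white top-hat / DMP / BTI pipeline can be sketched with SciPy. This is a hedged illustration, not the patent's implementation: it rasterizes linear structuring elements for the directions and scales listed above, substitutes a plain opening-based white top-hat for the opening-by-reconstruction, and thresholds the top 20% of BTI values.

```python
import numpy as np
from scipy import ndimage

def linear_se(length, angle_deg):
    # Rasterize a linear structuring element of the given length/orientation.
    t = np.arange(length) - length // 2
    xs = np.round(t * np.cos(np.deg2rad(angle_deg))).astype(int)
    ys = np.round(t * np.sin(np.deg2rad(angle_deg))).astype(int)
    se = np.zeros((ys.max() - ys.min() + 1, xs.max() - xs.min() + 1), dtype=bool)
    se[ys - ys.min(), xs - xs.min()] = True
    return se

def bti(image, angles=(0, 30, 60, 90, 120, 150), scales=(3, 7, 9, 11), ds=8):
    # Average the absolute white top-hat differences (the DMP) over all
    # directions and scales to form the bright-target index.
    acc = np.zeros_like(image, dtype=float)
    n = 0
    for d in angles:
        for s in scales:
            th1 = ndimage.white_tophat(image, footprint=linear_se(s, d))
            th2 = ndimage.white_tophat(image, footprint=linear_se(s + ds, d))
            acc += np.abs(th2.astype(float) - th1.astype(float))
            n += 1
    return acc / n

def bright_mask(image, top=0.2):
    # Keep the top 20% of BTI values as bright targets.
    b = bti(image)
    return b >= np.quantile(b, 1 - top)
```

The plain top-hat is a common stand-in; an opening-by-reconstruction top-hat (as in the patent) would preserve the retained structures more faithfully.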
Step 2: vegetation extraction based on a vegetation-index feature
According to existing research, extraction based on the spectral features of vegetation works well for vegetation areas; however, this model processes true-color video satellite data whose brightness is dark — the color of vegetation in particular is dark, so its texture features are also not obvious — and the traditional extraction based on vegetation spectra or texture responds poorly to this video data. Given this characteristic of the video satellite data, a relatively simple vegetation extraction is used instead: within visible light, the green band and the blue band of vegetation differ considerably, and because of the band characteristics of visible light only land-cover classes that appear green have this property; for the features of the study area, this band-difference algorithm proved feasible for vegetation extraction:
GBVI = G(x, y) − B(x, y)
where G(x, y) is the green-band brightness of pixel (x, y), B(x, y) is its blue-band brightness, and GBVI is the vegetation band-difference index defined here (negative differences are set to 0, as in the sample values below).
G(x, y):
114 108 120 132 136
147 161 155 146 125
211 211 198 159 122
228 219 199 155 111
229 207 170 126 87

B(x, y):
115 109 121 133 137
148 162 156 147 126
200 200 187 149 116
218 208 188 144 105
217 194 157 113 79

GBVI:
0 0 0 0 0
0 0 0 0 0
11 11 11 10 6
10 11 11 11 6
12 13 13 13 8
The GBVI result is then binarized with a threshold of 10: after computing GBVI, pixels with values below 10 are labeled 0 and pixels with values of 10 or more are labeled 1, reducing noise interference. The result is shown in Fig. 3, where the white areas are vegetation and the black areas are non-vegetation.
Binarized GBVI:
0 0 0 0 0
0 0 0 0 0
1 1 1 1 0
1 1 1 1 0
1 1 1 1 0
Pixels labeled 1 are vegetation areas; pixels labeled 0 are non-vegetation areas.
As can be seen from Fig. 3, the vegetation extraction result obtained with the existing algorithm is essentially the same as the result of visual interpretation.
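The GBVI computation and thresholding reduce to a few NumPy lines. The only assumption in this sketch is that negative green-minus-blue differences are floored at zero, which is what the sample matrices above show:

```python
import numpy as np

def gbvi_mask(G, B, threshold=10):
    # Vegetation band-difference index: green minus blue, floored at 0,
    # then binarized at the given threshold (1 = vegetation).
    gbvi = np.clip(G.astype(int) - B.astype(int), 0, None)
    return gbvi, (gbvi >= threshold).astype(int)
```

Run on the 5×5 sample patch above, this reproduces the GBVI values and the binarized map.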
Step 3: obtaining potential ship targets
Through the two steps above, the two results for the same frame are superimposed, and a morphological closing is performed with a circular structuring element of radius 25, giving the result in Fig. 4, where the smaller white connected regions are potential ship targets, the larger white connected region is land, and the larger black connected region is ocean.
The white connected regions are then screened by size: regions of 300-5000 pixels are labeled as potential ship regions, marked as the white areas in Fig. 5.
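The size-screening step can be sketched with scipy.ndimage; the 300-5000 pixel bounds come from the text, while the array sizes in the test are illustrative:

```python
import numpy as np
from scipy import ndimage

def screen_candidates(binary, min_area=300, max_area=5000):
    # Label the white connected regions and keep only those whose pixel
    # count lies in [min_area, max_area] -- the potential ship targets.
    labels, n = ndimage.label(binary)
    if n == 0:
        return np.zeros_like(binary, dtype=bool)
    areas = ndimage.sum_labels(binary.astype(float), labels,
                               index=np.arange(1, n + 1))
    keep = [i + 1 for i, a in enumerate(areas) if min_area <= a <= max_area]
    return np.isin(labels, keep)
```

The preceding closing would be a `ndimage.binary_closing` with a disk footprint; it is omitted here for brevity.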
Step 4: determining moving ships with the improved inter-frame difference algorithm
After bright targets and vegetation are extracted, processing such as threshold selection and masking locates ship centroids accurately, but sailing ships still cannot be distinguished from static ships at this point. Following existing research, the present invention determines ship motion states with an improved frame-difference method. Because the temporal resolution of the video satellite data used is high and ship speeds are low, differencing adjacent frames leaves ship displacements between the two frames inconspicuous — the difference may even be dominated by sensor noise during staring imaging — all of which hurts accuracy. For these reasons, the invention reduces the frame count of the video by keeping the first frame of each second, creating a fixed time interval and forming a new "continuous" video. Potential ship targets are then extracted frame by frame from the new video through steps 1-3 above. Since the number of ship targets is not known in advance and the per-frame extractions may contain noise points, noise interference is reduced as follows: the ship count extracted from every frame of the reduced video is recorded and compared, the frames whose counts coincide and are smallest are found, and this count is determined to be the number of potential ship targets in the study area.
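The frame-rate reduction and the count-voting rule can be sketched as follows; the `fps` parameter and the tie-break toward the smaller count are one plausible reading of the text, not something the patent pins down:

```python
from collections import Counter

def downsample(frames, fps):
    # Keep the first frame of each second of video.
    return frames[::fps]

def reference_count(counts_per_frame):
    # Vote over the per-frame potential-ship counts: take the count that
    # recurs most often, breaking ties toward the smaller count (noise adds
    # spurious targets, so the smaller recurring count is the safer estimate).
    tally = Counter(counts_per_frame)
    count, _ = max(tally.items(), key=lambda kv: (kv[1], -kv[0]))
    return count
```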
So that ships move as large a distance as possible, the two frames with the screened minimum target count that are farthest apart in time are differenced, generating a new ship-position map, and the centroids of all potential ship regions are computed. The ship centroids recorded in the newly generated label map are compared with the ship centroids of the earlier (in time series) of the two selected frames with identical ship counts, the centroids are matched, and a distance threshold decides whether a ship has displaced: if a centroid pair is less than 6 pixels apart, the ship corresponding to that centroid is judged to be moving. This operation can only decide the motion states of ships that are completely separated between the two frames.
The two frames are then centroid-matched against each other, and further thresholds distinguish the motion states of the remaining ships. Slow ships are judged first: if the minimum matched centroid distance is smaller than the diagonal of the target's connected-region bounding box and larger than 6 pixels, the ship is judged to be moving; if the minimum distance is 6 pixels or less, the ship is judged static; the remainder are noise points. At this point the motion states of all ship targets can be marked on the earlier frame.
The frame on which the motion states were marked is not necessarily the first frame of the original video, so the marked centroids are searched and matched against the ship centroids of the first frame of the reduced video: if the minimum centroid distance between the two frames is less than 300 pixels, the centroids are deemed the same ship, and the ship motion states are marked on the first frame of the reduced video. Because the reduction keeps the first frame of each second, this first frame is also the first frame of the original data, which yields the final motion states of the ships on the first frame of the original video. The final discrimination results on the verification data are shown in Figs. 6-7, where ships labeled 1 are judged to be moving and ships labeled 2 are static.
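The centroid-matching rule for fully separated targets can be sketched as below. This is a simplified NumPy/SciPy-only reading of the scheme: it differences two binary candidate masks, matches each first-frame centroid against the difference-image centroids, and applies the 6-pixel rule; the slow-target (bounding-box diagonal) and 300-pixel first-frame rules are omitted for brevity.

```python
import numpy as np
from scipy import ndimage

def centroids(mask):
    # Centroids of the connected regions of a binary mask, in label order.
    labels, n = ndimage.label(mask)
    return np.array(ndimage.center_of_mass(mask, labels, range(1, n + 1)))

def nearest_dist(p, points):
    if len(points) == 0:
        return np.inf
    return np.min(np.linalg.norm(points - p, axis=1))

def motion_states(first, last, moved_px=6):
    # The difference image holds regions present in only one of the frames;
    # a static ship cancels out, a moved ship leaves its old footprint.
    diff = np.logical_xor(first, last)
    diff_c = centroids(diff) if diff.any() else np.empty((0, 2))
    states = []
    for p in centroids(first):
        # Re-finding the centroid (within 6 px) in the difference image
        # means the ship vacated its position -> it is moving.
        states.append("moving" if nearest_dist(p, diff_c) <= moved_px
                      else "static")
    return states
```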
As shown in figure 11, (to represent corresponding three labeled as 1,2,3 for extracting the ship of movement in 3 ships Ship), in order to make target move a certain distance, the two field pictures farthest to the time gap selected, i.e. TminFrame shadow As (figure a) and TmaxFrame image (figure b), carries out Difference Calculation, generates new target position map, i.e. DvalueImage (figure c), and Calculate all potential target mass centers;
According to the new target location D of generationvalueThe target centroid that figure (figure c) is recorded, most with the time gap that selects The preceding T of time series in remote and two field pictures with same target numberminTarget centroid comparison in frame image (figure a), Determine whether target is subjected to displacement by the way that a certain distance threshold value is arranged, if less than 6 pixels of the Euclidean distance of mass center, Then assert that target corresponding to the mass center is movement;Such as the ship for being is marked in figure a, by TminFrame image (figure a) and TmaxAfter frame image (figure b) difference, it can completely be present in DvalueIn figure (figure c), so the mass center of ship 1 is on two figures It does not move substantially, then TminThe mass center of ship 1 in frame image (figure a), with DvalueThe mass center point of ship in figure (figure c) It is not matched, exists and meet connected region of the centroid distance less than 6 pixel requests, then ship 1 is judged as the ship of movement.
Then again by TminFrame image and TmaxFrame image carries out mass center matching, surplus to distinguish by the way that certain threshold value is arranged The motion state of lower target: first determining whether the slower target of movement velocity, if the matched minimum range of mass center is less than the connection The cornerwise length of regional aim frame, and be greater than 6 pixels, then it is judged as the target of movement;For example, TminIn frame image (figure a) Labeled as 2 ship, due to movement velocity is slower etc., process and TmaxIt, can not be with T after frame image (figure b) carries out differencemax Frame image (figure b) is distinguished labeled as 2 ship, can be in DvalueFigure (figure c) will form adhesion region, so that one can be generated newly Mass center, in TminFrame image (figure a) and DvalueCorresponding mass center is difficult to match in figure (figure c), so to TminFrame image and TmaxFrame Mass center in image is matched, TminFrame image (figure a) is labeled as 2 ship mass center and TmaxFrame image (figure b) is labeled as 2 Euclidean distance between ship mass center, which meets, is less than TminThe ship connected region that label is in frame image (figure a) is minimum outer It connects the catercorner length of rectangle and is greater than 6 pixels, so ship 2 is the ship of movement.
If a ship is static, its centroid position is essentially unchanged between the Tmin and Tmax frame images. To improve identification accuracy, a target whose minimum centroid distance between the Tmin and Tmax frame images is at most 6 pixels is judged to be static; the remaining targets are noise points. For example, the ship marked 3 in the Tmin frame image (figure a) is static, so a similar vessel region appears at the same position in the Tmax frame image (figure b); after differencing, morphological operations, and connected-region size screening, no corresponding region remains in Dvalue (figure c). The centroid of ship 3 in the Tmin frame image (figure a) is therefore matched against the centroids in the Tmax frame image (figure b), and computing the Euclidean distances shows that the distance between the centroid of ship 3 in the Tmin frame image (figure a) and that of ship 3 in the Tmax frame image (figure b) is less than 6 pixels, so ship 3 is judged to be a static ship.
At this point, the motion states of all targets can be marked on the Tmin frame image.
Because the Tmin frame image is not necessarily the first frame after dimensionality reduction, the centroids of the connected regions in the Tmin frame image are searched and matched against those in the first frame T1 of the reduced video. The Euclidean distances between connected-region centroids in the Tmin and T1 frame images are computed; if the minimum distance is less than 300 pixels, the two regions are regarded as the same target, and in this way the target motion states are marked in the first frame T1 of the reduced video. Since dimensionality reduction keeps the first frame image of every second, the first frame after reduction is also the first frame of the original data, which yields the final motion states of the targets in the first frame of the original video.
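A minimal sketch of these interval-frame rules, assuming per-target centroid matches have already been found; the function and variable names are illustrative, not from the patent:

```python
import math

MOVE_THRESH = 6  # pixels: the centroid-distance threshold stated above

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def classify_target(c_tmin, c_tmax, c_diff, box_diag):
    """Classify one target from its centroid in the Tmin frame, the nearest
    matching centroid in the Tmax frame, the nearest centroid in the
    difference map Dvalue (or None), and its bounding-box diagonal in Tmin."""
    # Fast mover: it survives differencing intact, so a Dvalue centroid
    # lies within 6 pixels of the Tmin centroid.
    if c_diff is not None and dist(c_tmin, c_diff) < MOVE_THRESH:
        return "moving"
    d = dist(c_tmin, c_tmax)
    # Slow mover: Tmin/Tmax centroids differ by more than 6 pixels but by
    # less than the box diagonal (the regions stick together in Dvalue).
    if MOVE_THRESH < d < box_diag:
        return "moving"
    # Static ship: centroids essentially coincide across the interval.
    if d <= MOVE_THRESH:
        return "static"
    return "noise"

print(classify_target((100, 100), (130, 100), (101, 100), 50))  # fast ship
print(classify_target((100, 100), (110, 100), None, 50))        # slow ship
print(classify_target((100, 100), (102, 101), None, 50))        # static ship
```

The 300-pixel threshold used when carrying the labels back to frame T1 would be applied in a separate, coarser nearest-centroid match.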
The ship motion states judged by the present invention were compared with those obtained by visual interpretation of the video data. For the International Space Station (ISS) video of the Vancouver, Canada harbor area (49°17'N, 123°7'W) of July 2, 2015, the potential ship targets and ship motion states were all judged correctly. For the Jilin-1 satellite data of the San Diego, USA port area (32°42'N, 117°10'W), two land areas were misjudged as a dynamic and a static ship respectively, and one dynamic ship was misjudged as a static ship. The judgment results and precision statistics are given in the tables below.
Statistics of ship motion state results
Precision statistics of potential ship target judgment
Statistics of moving ship target judgment
Accuracy = TP / (TP + FP) (1)
Recall = TP / (TP + FN) (2)
Precision = TP / (TP + FP + FN) (3)
TP: targets correctly detected by the algorithm
FN: targets missed by the algorithm
FP: false targets detected by the algorithm
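The three measures defined by equations (1)-(3) can be computed directly from the counts; equation (3) is the measure often called "quality" in remote-sensing evaluation. The counts below are illustrative only, not the patent's experimental results:

```python
def accuracy(tp, fp):       # eq. (1): TP / (TP + FP)
    return tp / (tp + fp)

def recall(tp, fn):         # eq. (2): TP / (TP + FN)
    return tp / (tp + fn)

def precision(tp, fp, fn):  # eq. (3): TP / (TP + FP + FN)
    return tp / (tp + fp + fn)

tp, fp, fn = 18, 2, 0       # illustrative counts only
print(accuracy(tp, fp), recall(tp, fn), precision(tp, fp, fn))
```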
The fifth step: ship track tracking based on an adaptive color model
The adaptive-color-model ship tracking algorithm uses the joint probability density function of the RGB values of the selected object region together with that of the neighborhood surrounding the target object to separate the object from the background in the first frame. A color model of the target object is then built from the color features of the chosen target object, and this object color model separates the target object from the background in the other frames while the Mean-Shift algorithm tracks the object position; an adaptive color model is developed to cope with the changes in object appearance that arise during tracking. The specific algorithm is as follows:
5.1 Object Selection
Initially, the user manually selects the object of interest by drawing a rectangle in the data region of interest. For accurate detection of the target object, an outer rectangle is chosen around the object rectangle, drawn over the background near the target object, such that the number of background pixels in the surrounding band is roughly the same as the number of pixels in the object rectangle. Equation (5-1) defines the width of the region surrounding the object rectangle:
where w and h are the width and height of the object window, and d is the width of the region surrounding the object rectangle.
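Equation (5-1) itself is not reproduced in this text (it appears as an image in the source). One reconstruction consistent with the stated goal — the surrounding band should contain roughly as many pixels as the object rectangle — solves (w + 2d)(h + 2d) − wh = wh for d; this is an assumption, not the patent's verbatim formula:

```python
import math

def surround_width(w, h):
    # Positive root of 4*d**2 + 2*(w + h)*d - w*h = 0, i.e. the band width
    # that makes the surrounding-band area equal the object-rectangle area.
    return (-(w + h) + math.sqrt((w + h) ** 2 + 4 * w * h)) / 4

w, h = 40, 20
d = surround_width(w, h)
outer_area = (w + 2 * d) * (h + 2 * d)
print(round(outer_area - w * h), w * h)  # band area equals object area
```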
5.2 Feature extraction
In the tracker described here, the features used to model the object are quantized pixel colors, corresponding to values in a quantized RGB color space. These quantized features are extracted for the object pixels and for the surrounding background pixels. Figure 8 shows the quantized R, G, B color space. Since the target ships appear bright in the image and contrast strongly with the darker sea background, 4-bit coding is chosen for each of the R, G, and B channels, so that the total histogram size of the selected model is 16 × 16 × 16 = 4096; this reduction of color depth and histogram size improves computational efficiency and reduces dimensionality. To represent the target appearance, the quantized R, G, B pixel values of the separated object from the next subsection are used.
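The 4-bit-per-channel quantization can be sketched as follows: each 8-bit R, G, B value is reduced to one of 16 levels, giving a single bin index in a 16 × 16 × 16 = 4096-bin joint histogram (names and sample pixels are illustrative):

```python
def quantize_bin(r, g, b, bits=4):
    shift = 8 - bits        # drop the low 4 bits of each 8-bit channel
    levels = 1 << bits      # 16 levels per channel
    return ((r >> shift) * levels + (g >> shift)) * levels + (b >> shift)

def rgb_histogram(pixels):
    hist = [0] * (16 ** 3)  # 4096 bins
    for r, g, b in pixels:
        hist[quantize_bin(r, g, b)] += 1
    return hist

bright_ship = [(250, 250, 240), (255, 255, 255)]  # illustrative pixel values
dark_sea = [(10, 20, 40)]
h = rgb_histogram(bright_ship + dark_sea)
print(len(h), sum(h))  # 4096 bins, 3 pixels counted
```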
5.3 Object-background separation
The object-background separation method is used to detect object pixels. The quantized R, G, B histogram of the region inside the inner rectangle is used to obtain the joint probability density function (pdf) of the quantized RGB values of the object region, and the quantized R, G, B histogram of the region between the outer and inner rectangles is used to obtain the joint pdf of the surrounding background. The log-likelihood ratio (LLR) of the object region to the surrounding background region determines the object pixels. The log-likelihood of a pixel inside the object's bounding rectangle is obtained as follows:
where Ho(i) is the histogram of the feature values of the pixels in the target object rectangle, Hb(i) is the histogram of the pixels in the surrounding region, and the parameter i ranges over the histogram elements from 1 to 4096. ε is a small nonzero value that avoids division by zero and the logarithm of zero; to avoid numerical instability, ε is set to 0.01 here.
When the log-likelihood function of the previous step is applied to test the object pixels, the result is a map over the image positions in which object colors take positive values, background colors take negative values, and colors shared by object and background tend to zero. The binary mask of the object is obtained as:
where τ0 is the threshold that selects the most reliable object pixels; here τ0 is set to 0.8. Once the target object region has been selected in the first frame, formulas (5-2) and (5-3) are applied in order to obtain the log-likelihood map and the binary mask of the object.
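Equations (5-2) and (5-3) appear as images in the source; the usual LLR form consistent with the surrounding text — L(i) = log(max(Ho(i), ε) / max(Hb(i), ε)), masked at τ0 — can be sketched as follows (a reconstruction under that assumption, not the patent's verbatim formulas):

```python
import math

EPS = 0.01   # ε in eq. (5-2): avoids division by zero and log of zero
TAU0 = 0.8   # τ0 in eq. (5-3): keeps only the most reliable object pixels

def llr(ho, hb):
    # Per-bin log-likelihood ratio of object vs. background histograms.
    return [math.log(max(o, EPS) / max(b, EPS)) for o, b in zip(ho, hb)]

def binary_mask(llr_values, tau=TAU0):
    return [1 if v > tau else 0 for v in llr_values]

# Three toy bins: object-dominated, background-dominated, shared color.
ho = [0.9, 0.0, 0.5]   # normalized object histogram
hb = [0.0, 0.9, 0.5]   # normalized background histogram
vals = llr(ho, hb)
print([round(v, 2) for v in vals], binary_mask(vals))
```

As the text describes, object-dominated bins score positive, background bins negative, and shared colors score near zero and are excluded from the mask.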
5.4 Object color modeling and update
In the first frame, the object color model is developed automatically from the quantized RGB values of the separated object obtained by equation (5-3). The quantized color space tolerates minor changes in target object color and illumination, but it breaks down when the object color or scene lighting varies widely. A reliable tracker therefore has to update its object color model. Because modeling the object color in every frame is computationally complex and time-consuming, a criterion is defined here to identify the frames in which color adaptation is needed. Let S0 be the average RGB color of the separated object pixels; a change in S0 indicates that color adaptation is needed. After object detection in each input frame, the average RGB color of the separated object pixels is computed; when the deviation of S0 in the current frame from that of the last frame exceeds 0.05 × 256 (the deviation threshold is set to 15 here), the object color is considered to be changing and color adaptation is performed.
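The update criterion can be sketched as follows; how the per-channel deviations are combined is not specified in the text, so the per-channel comparison below is an assumption:

```python
DRIFT_THRESH = 15  # the deviation threshold stated above (0.05 * 256, taken as 15)

def mean_rgb(pixels):
    # S0: average R, G, B color of the separated object pixels.
    n = len(pixels)
    return tuple(sum(p[c] for p in pixels) / n for c in range(3))

def needs_update(prev_mean, pixels, thresh=DRIFT_THRESH):
    # Re-model the object color only when the mean color drifts past the threshold.
    cur = mean_rgb(pixels)
    return any(abs(c - p) > thresh for c, p in zip(cur, prev_mean))

prev = (200.0, 200.0, 190.0)
print(needs_update(prev, [(205, 201, 188), (199, 198, 195)]))  # small drift
print(needs_update(prev, [(120, 130, 110)]))                   # large drift
```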
5.5 Object localization
Object localization starts from the centroid of the binary object detected in the previously tracked frame. To find the target object pixels, features are extracted from the object rectangle and tested against the object color model. The Mean-Shift algorithm is used to track the target object; the main idea behind Mean-Shift tracking is to treat the points in the space as samples of a probability density function, in which the densest regions correspond to local maxima, i.e. the target object position. The displacement of the object is given by the displacement of the centroid of its pixels, and in each iteration the center of the target object rectangle is moved to the centroid of the detected binary object. The target object rectangle is shifted and tested iteratively until the object lies completely inside the rectangle (mean-shift convergence). Using equations (5-4) and (5-5), the object centroid is relocated in each iteration:
where xi and yi are the positions, in video frame image coordinates, of each object pixel detected within the object rectangle; Xnew and Ynew are the centroid of the target object relocated in each iteration; and n is the number of detected object pixels. During testing, a centroid motion of less than 5 pixels is considered complete convergence.
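Given these definitions, equations (5-4)/(5-5) reduce to the centroid of the detected object pixels, and the iteration above can be sketched as follows; `detect_fn` is a hypothetical stand-in for the color-model pixel test, and the pixel coordinates are illustrative:

```python
import math

CONV = 5  # pixels: the convergence criterion stated above

def centroid(pixels):
    # Eqs. (5-4)/(5-5): mean x and mean y of the n detected object pixels.
    n = len(pixels)
    return (sum(x for x, _ in pixels) / n, sum(y for _, y in pixels) / n)

def mean_shift(center, detect_fn, max_iter=20):
    """detect_fn(center) returns the object pixels found inside the rectangle
    currently centered at `center` (a stand-in for the color-model test)."""
    for _ in range(max_iter):
        cx, cy = centroid(detect_fn(center))
        if math.hypot(cx - center[0], cy - center[1]) < CONV:
            return (cx, cy)
        center = (cx, cy)  # move the rectangle to the detected centroid
    return center

# Toy scene: the ship's pixels cluster around (60, 40), so the iteration
# converges there regardless of where the search rectangle starts.
ship_pixels = [(58, 39), (60, 40), (62, 41), (60, 41)]
result = mean_shift((30, 30), lambda c: ship_pixels)
print(tuple(round(v) for v in result))
```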
Since the algorithm cannot automatically select the ship target regions in the first frame image, the vessel regions extracted from the first frame of the video data through steps 1 to 4 above are used as the initial target regions of the moving ships. The adaptive-color-model algorithm then tracks each moving ship, yielding the centroid of each ship in every frame of the video data; connecting the centroids of the same ship forms its track. The ship track tracking results are shown in Figures 9 and 10.
Comparing the ship tracks automatically detected by the present invention in Figures 9 and 10 with the visual interpretation of the video data shows that the ship tracks obtained by the invention are identical to the actual ship tracks.
Information extraction is an important link in the application of remote sensing data: remote sensing mapping, disaster emergency response, urban planning, change detection, military security, and other fields all depend on the extraction of information from remotely sensed data. With the development of remote sensing technology, video satellite technology has gradually matured, and video satellite data are slowly finding their way into research and production. The present invention proposes a new application direction for video satellite data: potential targets are extracted by differential morphological profile reconstruction, the motion state of each target is then determined by an improved frame difference method, and finally the dynamic ship targets are tracked by an adaptive color model.
The present invention was applied to International Space Station (ISS) video data of the Vancouver, Canada harbor area (49°17'N, 123°7'W) of July 2, 2015 and to Jilin-1 satellite data of the San Diego, USA port area (32°42'N, 117°10'W) for potential dynamic ship acquisition and ship track extraction. The results show that the satellite video ship tracks extracted by the invention essentially coincide with the ship motion tracks obtained by visual interpretation, demonstrating the feasibility of the invention.
The optional embodiments of the present invention have been described in detail above with reference to the accompanying drawings. However, the embodiments of the present invention are not limited to the details of the above embodiments; within the scope of the technical concept of the embodiments of the present invention, many simple variants of the technical solution are possible, and these simple variants all fall within the protection scope of the embodiments of the present invention.
It should further be noted that the specific technical features described in the above embodiments may, where not contradictory, be combined in any suitable manner. To avoid unnecessary repetition, the embodiments of the present invention do not describe the possible combinations separately.
In addition, the various embodiments of the present invention may also be combined in any way; as long as such combinations do not depart from the idea of the embodiments of the present invention, they should likewise be regarded as part of this disclosure.

Claims (7)

1. A satellite video ship monitoring method combining brightness and an improved frame difference method, characterized in that it comprises the following steps:
(1) acquisition of potential targets from a single satellite video frame: on the basis of bright-target extraction from the video frame by differential morphological profile reconstruction, removing vegetation noise to obtain the potential ship targets in the video frame;
(11) extracting bright targets from the satellite video image based on differential morphological profile reconstruction;
(12) extracting vegetation from the satellite video image using a vegetation index;
(13) superimposing the extraction results of steps (11) and (12) and obtaining the potential ship targets from the satellite video image through morphological processing of the superimposed data;
(2) discrimination of ship motion state across interval frames: performing difference operations between different video frames by an improved frame difference method to extract the dynamic targets from the potential targets;
(3) satellite video dynamic ship track tracking: tracking the dynamic targets using an adaptive color model.
2. The satellite video ship monitoring method combining brightness and an improved frame difference method according to claim 1, characterized in that the algorithm for extracting bright targets in step (11) is as follows:
the maximum value of each pixel over the different bands of the multispectral image is extracted and taken as the brightness image of the image:
B(x, y) = max1≤k≤K(bandk(x, y))
where B(x, y) is the brightness value of pixel (x, y), bandk(x, y) is the spectral value of the pixel in the k-th band, and K is the total number of bands of the multispectral image;
differential morphological profile reconstruction is performed on the result of the white top-hat transformation of the brightness image:
DMPW_TH(d, s) = |MPW_TH(d, (s+Δs)) − MPW_TH(d, s)|
where MPW_TH(d, s) denotes the morphological reconstruction operation on the brightness image B, d and s are the direction and scale of the selected linear structuring element, and Δs is the scale increment step, satisfying smin ≤ s ≤ smax; since bright man-made targets are more diverse in scale and direction than other land-cover classes, the average over the different scales and directions of the differential morphological profiles of the white top-hat results is taken as the bright-target index:
where D and S are the number of directions and the scale parameter of the structuring element in the differential morphological profile reconstruction; the top 20% of the BTI result is taken as the bright targets.
3. The satellite video ship monitoring method combining brightness and an improved frame difference method according to claim 2, characterized in that the algorithm for extracting vegetation in step (12) is as follows:
GBVI = G(x, y) − B(x, y);
the GBVI result is then binarized with a threshold of 10: after the GBVI calculation, pixel values less than 10 are labeled 0 and values greater than or equal to 10 are labeled 1, yielding the vegetation extraction result;
where G(x, y) is the brightness value of the green band of pixel (x, y), B(x, y) is the brightness value of the blue band of pixel (x, y), and GBVI is the vegetation band-difference index.
4. The satellite video ship monitoring method combining brightness and an improved frame difference method according to claim 3, characterized in that the morphological processing in step (13) is a morphological closing operation, and that the sizes of the white connected regions in the image after the closing operation are also screened.
5. The satellite video ship monitoring method combining brightness and an improved frame difference method according to claim 4, characterized in that, during screening, regions of 300-5000 pixels are labeled as potential ship target regions.
6. The satellite video ship monitoring method combining brightness and an improved frame difference method according to claim 4 or 5, characterized in that the process of extracting dynamic ship targets in step (2) is as follows:
dimensionality reduction is applied to the frame sequence of the video by keeping the first frame image of every second, forming a new continuous video; potential targets are then extracted frame by frame from the newly formed video by repeating steps (11)-(13); the number of potential ship targets extracted from each frame of the reduced video is recorded and compared, the frame images that share the same, smallest target number are found, and this number is determined to be the number of potential ship targets in the study region;
in order to let the targets move a certain distance, a difference calculation is performed on the two selected frame images with the minimum target number and the largest time gap, a new target position map is generated, and the centroids of all potential ship targets are computed;
the target centroids recorded in the newly generated target position map are compared with the target centroids in the earlier (in the time series) of the two selected frames that share the same target number and have the largest time gap; a distance threshold determines whether a target has been displaced: if the centroid distance is less than 6 pixels, the target corresponding to that centroid is judged to be moving;
the two frame images are then matched by centroid, and further thresholds distinguish the motion states of the remaining targets: slow-moving targets are judged first, and if the minimum matched centroid distance is smaller than the diagonal of the connected-region target box but larger than 6 pixels, the target is judged to be moving; if the minimum distance is at most 6 pixels, the target is judged to be static, and the remaining targets are noise points; at this point the motion states of all ship targets can be marked on the earlier frame image;
the centroids are then searched and matched against those of the first frame of the reduced video; the distances between centroids in the two frame images are computed, and if the minimum distance is less than 300 pixels, the regions are regarded as the same target; in this way the ship target motion states are marked in the first frame image of the reduced video, yielding the final motion states of the ship targets in the first frame of the original video.
7. The satellite video ship monitoring method combining brightness and an improved frame difference method according to claim 6, characterized in that the tracking algorithm in step (3) uses the joint probability density function of the RGB values of the selected target object region and that of the neighborhood surrounding the target object region to separate the target object from the background in the first frame;
a color model of the target object is then built from its color features, i.e. quantized pixel-color features corresponding to values in a quantized RGB color space; the object color model then separates the target object from the background in the other frames, while the Mean-Shift algorithm tracks the position of the target.
CN201811324612.0A 2018-11-08 2018-11-08 Satellite video ship monitoring method combining brightness characteristics and improved interframe difference method Active CN109460764B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811324612.0A CN109460764B (en) 2018-11-08 2018-11-08 Satellite video ship monitoring method combining brightness characteristics and improved interframe difference method


Publications (2)

Publication Number Publication Date
CN109460764A true CN109460764A (en) 2019-03-12
CN109460764B CN109460764B (en) 2022-02-18

Family

ID=65609721

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811324612.0A Active CN109460764B (en) 2018-11-08 2018-11-08 Satellite video ship monitoring method combining brightness characteristics and improved interframe difference method

Country Status (1)

Country Link
CN (1) CN109460764B (en)


Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090093959A1 (en) * 2007-10-04 2009-04-09 Trimble Navigation Limited Real-time high accuracy position and orientation system
CN102081801A (en) * 2011-01-26 2011-06-01 上海交通大学 Multi-feature adaptive fused ship tracking and track detecting method
CN103839267A (en) * 2014-02-27 2014-06-04 西安科技大学 Building extracting method based on morphological building indexes
CN103971127A (en) * 2014-05-16 2014-08-06 华中科技大学 Forward-looking radar imaging sea-surface target key point detection and recognition method
CN104463914A (en) * 2014-12-25 2015-03-25 天津工业大学 Improved Camshift target tracking method
CN104751478A (en) * 2015-04-20 2015-07-01 武汉大学 Object-oriented building change detection method based on multi-feature fusion
CN105096338A (en) * 2014-12-30 2015-11-25 天津航天中为数据系统科技有限公司 Moving object extracting method and device
CN105608458A (en) * 2015-10-20 2016-05-25 武汉大学 High-resolution remote sensing image building extraction method
CN106650663A (en) * 2016-12-21 2017-05-10 中南大学 Building true/false change judgement method and false change removal method comprising building true/false change judgement method
CN107092890A (en) * 2017-04-24 2017-08-25 山东工商学院 Naval vessel detection and tracking based on infrared video
CN107465877A (en) * 2014-11-20 2017-12-12 广东欧珀移动通信有限公司 Track focusing method and device and related media production
CN107609534A (en) * 2017-09-28 2018-01-19 北京市遥感信息研究所 An automatic testing method of mooring a boat is stayed in a kind of remote sensing based on harbour spectral information
CN108052859A (en) * 2017-10-31 2018-05-18 深圳大学 A kind of anomaly detection method, system and device based on cluster Optical-flow Feature
US20180286052A1 (en) * 2017-03-30 2018-10-04 4DM Inc. Object motion mapping using panchromatic and multispectral imagery from single pass electro-optical satellite imaging sensors


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
YUQI TANG et al.: "Fault-Tolerant Building Change Detection From Urban High-Resolution Remote Sensing Imagery", IEEE Geoscience and Remote Sensing Letters *
TANG Yuqi: "Research on object-oriented multi-feature change detection of cities in high-resolution imagery", China Doctoral Dissertations Full-text Database, Information Science and Technology *
QIN Yuping et al.: "Research on difference-image target detection algorithms based on mathematical morphology", Ship Electronic Engineering *
WEI Jie: "Research on ship anchor-dragging recognition based on video image analysis", Wanfang Data thesis platform *

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110111364A (en) * 2019-04-30 2019-08-09 腾讯科技(深圳)有限公司 Method for testing motion, device, electronic equipment and storage medium
CN110111364B (en) * 2019-04-30 2022-12-27 腾讯科技(深圳)有限公司 Motion detection method and device, electronic equipment and storage medium
CN110702869A (en) * 2019-11-01 2020-01-17 无锡中科水质环境技术有限公司 Fish stress avoidance behavior water quality monitoring method based on video image analysis
CN111387966A (en) * 2020-03-20 2020-07-10 中国科学院深圳先进技术研究院 Signal wave reconstruction method and heart rate variability information detection device
CN111553928B (en) * 2020-04-10 2023-10-31 中国资源卫星应用中心 Urban road high-resolution remote sensing self-adaptive extraction method assisted with Openstreetmap information
CN111553928A (en) * 2020-04-10 2020-08-18 中国资源卫星应用中心 Urban road high-resolution remote sensing self-adaptive extraction method assisted by Openstreetmap information
CN111739059A (en) * 2020-06-20 2020-10-02 马鞍山职业技术学院 Moving object detection method and track tracking method based on frame difference method
CN112270661A (en) * 2020-10-19 2021-01-26 北京宇航系统工程研究所 Space environment monitoring method based on rocket telemetry video
CN112270661B (en) * 2020-10-19 2024-05-07 北京宇航系统工程研究所 Rocket telemetry video-based space environment monitoring method
CN112489055A (en) * 2020-11-30 2021-03-12 中南大学 Satellite video dynamic vehicle target extraction method fusing brightness-time sequence characteristics
CN115294486A (en) * 2022-10-08 2022-11-04 彼图科技(青岛)有限公司 Method for identifying violation building data based on unmanned aerial vehicle and artificial intelligence
CN115760613A (en) * 2022-11-15 2023-03-07 江苏省气候中心 Blue algae bloom short-time prediction method combining satellite image and optical flow method
CN115760613B (en) * 2022-11-15 2024-01-05 江苏省气候中心 Blue algae bloom short-time prediction method combining satellite image and optical flow method

Also Published As

Publication number Publication date
CN109460764B (en) 2022-02-18

Similar Documents

Publication Publication Date Title
CN109460764A (en) A kind of satellite video ship monitoring method of combination brightness and improvement frame differential method
CN103927741B (en) SAR image synthesis method for enhancing target characteristics
CN107025652B (en) A kind of flame detecting method based on kinetic characteristic and color space time information
CN104392468B (en) Based on the moving target detecting method for improving visual background extraction
WO2018024030A1 (en) Saliency-based method for extracting road target from night vision infrared image
CA2949844C (en) System and method for identifying, analyzing, and reporting on players in a game from video
CN105894503B (en) A kind of restorative procedure of pair of Kinect plant colour and depth detection image
US20120148103A1 (en) Method and system for automatic object detection and subsequent object tracking in accordance with the object shape
CN108198201A (en) A kind of multi-object tracking method, terminal device and storage medium
CN107767400A (en) Remote sensing images sequence moving target detection method based on stratification significance analysis
CN104318266B (en) A kind of image intelligent analyzes and processes method for early warning
CN106204594A (en) A kind of direction detection method of dispersivity moving object based on video image
CN106557750A (en) It is a kind of based on the colour of skin and the method for detecting human face of depth y-bend characteristics tree
Huerta et al. Exploiting multiple cues in motion segmentation based on background subtraction
CN106127812A (en) A kind of passenger flow statistical method of non-gate area, passenger station based on video monitoring
Lian et al. A novel method on moving-objects detection based on background subtraction and three frames differencing
CN104217442B (en) Aerial video moving object detection method based on multiple model estimation
CN107992856A (en) High score remote sensing building effects detection method under City scenarios
CN106657948A (en) low illumination level Bayer image enhancing method and enhancing device
CN104933728A (en) Mixed motion target detection method
CN107103301B (en) Method and system for matching discriminant color regions with maximum video target space-time stability
CN110298893A (en) A kind of pedestrian wears the generation method and device of color identification model clothes
CN103533332B (en) A kind of 2D video turns the image processing method of 3D video
CN105631405A (en) Multistage blocking-based intelligent traffic video recognition background modeling method
CN102609710A (en) Smoke and fire object segmentation method aiming at smog covering scene in fire disaster image video

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant