CN108334885A - A kind of video satellite image space object detection method - Google Patents

A kind of video satellite image space object detection method Download PDF

Info

Publication number
CN108334885A
CN108334885A
Authority
CN
China
Prior art keywords
coordinate
frame
image
target
filter
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN201810109563.2A
Other languages
Chinese (zh)
Inventor
项军华
张学阳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hunan Aerial Satellite Technology Co Ltd
Original Assignee
Hunan Aerial Satellite Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hunan Aerial Satellite Technology Co Ltd filed Critical Hunan Aerial Satellite Technology Co Ltd
Priority to CN201810109563.2A priority Critical patent/CN108334885A/en
Publication of CN108334885A publication Critical patent/CN108334885A/en
Withdrawn legal-status Critical Current

Links

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features
    • G06V10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/443: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components, by matching or filtering
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/10: Segmentation; Edge detection
    • G06T7/136: Segmentation; Edge detection involving thresholding
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/20: Analysis of motion
    • G06T7/207: Analysis of motion for motion estimation over a hierarchy of resolutions
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10016: Video; Image sequence

Abstract

The present invention relates to a video satellite image space object detection method. First, noise reduction is applied to the input video satellite image by bilateral filtering; a single-frame image is then segmented with an adaptive threshold based on local image characteristics, the prior information of the previous frame's detection result, and the Kalman filtering result; a predicted coordinate is then computed with the gray centroid method; finally, the target is detected by Kalman filtering. Compared with the prior art, this method solves the problem that brightness changes of space objects in video satellite images make targets unrecognizable.

Description

A kind of video satellite image space object detection method
Technical field
The invention belongs to the technical field of image processing and relates to image recognition methods for the aerospace field, in particular to a video satellite image space object detection method based on prior information.
Background technology
A video satellite is a novel space-based information-acquisition microsatellite that uses video imaging, real-time video data transmission, and human-in-the-loop interactive operation. Countries around the world have invested substantial funds and technical personnel in extensive research on video satellites and have launched several satellites capable of staring imaging of the ground. Ever-increasing numbers of space objects exert a growing influence on human space activities, and monitoring them has become a hot issue in the space environment field. Compared with ground-based observation, space-based detection is not limited by meteorological conditions or geographical location and avoids atmospheric influence on the target signal, giving it unique advantages. Moving-target detection and tracking with video satellite images is an effective method for monitoring space moving targets.
For the general problem of target detection and tracking in video, many algorithms exist, such as optical flow, block matching, and template-based detection. However, most of these algorithms rely on the shape features of the target and require sufficient texture information, so they are not suited to faint space target detection in video satellite images. Recognizing faint optical targets against a stellar background presents the following difficulties: (1) the target occupies only one or a few pixels in the image and has no planar structure, so very little feature information is available; (2) noise introduced by the space detection environment and the detection equipment nearly submerges the target in complex background noise, greatly increasing the difficulty of detecting faint moving targets; (3) the attitude motion of the target makes its brightness vary continuously, and the target can even be lost for several frames.
To address these difficulties, scholars have proposed many algorithms, mainly falling into two classes: track-before-detect (TBD) and detect-before-track (DBT). Multistage hypothesis testing (MHT) and dynamic-programming-based algorithms can be classified as TBD; these algorithms remain effective even at very low target signal-to-noise ratios, but their high computational complexity and threshold-selection problems are weaknesses. In practice, target detection in satellite images more often uses DBT algorithms. Patent 201510507109.9 provides a space target detection method based on motion information that addresses the first and second difficulties, but the third difficulty remains unsolved and can lead to loss of the target.
Invention content
Aiming at the problem that brightness changes of space objects in video satellite images prevent target detection, the present invention proposes a video satellite image space object detection method.
The technical scheme is that:
A video satellite image space object detection method includes the following steps:
S1: noise reduction is applied to the input video satellite image;
Specifically, the bilateral filtering method is used (Tomasi C, Manduchi R. Bilateral filtering for gray and color images. Proceedings of the International Conference on Computer Vision, Bombay, India, 1998). This method is simple and non-iterative, and achieves edge-preserving denoising;
The noise in video images mainly includes space radiation noise, celestial background noise, and CCD dark-current noise.
The filter weight consists of two parts: the first part is identical to a Gaussian filter, and the second part accounts for gray-level similarity between pixels. The filter diameter is set to 5 pixels, and the weight w is given by:

w(i, j) = (1/K) · exp(−((x_i − x_c)² + (y_i − y_c)²) / (2σ_s²)) · exp(−(f(x_i, y_i) − f(x_c, y_c))² / (2σ_r²)) (1)

where K is a normalization constant, (x_c, y_c) is the image coordinate of the filter center, (x_i, y_i) is the image coordinate at filter position (i, j), f(x, y) is the gray value of the input video satellite image at coordinate (x, y), σ_s is set to 10, and σ_r is set to 75;
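The bilateral weighting just described can be sketched in numpy. This is a minimal illustration of the form of Eq. (1), not the patent's implementation; `bilateral_weights` and `bilateral_filter` are hypothetical names, and in practice a library routine such as OpenCV's `cv2.bilateralFilter` would normally be used instead of these explicit loops.

```python
import numpy as np

def bilateral_weights(patch, sigma_s=10.0, sigma_r=75.0):
    """Bilateral-filter weights for a square gray patch.

    `patch` is a (d, d) gray-level array whose center pixel is the one
    being filtered; a Gaussian spatial term and a gray-similarity term
    are multiplied, then normalized to sum to 1 (the constant K).
    """
    d = patch.shape[0]
    c = d // 2
    ys, xs = np.mgrid[0:d, 0:d]
    spatial = np.exp(-((xs - c) ** 2 + (ys - c) ** 2) / (2 * sigma_s ** 2))
    similarity = np.exp(-((patch - patch[c, c]) ** 2) / (2 * sigma_r ** 2))
    w = spatial * similarity
    return w / w.sum()

def bilateral_filter(img, d=5, sigma_s=10.0, sigma_r=75.0):
    """Edge-preserving denoising: weighted mean under bilateral weights."""
    img = img.astype(np.float64)
    r = d // 2
    pad = np.pad(img, r, mode="edge")
    out = np.empty_like(img)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            patch = pad[y:y + d, x:x + d]
            out[y, x] = (bilateral_weights(patch, sigma_s, sigma_r) * patch).sum()
    return out
```

Because the similarity term suppresses weights across large gray-level jumps, a sharp edge between a dark and a bright region survives the smoothing, which is the "edge-preserving denoising" property the text relies on.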
S2: the single-frame image is segmented;
Further, an adaptive threshold based on local image characteristics, the prior information of the previous frame's detection result, and the Kalman filtering result is introduced, and the single-frame image is segmented with this adaptive threshold;
Specifically, whether a star or a target, its gray value will exceed that of the pixels in its neighborhood, so the image can be segmented with a variable threshold based on local statistics. For each point (x, y) in the input video satellite image, the standard deviation σ_xy and mean m_xy are computed over its 7 × 7 neighborhood; these describe the local contrast and average gray level. A threshold based on the local mean and standard deviation is then obtained:
T_xy = a·σ_xy + b·m_xy (2)

where T_xy is the judgment threshold and b is a constant greater than 0;
For space moving targets, brightness usually varies with attitude. If a is a constant, the target will be lost in some frames. Considering the continuity of target motion, if (x, y) is detected as a possible target coordinate in frame k, then the probability of detecting the target in the 7 × 7 neighborhood of (x, y) in frame (k+1) greatly increases. Combining this with the continuity of target brightness variation, if no target is detected in the 7 × 7 neighborhood of (x, y) in frame (k+1), a_k is multiplied by an attenuation coefficient, i.e. a_k·ρ (ρ is the attenuation coefficient, ρ < 1), where a_k is the value of a in frame k. The adaptive threshold based on local image characteristics, the prior information of the previous frame's detection result, and the Kalman filtering result is then given by formula (3), which replaces the constant a in formula (2) with the frame-dependent coefficient a_k.
where P(x, y, k | k−1) indicates whether the point (x, y) in frame k lies within the 7 × 7 neighborhood of a prediction coordinate obtained by Kalman filtering from a target detected in frame (k−1); it equals 1 if inside the neighborhood and 0 otherwise. The prediction coordinate is obtained by Kalman filtering in step S3;
Formula (2) differs from formula (3) in the adaptive coefficient based on local image characteristics, the prior information of the previous frame's detection result, and the Kalman filtering result. The initial value of a is set greater than 1 and is reset to the initial value after the gray value at (x, y) becomes large again; preferably, a is reset once the gray value exceeds 150;
The image segmentation rule is:

g(x, y) = f(x, y) if f(x, y) > T_xy, otherwise g(x, y) = 0 (4)

where f(x, y) is the gray value of the input video satellite original image at (x, y) and g(x, y) is the gray value of the segmented image at (x, y);
By introducing the adaptive threshold based on local image characteristics, the prior information of the previous frame's detection result, and the Kalman filtering result, and segmenting the single-frame image with this adaptive threshold, the target-loss problem caused by brightness changes of space objects in video satellite images is solved;
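The local-statistics thresholding of Eq. (2) can be sketched as follows. This is a simplification with a hypothetical function name and a fixed coefficient `a`; the patent additionally decays `a` from frame to frame around Kalman-predicted positions, which is omitted here.

```python
import numpy as np

def segment_local_threshold(img, a=1.1, b=1.0, win=7):
    """Single-frame segmentation with the local threshold
    T(x, y) = a * sigma(x, y) + b * m(x, y), computed over a
    win x win neighborhood; pixels whose gray value exceeds T keep
    their value, the rest are set to 0.
    """
    img = img.astype(np.float64)
    r = win // 2
    pad = np.pad(img, r, mode="edge")
    out = np.zeros_like(img)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            nb = pad[y:y + win, x:x + win]
            t = a * nb.std() + b * nb.mean()   # Eq. (2) with constant a
            if img[y, x] > t:
                out[y, x] = img[y, x]
    return out
```

On a flat background the threshold equals the local mean, so ordinary pixels are suppressed, while an isolated bright point exceeds its neighborhood statistics and survives, which matches the observation that both stars and targets are brighter than their neighborhoods.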
S3: the prediction coordinate is calculated;
Specifically, in an ideal optical system a point target would be imaged onto a single pixel of the CCD focal plane, but under real imaging conditions aperture diffraction spreads the image of the target over several pixels. The coordinate of the target on the focal plane is therefore determined by its gray-scale center position; the simple and effective gray centroid method is used here to compute the image point coordinate, with a positioning accuracy of 0.1–0.3 pixel;
The segmented image is binarized and the target region S is identified; the gray centroid coordinate of region S is computed as:

x_S = Σ_{(x,y)∈S} x·f(x, y) / Σ_{(x,y)∈S} f(x, y), y_S = Σ_{(x,y)∈S} y·f(x, y) / Σ_{(x,y)∈S} f(x, y) (5)

where f(x, y) is the gray value of the input video satellite original image at (x, y) and (x_S, y_S) is the gray centroid coordinate of region S;
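The gray centroid of Eq. (5) is short to express in numpy. The helper name is hypothetical, and region S is represented here by a boolean mask over the original image.

```python
import numpy as np

def gray_centroid(img, region_mask):
    """Gray-scale centroid of region S (Eq. (5)):
    x_S = sum(x * f) / sum(f), y_S = sum(y * f) / sum(f),
    with sums over the pixels selected by `region_mask`."""
    ys, xs = np.nonzero(region_mask)
    f = img[ys, xs].astype(np.float64)
    return (xs * f).sum() / f.sum(), (ys * f).sum() / f.sum()
```

Because the sums are weighted by gray value, a diffraction-spread spot yields a sub-pixel coordinate, which is how the stated 0.1–0.3 pixel positioning accuracy becomes possible.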
S4: the target is detected;
Preferably, the target is detected using the Kalman filtering method;
Further, this includes the following steps:
S401: the first frame image yields n1 points (x^(1), y^(1)), (x^(2), y^(2)), …, (x^(n1), y^(n1)), taken as the coordinates of n1 point classes, and a Kalman filter is established for each point class. The Kalman filter is an efficient recursive filter that can estimate the state of a linear dynamic system from a series of noisy measurements, and can be used to predict the target coordinate in the next frame;
Assume the state vector of the target in frame k is x_k = (x_k, y_k, v_xk, v_yk)^T, i.e. the coordinate (x_k, y_k) of the target in the pixel coordinate system and its velocity (v_xk, v_yk) (unit: pixel/Δt). The system equations are:

x_{k+1} = F·x_k + w, z_k = H·x_k + v (6)

where F is the state transition matrix, H is the measurement matrix, z_k is the measurement vector, i.e. the coordinate detected in frame k, w is the process noise, assumed to be zero-mean Gaussian white noise with covariance matrix Q, denoted w ~ N(0, Q), and v is the measurement noise, assumed to be zero-mean Gaussian white noise with covariance matrix R, denoted v ~ N(0, R);
Let x̂_{k|k} be the state estimate of the Kalman filter at frame k and P_{k|k} the a-posteriori error covariance matrix, which quantifies the accuracy of the estimate; x̂_{k|k} and P_{k|k} represent the state of the filter. The Kalman filter proceeds as follows:
Initialization: initialize x̂_{0|0} and P_{0|0}; x_{0|0} and y_{0|0} are the initial coordinates obtained from single-frame segmentation and the gray centroid method on the first frame, and v_{x0|0} and v_{y0|0} are set to 0;
Prediction: in the prediction stage, the filter uses the state estimate of the previous frame to estimate the state of the current frame; the predicted coordinate is used for single-frame image segmentation and track association:

x̂_{k|k−1} = F·x̂_{k−1|k−1}, P_{k|k−1} = F·P_{k−1|k−1}·F^T + Q (7)
Update: in the update stage, the filter uses the measurement of the current frame to refine the predicted value obtained in the prediction stage:

K_k = P_{k|k−1}·H^T·(H·P_{k|k−1}·H^T + R)^{−1}
x̂_{k|k} = x̂_{k|k−1} + K_k·(z_k − H·x̂_{k|k−1})
P_{k|k} = (I − K_k·H)·P_{k|k−1} (8)

where K_k is an intermediate variable called the optimal Kalman gain;
Therefore, combining the gray centroid method with Kalman filtering allows the prediction coordinate to be computed accurately and further increases the accuracy of track association;
S402: the second frame image yields n2 points (u^(1), v^(1)), (u^(2), v^(2)), …, (u^(n2), v^(n2)). Each point obtained in the second frame image is compared in turn with the coordinates predicted by the Kalman filters of the existing classes obtained from the first frame image in step S401 (given by formula (7)). When the distance between two points is less than the threshold ε_1, they are considered the same class; the point is no longer compared with other class coordinates, the new coordinate replaces the old coordinate as the coordinate of the point class, and the new coordinate is used as the measurement to update the state of the corresponding Kalman filter. If the distance exceeds the threshold for all classes, the point is taken as a new point class and a Kalman filter is established for it. Each subsequent frame is processed in the same way;
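The S402 association rule can be sketched greedily, as a simplified reading with hypothetical names; the full method also updates the matched class's Kalman filter with the new coordinate, which is left to the caller here.

```python
import math

def associate(predictions, detections, eps=5.0):
    """Greedy track association: each detection joins the first existing
    class whose predicted coordinate lies closer than `eps`; otherwise
    it starts a new class. Returns (matches, new_points), where
    `matches` maps class index -> matched detection."""
    matches, new_points = {}, []
    for det in detections:
        for i, pred in enumerate(predictions):
            if i in matches:
                continue            # each class absorbs at most one detection
            if math.dist(det, pred) < eps:
                matches[i] = det
                break
        else:
            new_points.append(det)  # no class close enough: new point class
    return matches, new_points
```

Because prediction rather than the last observed coordinate anchors the gate, a target that moves between frames is still matched to its own class as long as the Kalman prediction stays within ε_1.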
S403: once a point class contains l points, it is judged whether it is a target: the sum of the speed magnitudes of the l points is computed and compared with the threshold ε_2; if less than ε_2, the class is regarded as star background or noise, and if greater than ε_2, it is regarded as a space moving target;
The threshold ε_2 mainly eliminates image motion caused by instability of the satellite platform and other noise; it is preferably set to 2. In this way, based on the continuity of the target's inter-frame motion, the target can be detected in the image and its motion trajectory updated;
It is identified it is thus achieved that being detected to extraterrestrial target.
The beneficial effects of the invention are as follows:
1) The adaptive threshold solves the target-loss problem caused by brightness changes of space objects in video satellite images;
2) use of Kalman filtering increases the accuracy of Track association;
3) algorithm is simple, is easy to Project Realization.
Description of the drawings
Fig. 1 is the flow chart of the present invention
Fig. 2 is the result obtained after the 9th frame image segmentation in embodiment
Fig. 3 is the result obtained after the 30th frame image segmentation in embodiment
Fig. 4, Fig. 5 are the superposition of 1000 frame images in embodiment
Fig. 6 is target trajectory in embodiment
Specific implementation mode
The present invention is described in detail below with reference to the accompanying drawings and a specific embodiment. The embodiment is implemented on the premise of the technical solution of the present invention; a detailed implementation and a specific operating process are given, but the protection scope of the present invention is not limited to the following embodiment.
Embodiment
The present invention is verified on video shot in orbit by a satellite. The camera focal length is 1000 mm, the pixel size is 8.33 μm, the video runs at 25 frames per second with a resolution of 960 × 576, and there are 1000 frames in total.
S1: noise reduction is applied to the input video satellite image;
The weight of the bilateral filter consists of two parts: the first part is identical to a Gaussian filter, and the second part accounts for gray-level similarity between pixels. The filter diameter is set to 5, and the weight w is given by formula (1), where K is a normalization constant, (x_c, y_c) is the filter center, (x_i, y_i) is a specific coordinate in the image, f(x, y) is the gray value of the input video satellite image at point (x, y), σ_s is set to 10, and σ_r is set to 75;
S2: the single-frame image is segmented;
The adaptive threshold based on local image characteristics, the prior information of the previous frame's detection result, and the Kalman filtering result is given by formula (3), where P(x, y, k | k−1) indicates whether the point (x, y) in frame k appears in the 7 × 7 neighborhood of the prediction coordinate obtained by Kalman filtering from a target detected in frame (k−1); it equals 0 or 1;
In the formula, the initial value of a(x, y) is set to 1.1, b is set to 1, the attenuation coefficient ρ is set to 0.8, and segmentation uses a 5 × 5 neighborhood;
The image segmentation rule is given by formula (4), where f(x, y) is the gray value of the original image at (x, y) and g(x, y) is the gray value of the segmented image at (x, y).
S3: the prediction coordinate is calculated;
The segmented image is binarized and the target region S is identified; the gray centroid coordinate is computed with formula (5), where f(x, y) is the gray value of the input video satellite original image at (x, y) and (x_S, y_S) is the gray centroid coordinate of region S;
After segmentation, the 9th frame image yields 3 target regions (Fig. 2) and the 30th frame image yields 5 target regions (Fig. 3). These regions include targets, stars, and noise and cannot be distinguished in a single frame; the target must be detected from the motion information contained in multiple frames. Figs. 2 and 3 also show the variation of target brightness.
S4: the target is detected;
S401: the first frame image yields 9 points (373.5, 62), (116.5, 179), (325, 252), (347, 285), (39, 542), (370.5, 605), (499, 639), (68, 753), (426.5, 770), taken as the coordinates of 9 classes, and a Kalman filter is established for each class;
Assume the state vector of the target in frame k is x_k = (x_k, y_k, v_xk, v_yk)^T, i.e. its coordinate (x_k, y_k) in the pixel coordinate system and its velocity (v_xk, v_yk) (unit: pixel/Δt); the system equations are given by formula (6);
In formula (6), Q is set to 10^−4·I_4 and R is set to 0.2·I_2, where I_n is the n × n identity matrix;
Let x̂_{k|k} be the state estimate of the Kalman filter at frame k and P_{k|k} the a-posteriori error covariance matrix, which quantifies the accuracy of the estimate; x̂_{k|k} and P_{k|k} represent the state of the filter. The Kalman filter proceeds as follows:
Initialization: initialize x̂_{0|0} and P_{0|0}; x_{0|0} and y_{0|0} are the initial coordinates obtained from single-frame segmentation and the gray centroid method on the first frame, and v_{x0|0} and v_{y0|0} are set to 0;
Prediction: in the prediction stage, the filter uses the state estimate of the previous frame to estimate the state of the current frame according to formula (7); the predicted coordinate is used for single-frame image segmentation and track association;
Update: in the update stage, the filter uses the measurement of the current frame to refine the predicted value obtained in the prediction stage according to formula (8), where K_k is an intermediate variable called the optimal Kalman gain;
In the Kalman filter initialization of this embodiment, P_{0|0} is uniformly set to diag(0.2, 0.2, 0, 0) and the initial velocities are uniformly set to 0; the coordinates predicted for the next frame by the 9 classes are then (373.5, 62), (116.5, 179), (325, 252), (347, 285), (39, 542), (370.5, 605), (499, 639), (68, 753), (426.5, 770).
S402: the second frame image yields 8 points (373.5, 62), (116.5, 179), (325, 252), (348, 285), (40, 543), (370, 605), (68, 753), (426.5, 770). Each point is compared in turn with the coordinates predicted by the Kalman filters of the existing classes; when the distance is less than ε_1 = 5, the point is considered to belong to that class and the new coordinate replaces the old coordinate as the coordinate of the point class; if the distance to every class exceeds the threshold ε_1, the point is taken as a new point class. The coordinates of the 9 classes thus become (373.5, 62), (116.5, 179), (325, 252), (348, 285), (40, 543), (370, 605), (499, 639), (68, 753), (426.5, 770); the 7th class has only 1 point, and a new Kalman filter is established for it, while the remaining point classes each have 2 points and their Kalman filters are updated using the new coordinates as measurements.
Each subsequent frame is processed in the same way.
S403: once a point class contains 20 points, it is judged whether it is a target: the sum of the speed magnitudes of the 20 points is compared with the threshold ε_2 = 2; if less, the class is regarded as star background or noise; if greater, it is regarded as a space moving target.
The targets in the image can thus be detected.
To show the target intuitively, the 1000 frames are superimposed in Fig. 4, where the motion trajectory of one target is visible and its brightness variation is highly significant; the adaptive space object detector detects this target, as shown in Fig. 5. Because the 960 × 576 resolution is too large, only part of the picture is shown, and the target is marked with a white box every 50 frames. Fig. 6 shows the target's motion trajectory.
In addition, over these 1000 frames, if a_xy in the formula is taken as a constant, the target is detected in only 579 frames, whereas the algorithm proposed by the present invention detects the target in 947 frames; the detection probability is thus greatly improved.

Claims (7)

1. A video satellite image space object detection method, characterized by comprising the following steps:
S1: noise reduction is applied to the input video satellite image;
S2: a single-frame image is segmented using an adaptive threshold based on local image characteristics, the prior information of the previous frame's detection result, and the Kalman filtering result;
S3: a prediction coordinate is calculated;
S4: the target is detected.
2. a kind of video satellite image space object detection method according to claim 1, which is characterized in that the S1 is adopted Weight with bilateral filtering method, filter includes two parts:First part is identical as Gaussian filter, and second part considers picture Plain grey similarity;The diameter of filter is set as 5 pixels, and weight w is given by:
Wherein K is generalized constant, (xc,yc) it is the corresponding image coordinate of filter center, (xi,yi) it is the position filter (i, j) Corresponding image coordinate is set, f (x, y) is the video satellite image of input in the gray value of coordinate (x, y), σsIt is set as 10, σrIt is set as 75。
3. a kind of video satellite image space object detection method according to claim 1, which is characterized in that the S2's Specific method is:
Image is divided using the variable thresholding based on partial statistics, to 7 of every bit (x, y) in the video satellite image of input × 7 neighborhoods calculate standard deviation sigmaxyWith mean value mxy, they are description of local contrast and average gray, are based on later The judgment threshold of local mean value and standard deviation:
Txy=a σxy+bmxy (2)
Wherein TxyIt is judgment threshold, b is greater than 0 constant;
For space movement target, their brightness usually changes with the variation of posture;If a is set as constant, target It will be lost in some frames, it is contemplated that the continuity of target movement, if being detected as possible target in kth frame (x, y) Coordinate, then in (k+1) frame, the probability for detecting target in 7 × 7 neighborhoods of (x, y) just greatly increases;Integration objective The continuity of brightness change, if not detecting target, a in 7 × 7 neighborhoods of (k+1) frame (x, y)kOne can be multiplied by A attenuation coefficient, i.e. ak(ρ is attenuation coefficient and ρ to ρ<1), a herekThe value of a i.e. in kth frame;To special based on image local Property and the adaptive threshold of previous frame testing result prior information and Kalman filtered results can be given by:
Wherein P (x, y, k | k-1) refer to whether kth frame point (x, y) is filtered in the target detected by (k-1) frame by Kalman Wave obtains in 7 × 7 neighborhoods of kth frame prediction coordinate, and 1 is equal in neighborhood, and 0 is equal to not in neighborhood;Predict coordinate by karr Graceful filtering obtains, and will be obtained in S3 steps;
Formula (2) and the difference of (3) are that based on local image characteristic and previous frame testing result prior information and Kalman The adaptation coefficient of filter result, the initial value of a is set as being more than 1, and is more than after the gray value of (x, y) becomes larger again and sets Initial value is reset to after determining threshold value;
It is as follows to divide image algorithm:
Wherein f (x, y) is gray value of the video satellite image of input at (x, y), and g (x, y) is the image after dividing processing Gray value at (x, y).
4. a kind of video satellite image space object detection method according to claim 3, which is characterized in that the S2 steps In rapid, the value of a resets to initial value after the gray value of (x, y) is more than 150.
5. a kind of video satellite image space object detection method according to claim 1, which is characterized in that the S3 is adopted With grey scale centre of gravity method, specially:
Image binaryzation is divided, identifies that target area S, its grey scale centre of gravity coordinate calculation formula of target area S are:
Wherein f (x, y) is gray value of the video satellite original image of input at (x, y), (xS,yS) it is target area S Grey scale centre of gravity coordinate.
6. a kind of video satellite image space object detection method according to claim 1, which is characterized in that the S4 is adopted Target is detected with Kalman filtering method.
7. a kind of video satellite image space object detection method according to claim 6, which is characterized in that the S4's Specific method is:
S401, first frame image obtain n1 point (x(1),y(1)),(x(2),y(2)),…,(x(n1),y(n1)), as a classes of n1 Coordinate puts class for each and establishes a Kalman filter;
Assuming that state vector of the target in kth frame image is xk=(xk,yk,vxk,vyk)T, i.e., target is in pixel coordinate system Coordinate (xk,yk) and speed (vxk,vyk), then system equation is:
Wherein F is state-transition matrix, and H is calculation matrix, zkVector is measured, i.e., the coordinate detected in kth frame, w is process Noise, it is assumed that be the zero mean Gaussian white noise that covariance matrix is Q, be denoted asV is measurement noise, it is assumed that for association Variance matrix is the zero mean Gaussian white noise of R, is denoted as
It enablesIt is Kalman filter in the state estimation of kth frame image, Pk|kFor Posterior estimator error co-variance matrix, embody The levels of precision of estimated value, thenAnd Pk|kThe state of Kalman filter is represented, the process of Kalman filter is as follows:
Initialization:InitializationWith P0|0, forx0|0,y0|0Pass through single frames point for first frame Cut the initial coordinate obtained with grey scale centre of gravity method, vx0|0,vy0|0It is set as 0;
Prediction:In forecast period, filter uses the state estimation of previous frame, makes the state estimation to present frame, measure in advance To coordinate will be used for single-frame images segmentation and Track association:
Update:In the more new stage, filter is utilized optimizes the predicted value obtained in forecast period to the measured value of present frame:
Wherein KkIt is an intermediate variable, referred to as optimal kalman gain;
S402, the second frame image obtain n2 point (u(1),v(1)),(u(2),v(2)),…,(u(n2),v(n2)), it will be in the second frame image Each of obtain the point seat with the existing class Kalman filter prediction obtained in first frame image in step S401 successively Mark (formula (7) obtains) compares, when 2 points of distances are less than predetermined threshold value ε1When, it is believed that be it is a kind of, no longer with other class coordinates Compare, and replace old coordinate as the coordinate of point class by new coordinate, and new coordinate is respective as measured value update The state of Kalman filter;If 2 points of distances are both greater than predetermined threshold value ε1, then as new point class, and established for the class One Kalman filter;Each frame image of acquisition is handled according to the method;
S403 starts to judge whether it is target when some point class has at l, calculate l point speed mould the sum of grow with Predetermined threshold value ε2Compare, is less than ε2It is considered star background or noise, is more than ε2Then it is considered space movement target.
CN201810109563.2A 2018-02-05 2018-02-05 A kind of video satellite image space object detection method Withdrawn CN108334885A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810109563.2A CN108334885A (en) 2018-02-05 2018-02-05 A kind of video satellite image space object detection method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810109563.2A CN108334885A (en) 2018-02-05 2018-02-05 A kind of video satellite image space object detection method

Publications (1)

Publication Number Publication Date
CN108334885A true CN108334885A (en) 2018-07-27

Family

ID=62928027

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810109563.2A Withdrawn CN108334885A (en) 2018-02-05 2018-02-05 A kind of video satellite image space object detection method

Country Status (1)

Country Link
CN (1) CN108334885A (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111091088A (en) * 2019-12-12 2020-05-01 中国人民解放军战略支援部队航天工程大学 Video satellite information supported marine target real-time detection positioning system and method
CN111429479A (en) * 2020-03-26 2020-07-17 中国科学院长春光学精密机械与物理研究所 Space target identification method based on image integral mean value
CN111429479B (en) * 2020-03-26 2022-10-11 中国科学院长春光学精密机械与物理研究所 Space target identification method based on image integral mean value
CN111652906A (en) * 2020-05-11 2020-09-11 中国科学院空间应用工程与技术中心 Adaptive tracking method, device and equipment for satellite video ground dynamic target rotation
CN111652906B (en) * 2020-05-11 2021-04-20 中国科学院空间应用工程与技术中心 Adaptive tracking method, device and equipment for satellite video ground dynamic target rotation
CN113709324A (en) * 2020-05-21 2021-11-26 武汉Tcl集团工业研究院有限公司 Video noise reduction method, video noise reduction device and video noise reduction terminal
CN112330669A (en) * 2020-11-27 2021-02-05 北京理工大学 Star point position positioning method of star sensor based on point light source diffraction starburst phenomenon
CN112330669B (en) * 2020-11-27 2022-12-20 北京理工大学 Star point position positioning method of star sensor based on point light source diffraction starburst phenomenon
CN115661154A (en) * 2022-12-27 2023-01-31 长江勘测规划设计研究有限责任公司 System and method for identifying contact state of collector ring carbon brush of generator through machine vision

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103838258A (en) * 2014-02-26 2014-06-04 上海微小卫星工程中心 Automatic tracking method and system applied to space-based space target
CN105023279A (en) * 2015-08-18 2015-11-04 中国人民解放军国防科学技术大学 Motion-information-base spatial moving object detection method of video image
CN105184829A (en) * 2015-08-28 2015-12-23 华中科技大学 Closely spatial object detection and high-precision centroid location method
CN106651904A (en) * 2016-12-02 2017-05-10 北京空间机电研究所 Wide-size-range multi-space target capture tracking method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Xueyang Zhang, et al.: "Space Target Detection in Video Satellite Image via Prior Information", CCCV 2017: Computer Vision *

Similar Documents

Publication Publication Date Title
CN108334885A (en) A kind of video satellite image space object detection method
CN109919974B (en) Online multi-target tracking method based on R-FCN frame multi-candidate association
CN110689562A (en) Trajectory loop detection optimization method based on generation of countermeasure network
CN109584213B (en) Multi-target number selection tracking method
CN111563878B (en) Space target positioning method
US20070133840A1 (en) Tracking Using An Elastic Cluster of Trackers
CN106296726A (en) A kind of extraterrestrial target detecting and tracking method in space-based optical series image
CN112991391A (en) Vehicle detection and tracking method based on radar signal and vision fusion
JP6858415B2 (en) Sea level measurement system, sea level measurement method and sea level measurement program
CN105374049B (en) Multi-corner point tracking method and device based on sparse optical flow method
CN110555868A (en) method for detecting small moving target under complex ground background
CN108804992A (en) A kind of Demographics' method based on deep learning
CN112541424A (en) Real-time detection method for pedestrian falling under complex environment
Lian et al. A novel method on moving-objects detection based on background subtraction and three frames differencing
CN108629792A (en) Laser eyepiece detection method and device based on background modeling Yu background difference
WO2018164575A1 (en) Method of detecting moving objects from a temporal sequence of images
CN112907557A (en) Road detection method, road detection device, computing equipment and storage medium
CN111145198B (en) Non-cooperative target motion estimation method based on rapid corner detection
KR101803340B1 (en) Visual odometry system and method
CN111368770A (en) Gesture recognition method based on skeleton point detection and tracking
CN111089586B (en) All-day star sensor star point extraction method based on multi-frame accumulation algorithm
CN116883897A (en) Low-resolution target identification method
CN112016558A (en) Medium visibility identification method based on image quality
CN116862832A (en) Three-dimensional live-action model-based operator positioning method
CN112069997B (en) Unmanned aerial vehicle autonomous landing target extraction method and device based on DenseHR-Net

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20180727