CN116188519A - Ship target motion state estimation method and system based on video satellite - Google Patents

Ship target motion state estimation method and system based on video satellite

Info

Publication number
CN116188519A
Authority
CN
China
Prior art keywords
image
vessel
mask image
vessels
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310069909.1A
Other languages
Chinese (zh)
Other versions
CN116188519B (en)
Inventor
姚力波
刘勇
孟洁
孙炜炜
肖化超
李晓博
盛磊
乔兴旺
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Naval Aeronautical University
Original Assignee
Naval Aeronautical University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Naval Aeronautical University filed Critical Naval Aeronautical University
Priority to CN202310069909.1A priority Critical patent/CN116188519B/en
Publication of CN116188519A publication Critical patent/CN116188519A/en
Application granted granted Critical
Publication of CN116188519B publication Critical patent/CN116188519B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/207Analysis of motion for motion estimation over a hierarchy of resolutions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformation in the plane of the image
    • G06T3/60Rotation of a whole image or part thereof
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/215Motion-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence

Abstract

The invention discloses a ship target motion state estimation method and system based on a video satellite, and relates to the technical field of satellite remote sensing information processing. The method comprises the following steps: acquiring video satellite images of a target area at any two moments to obtain a first image and a second image; image segmentation and rotary rectangular frame fitting operations are sequentially carried out on the first image and the second image respectively, so that size parameters of rectangular frames corresponding to vessels in the two images are obtained; calculating the position parameters of each vessel in the two images based on the size parameters of the rectangular frames corresponding to each vessel in the two images; carrying out target association on the vessels in the two images according to the position parameters of the vessels in the two images to obtain a target association result; and obtaining a ship direction estimated value according to the target association result and the rotation angle of the rectangular frame corresponding to each ship in the two images, and obtaining the target motion state of the ship according to the ship direction estimated value. The invention can improve the estimation precision of the ship target motion state under the condition of no ground information reference.

Description

Ship target motion state estimation method and system based on video satellite
Technical Field
The invention relates to the technical field of satellite remote sensing information processing, in particular to a ship target motion state estimation method and system based on a video satellite.
Background
The video satellite is a new type of earth observation satellite, mainly divided into low-orbit video satellites and stationary-orbit (geostationary) staring imaging satellites. A video satellite can perform staring imaging on a region of interest to obtain the motion state of targets of interest in that region over a period of time, such as the heading and speed of ship targets, and therefore has great application value in the field of ocean monitoring. Because of attitude, orbit and other errors of the video satellite, the captured inter-frame images exhibit relative jitter (tens to hundreds of meters) that interferes with estimation of the target motion state, so homonymous points between frames need to be matched to achieve image registration and eliminate the influence of the inter-frame jitter. At present, target motion state estimation methods based on video satellites are mainly studied on the basis of inter-frame image registration. In scenes such as ports and coasts, fixed homonymous points can be found in land areas to complete inter-frame image registration; however, in scenes without land, such as the open sea, where there is no ground information reference, it is difficult to find fixed homonymous points for matching and to perform accurate inter-frame registration, so the motion state estimation error is large.
Disclosure of Invention
The invention aims to provide a ship target motion state estimation method and system based on a video satellite, which can improve the estimation precision of the ship target motion state under the condition of no ground information reference.
In order to achieve the above object, the present invention provides the following solutions:
a ship target motion state estimation method based on video satellites comprises the following steps:
acquiring video satellite images of a target area at any two moments to obtain a first image and a second image; the target area comprises a plurality of vessels at any two moments, and the vessels which are the same in the first image and the second image exist; the same vessels form a vessel set to be estimated;
respectively carrying out image segmentation processing on the first image and the second image to obtain a first mask image and a second mask image;
performing rotary rectangular frame fitting operation on the first mask image and the second mask image respectively to obtain the size parameters of rectangular frames corresponding to all ships in the first mask image and the size parameters of rectangular frames corresponding to all ships in the second mask image; the size parameters include a center pixel position, a length, and a rotation angle;
calculating the position parameters of each vessel in the first mask image and the position parameters of each vessel in the second mask image based on the size parameters of the rectangular frames corresponding to each vessel in the first mask image and the size parameters of the rectangular frames corresponding to each vessel in the second mask image; the location parameters include: position coordinates and length;
performing target association on the vessels in the first mask image and the vessels in the second mask image according to the position parameters of the vessels in the first mask image and the position parameters of the vessels in the second mask image to obtain a target association result;
obtaining course estimation values of all vessels in the vessel set to be estimated according to the target association result, the rotation angles of the rectangular frames corresponding to all vessels in the first mask image and the rotation angles of the rectangular frames corresponding to all vessels in the second mask image;
obtaining the target motion state of each vessel in the vessel set to be estimated according to the course estimated value of each vessel in the vessel set to be estimated, the target association result, the position coordinates of each vessel in the first mask image and the position coordinates of each vessel in the second mask image; the target motion state includes a ship speed and a ship direction.
Optionally, the image segmentation processing is performed on the first image and the second image to obtain a first mask image and a second mask image, which specifically includes:
respectively performing geometric correction preprocessing on the first image and the second image to obtain a preprocessed first image and a preprocessed second image;
and respectively carrying out image segmentation processing on the preprocessed first image and the preprocessed second image to obtain a first mask image and a second mask image.
Optionally, the obtaining the target motion state of each vessel in the vessel set to be estimated according to the heading estimation value of each vessel in the vessel set to be estimated, the target association result, the position coordinates of each vessel in the first mask image and the position coordinates of each vessel in the second mask image specifically includes:
judging whether the heading of each vessel in the vessel set to be estimated is parallel or not according to the heading estimation value of each vessel in the vessel set to be estimated;
if so, obtaining a target motion state of each vessel in the vessel set to be estimated according to the target association result, the position coordinates of each vessel in the first mask image and the position coordinates of each vessel in the second mask image and the time interval; the time interval is a time interval between acquiring the first image and acquiring the second image;
if not, obtaining the target motion state of each vessel in the vessel set to be estimated according to the target association result, the course estimation value of each vessel in the vessel set to be estimated, the position coordinate of each vessel in the first mask image, the position coordinate of each vessel in the second mask image and the time interval.
Optionally, the obtaining the target motion state of each vessel in the vessel set to be estimated according to the target association result, the position coordinates of each vessel in the first mask image, and the position coordinates of each vessel in the second mask image and the time interval specifically includes:
for any group of two vessels which are mutually associated in the target association result, calculating the difference of the position coordinates of the two vessels to obtain the geographic distance of the two vessels;
and calculating the ratio of the geographic distance to the time interval to obtain the target motion states of the two vessels.
A video satellite-based ship target motion state estimation system, comprising:
the acquisition module is used for acquiring video satellite images of the target area at any two moments to obtain a first image and a second image; the target area comprises a plurality of vessels at any two moments, and the vessels which are the same in the first image and the second image exist; the same vessels form a vessel set to be estimated;
the image segmentation module is used for respectively carrying out image segmentation processing on the first image and the second image to obtain a first mask image and a second mask image;
the size parameter calculation module is used for respectively carrying out rotary rectangular frame fitting operation on the first mask image and the second mask image to obtain size parameters of rectangular frames corresponding to vessels in the first mask image and size parameters of rectangular frames corresponding to vessels in the second mask image; the size parameters include a center pixel position, a length, and a rotation angle;
the position parameter calculation module is used for calculating the position parameters of the vessels in the first mask image and the position parameters of the vessels in the second mask image based on the size parameters of the rectangular frames corresponding to the vessels in the first mask image and the size parameters of the rectangular frames corresponding to the vessels in the second mask image; the location parameters include: position coordinates and length;
the target association module is used for carrying out target association on the vessels in the first mask image and the vessels in the second mask image according to the position parameters of the vessels in the first mask image and the position parameters of the vessels in the second mask image to obtain a target association result;
the course estimation value calculation module is used for obtaining course estimation values of the vessels in the vessel set to be estimated according to the target association result, the rotation angles of the rectangular frames corresponding to the vessels in the first mask image and the rotation angles of the rectangular frames corresponding to the vessels in the second mask image;
the target motion state determining module is used for obtaining the target motion state of each vessel in the vessel set to be estimated according to the course estimated value of each vessel in the vessel set to be estimated, the target association result, the position coordinates of each vessel in the first mask image and the position coordinates of each vessel in the second mask image; the target motion state includes a ship speed and a ship direction.
Optionally, the image segmentation module specifically includes:
the preprocessing unit is used for respectively carrying out geometric correction preprocessing on the first image and the second image to obtain a preprocessed first image and a preprocessed second image;
and the image segmentation unit is used for respectively carrying out image segmentation processing on the preprocessed first image and the preprocessed second image to obtain a first mask image and a second mask image.
Optionally, the target motion state determining module specifically includes:
the judging unit is used for judging whether the heading of each vessel in the vessel set to be estimated is parallel or not according to the heading estimation value of each vessel in the vessel set to be estimated;
the first target motion state determining unit is used for, if the judgment result is yes, obtaining the target motion state of each vessel in the vessel set to be estimated according to the target association result, the position coordinates of each vessel in the first mask image, the position coordinates of each vessel in the second mask image and the time interval; the time interval is a time interval between acquiring the first image and acquiring the second image;
and the second target motion state determining unit is used for obtaining the target motion state of each vessel in the vessel set to be estimated according to the target association result, the course estimated value of each vessel in the vessel set to be estimated, the position coordinate of each vessel in the first mask image, the position coordinate of each vessel in the second mask image and the time interval if not.
Optionally, the first target motion state determining unit specifically includes:
the geographic distance calculating subunit is used for calculating the difference of the position coordinates of any group of two vessels which are mutually related in the target association result to obtain the geographic distance of the two vessels;
and the first target motion state determining subunit is used for calculating the ratio of the geographic distance to the time interval to obtain the target motion states of the two vessels.
According to the specific embodiment provided by the invention, the invention discloses the following technical effects: the method comprises the steps of sequentially carrying out image segmentation and rotating rectangular frame fitting operation on a first image and a second image respectively to obtain size parameters of rectangular frames corresponding to vessels in the two images; calculating the position parameters of each vessel in the two images based on the size parameters of the rectangular frames corresponding to each vessel in the two images; carrying out target association on the vessels in the two images according to the position parameters of the vessels in the two images to obtain a target association result; according to the target association result and the rotation angle of the rectangular frame corresponding to each ship in the two images, the ship direction estimated value of each ship is obtained, the target motion state of each ship in the ship set to be estimated is obtained according to the ship direction estimated value, a reference object is not needed in the calculation process, the target motion state is determined according to the ship direction estimated value, and the estimation precision of the ship target motion state under the condition of no ground information reference can be improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions of the prior art, the drawings that are needed in the embodiments will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of a method for estimating the motion state of a ship target based on a video satellite according to an embodiment of the present invention;
fig. 2 is a schematic diagram of ship speed estimation according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
In order that the above-recited objects, features and advantages of the present invention will become more readily apparent, a more particular description of the invention will be rendered by reference to the appended drawings and appended detailed description.
As shown in fig. 1, an embodiment of the present invention provides a method for estimating a motion state of a ship target based on a video satellite, including:
acquiring video satellite images of a target area at any two moments to obtain a first image and a second image; the target area comprises a plurality of vessels at any two moments, and the vessels which are the same in the first image and the second image exist; the same vessels form a set of vessels to be estimated.
And respectively carrying out image segmentation processing on the first image and the second image to obtain a first mask image and a second mask image.
Performing rotary rectangular frame fitting operation on the first mask image and the second mask image respectively to obtain the size parameters of rectangular frames corresponding to all ships in the first mask image and the size parameters of rectangular frames corresponding to all ships in the second mask image; the dimensional parameters include center pixel position, length, and rotation angle.
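For illustration, the rotated rectangular frame fitting step could be realized as in the Python sketch below, which fits a minimum-area rectangle to each ship region of a binary mask image using OpenCV. The function name, the use of external contours, and the angle-convention handling are assumptions of this sketch rather than details given in the description.

```python
# Sketch only: rotated rectangular frame fitting on a binary ship mask image.
import cv2
import numpy as np

def fit_rotated_boxes(mask_image):
    """Return (center_pixel, length, rotation_angle_deg) for every ship region,
    i.e. the size parameters described above."""
    contours, _ = cv2.findContours(mask_image.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    boxes = []
    for contour in contours:
        (cx, cy), (w, h), angle = cv2.minAreaRect(contour)
        length = max(w, h)                       # ship length is the long side
        if h > w:                                # heuristic: make the angle refer to the
            angle = (angle + 90.0) % 180.0       # long side (depends on OpenCV version)
        boxes.append(((cx, cy), length, angle))
    return boxes
```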
Calculating the position parameters of each vessel in the first mask image and the position parameters of each vessel in the second mask image based on the size parameters of the rectangular frames corresponding to each vessel in the first mask image and the size parameters of the rectangular frames corresponding to each vessel in the second mask image; the location parameters include: position coordinates and length.
And carrying out target association on the vessels in the first mask image and the vessels in the second mask image according to the position parameters of the vessels in the first mask image and the position parameters of the vessels in the second mask image to obtain a target association result.
And obtaining the heading estimation value of each vessel in the vessel set to be estimated according to the target association result, the rotation angle of the rectangular frame corresponding to each vessel in the first mask image and the rotation angle of the rectangular frame corresponding to each vessel in the second mask image. Without considering the influence of factors such as water flow on the heading of the ship target, the heading can be approximated by the orientation of the hull, namely the angle of the rotated rectangular frame obtained by target detection. However, because the bow and stern of many ship targets are difficult to distinguish, the heading obtained by target detection is ambiguous by 180 degrees, i.e. it may be exactly opposite to the actual heading; this ambiguity is resolved in combination with subsequent processing. The heading obtained from the rotation angle of the rectangular frame is therefore taken as the heading estimation value, and in the current step the rotation angle (the rotation angle of the rectangular frame corresponding to the vessel in the second mask image minus the rotation angle of the rectangular frame corresponding to the vessel in the first mask image) can be considered directly equal to the heading estimation value.
Obtaining the target motion state of each vessel in the vessel set to be estimated according to the course estimated value of each vessel in the vessel set to be estimated, the target association result, the position coordinates of each vessel in the first mask image and the position coordinates of each vessel in the second mask image; the target motion state includes a ship speed and a ship direction.
In an actual application, the image segmentation processing is performed on the first image and the second image to obtain a first mask image and a second mask image, which specifically includes:
and respectively performing geometric correction preprocessing on the first image and the second image to obtain a preprocessed first image and a preprocessed second image.
And respectively carrying out image segmentation processing on the preprocessed first image and the preprocessed second image to obtain a first mask image and a second mask image.
In practical application, the image segmentation processing is performed on the preprocessed first image and the preprocessed second image to obtain a first mask image and a second mask image as follows: ship target detection is performed on the preprocessed images using the mainstream deep learning instance segmentation framework Mask R-CNN to obtain the target mask regions, where the deep learning training data adopt public datasets such as the Airbus Ship Detection dataset, annotated with mask labels.
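As an illustrative sketch of this segmentation step, the snippet below assumes a torchvision Mask R-CNN fine-tuned on a public ship dataset; the checkpoint path, score threshold and mask-merging strategy are placeholders of this sketch, not values given in the description.

```python
# Sketch only: obtaining a binary ship mask image with a fine-tuned Mask R-CNN.
import torch
import torchvision

def segment_ships(image_tensor, checkpoint_path="ship_maskrcnn.pth", score_thr=0.5):
    """image_tensor: CxHxW float tensor in [0, 1]; returns a HxW uint8 mask image."""
    model = torchvision.models.detection.maskrcnn_resnet50_fpn(num_classes=2)
    model.load_state_dict(torch.load(checkpoint_path, map_location="cpu"))
    model.eval()
    with torch.no_grad():
        output = model([image_tensor])[0]
    keep = output["scores"] > score_thr
    masks = output["masks"][keep, 0] > 0.5        # N x H x W boolean instance masks
    mask_image = masks.any(dim=0).to(torch.uint8) * 255   # merge into one mask image
    return mask_image.numpy()
```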
In practical application, the position parameters of each vessel in the first mask image and the second mask image are calculated based on the size parameters of the corresponding rectangular frames as follows: according to the size parameters of the rectangular frames corresponding to the vessels in the first mask image and the second mask image, combined with auxiliary information such as the satellite image pixel resolution and the four-corner geographic coordinates, the actual geographic projection coordinates, length and orientation of each ship target are obtained.
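One possible realization of this pixel-to-geographic conversion, assuming the auxiliary data supplies the four-corner geographic coordinates and the pixel resolution, is the bilinear interpolation sketched below; the function names and the interpolation scheme are assumptions of this illustration.

```python
# Sketch only: map a pixel position to geographic coordinates via the four corners.
import numpy as np

def pixel_to_geo(col, row, width, height, corners):
    """corners: ((lon_ul, lat_ul), (lon_ur, lat_ur), (lon_ll, lat_ll), (lon_lr, lat_lr))."""
    u, v = col / (width - 1), row / (height - 1)
    ul, ur, ll, lr = (np.array(c, dtype=float) for c in corners)
    top = ul * (1 - u) + ur * u          # interpolate along the top edge
    bottom = ll * (1 - u) + lr * u       # interpolate along the bottom edge
    lon, lat = top * (1 - v) + bottom * v
    return lon, lat

def pixel_length_to_meters(length_px, pixel_resolution_m):
    # Ship length in meters from its pixel length and the image pixel resolution.
    return length_px * pixel_resolution_m
```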
In practical application, target association is performed on the vessels in the first mask image and the vessels in the second mask image according to their position parameters as follows: based on the ship target detection results at the two imaging moments of the video satellite, target association is carried out using distance and length features. Let the numbers of targets detected at the two moments be m and n respectively, and let d_ij and l_ij denote the geographic distance and the length difference between target i detected at the first moment and target j detected at the second moment. d_ij and l_ij must each satisfy a threshold for the pair to be a candidate association, and the target association is converted into the following optimization problem:

min Σ_{j=1..n} Σ_{i=1..m} d_ij·T_ij

subject to  T_ij ∈ {0, 1},  Σ_{i=1..m} T_ij ≤ 1,  Σ_{j=1..n} T_ij ≤ 1,
            T_ij = 0 when d_ij > v_max·ΔT or l_ij > Δl,

wherein T_ij is the matching function (T_ij = 1 if target i is associated with target j, and 0 otherwise), ΔT is the imaging interval, v_max is the maximum speed constraint, and Δl is the length threshold. Solving the optimization problem by the Hungarian algorithm gives a one-to-one correspondence between the targets in the two images.
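A minimal sketch of this association step is shown below, using scipy's implementation of the Hungarian algorithm; the large penalty cost used to exclude pairs that violate the distance and length thresholds is an implementation choice of this sketch.

```python
# Sketch only: threshold-gated Hungarian association of ship targets from two frames.
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate_targets(pos1, len1, pos2, len2, dt, v_max, dl):
    """pos1: m x 2, pos2: n x 2 geographic coordinates (meters); len1, len2: ship lengths."""
    pos1, pos2 = np.asarray(pos1, float), np.asarray(pos2, float)
    len1, len2 = np.asarray(len1, float), np.asarray(len2, float)
    d = np.linalg.norm(pos1[:, None, :] - pos2[None, :, :], axis=2)   # m x n distances
    l = np.abs(len1[:, None] - len2[None, :])                         # m x n length differences
    infeasible = (d > v_max * dt) | (l > dl)
    cost = np.where(infeasible, 1e9, d)
    rows, cols = linear_sum_assignment(cost)                          # Hungarian algorithm
    return [(i, j) for i, j in zip(rows, cols) if not infeasible[i, j]]
```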
In practical application, the obtaining the target motion state of each vessel in the vessel set to be estimated according to the course estimation value of each vessel in the vessel set to be estimated, the target association result, the position coordinates of each vessel in the first mask image and the position coordinates of each vessel in the second mask image specifically includes:
judging whether the heading of each vessel in the vessel set to be estimated is parallel or not according to the heading estimation value of each vessel in the vessel set to be estimated.
If so, obtaining a target motion state of each vessel in the vessel set to be estimated according to the target association result, the position coordinates of each vessel in the first mask image and the position coordinates of each vessel in the second mask image and the time interval; the time interval is the time interval between the acquisition of the first image and the acquisition of the second image.
If not, obtaining the target motion state of each vessel in the vessel set to be estimated according to the target association result, the course estimation value of each vessel in the vessel set to be estimated, the position coordinate of each vessel in the first mask image, the position coordinate of each vessel in the second mask image and the time interval.
In practical application, the obtaining the target motion state of each vessel in the vessel set to be estimated according to the target association result, the position coordinates of each vessel in the first mask image, and the position coordinates of each vessel in the second mask image and the time interval specifically includes:
and calculating the difference of the position coordinates of any group of two vessels (namely the same vessel under two imaging moments) which are mutually related in the target association result to obtain the geographic distance of the two vessels.
And dividing the geographic distance by the time interval to obtain the target motion states of the two vessels, and determining whether the heading is opposite to the heading estimation value according to the sign of the resulting speed.
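The direct computation for this case can be sketched as follows; projecting the displacement onto the estimated heading direction is an assumption of this illustration of how the sign of the speed reveals a 180-degree heading error.

```python
# Sketch only: direct speed estimation and 180-degree heading correction.
import numpy as np

def direct_speed_and_heading(p1, p2, heading_deg, dt):
    """p1, p2: geographic positions (meters) of the same ship at the two moments."""
    disp = np.asarray(p2, dtype=float) - np.asarray(p1, dtype=float)
    heading = np.deg2rad(heading_deg)
    unit = np.array([np.cos(heading), np.sin(heading)])
    speed = float(disp @ unit) / dt      # signed speed along the estimated heading
    if speed < 0:                        # heading estimate was flipped by 180 degrees
        speed = -speed
        heading_deg = (heading_deg + 180.0) % 360.0
    return speed, heading_deg
```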
As shown in fig. 2, assuming that all ship targets in the video satellite image area approximately follow uniform linear motion over a short time, and that the inter-frame variation of the image is approximately a translation as a whole, then for ship target 1 it can be expressed as

x_1' = x_1 + v_1·Δt·cos θ_1 + Δx
y_1' = y_1 + v_1·Δt·sin θ_1 + Δy

wherein Δx and Δy are the relative positional offsets between the satellite remote sensing images at time T and time T' (T' > T, Δt = T' − T), and the positions of the target at the two times are (x_1, y_1) and (x_1', y_1'). If the two images were precisely registered, there would be no offset, i.e. Δx = 0 and Δy = 0. θ_1 is the heading estimation value, v_1 is the speed, and v_1, Δx and Δy are the unknown quantities. After rearrangement the equations can be written as

v_1·Δt·cos θ_1 + Δx = x_1' − x_1
v_1·Δt·sin θ_1 + Δy = y_1' − y_1

For target 2, two analogous equations with the unknown quantities v_2, Δx and Δy can be established, so two targets yield 4 equations in 4 unknowns, expressed as A·[v_1, v_2, Δx, Δy]^T = b with

A = [ Δt·cos θ_1   0            1   0
      Δt·sin θ_1   0            0   1
      0            Δt·cos θ_2   1   0
      0            Δt·sin θ_2   0   1 ],

b = [ x_1' − x_1,  y_1' − y_1,  x_2' − x_2,  y_2' − y_2 ]^T.

By elementary transformations of the determinant, A can be transformed to

B = [ Δt·cos θ_1   −Δt·cos θ_2   0   0
      Δt·sin θ_1   −Δt·sin θ_2   0   0
      0             Δt·cos θ_2   1   0
      0             Δt·sin θ_2   0   1 ],

and the determinant of the matrix is

|A| = Δt²·(sin θ_1·cos θ_2 − cos θ_1·sin θ_2) = Δt²·sin(θ_1 − θ_2).
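The two-target system above can be checked numerically as in the sketch below, which builds A and b, tests the determinant for degeneracy, and solves for (v_1, v_2, Δx, Δy); the variable names are assumptions of this illustration.

```python
# Sketch only: build and solve the two-target system A x = b described above.
import numpy as np

def solve_two_targets(p1, p1n, th1, p2, p2n, th2, dt):
    """p*: (x, y) at time T, p*n: (x, y) at time T'; th*: heading estimates in radians."""
    A = np.array([[dt * np.cos(th1), 0.0,              1.0, 0.0],
                  [dt * np.sin(th1), 0.0,              0.0, 1.0],
                  [0.0,              dt * np.cos(th2), 1.0, 0.0],
                  [0.0,              dt * np.sin(th2), 0.0, 1.0]])
    b = np.array([p1n[0] - p1[0], p1n[1] - p1[1], p2n[0] - p2[0], p2n[1] - p2[1]])
    det = np.linalg.det(A)               # analytically equal to dt**2 * sin(th1 - th2)
    if np.isclose(det, 0.0):             # parallel headings: no unique solution
        raise ValueError("parallel headings: the system has no unique solution")
    v1, v2, dx, dy = np.linalg.solve(A, b)
    return v1, v2, dx, dy
```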
When θ_2 = θ_1 or θ_2 = θ_1 ± 180°, i.e. when the headings of the two targets are parallel (same direction or opposite direction), |A| = 0 and the equations have no unique solution; when the headings are not parallel (crossing), the equations have a unique solution. Whether the headings of the vessels in the vessel set to be estimated are parallel is judged according to their heading estimation values. In practical application, the target motion state of each vessel in the vessel set to be estimated is obtained according to the target association result, the heading estimation value of each vessel, the position coordinates of each vessel in the first mask image, the position coordinates of each vessel in the second mask image and the time interval as follows: when there are n targets in the scene, 2n equations can be established with n + 2 unknowns, and the speeds and the relative position offset can be solved by the least squares method. The angular difference between any two headings θ_i and θ_j can be expressed as

Δθ_ij = min{180° − |θ_i − θ_j|, |θ_i − θ_j|}

The condition for the headings to cross is set as θ_min ≤ Δθ_ij ≤ 90°, wherein θ_min is the minimum angle for the headings to be considered crossing. Therefore, if there is at least one pair of ship targets in the satellite observation area whose headings satisfy the crossing condition, the speeds of all targets can be calculated. When a calculated speed is greater than or equal to 0, the heading estimation value is correct; otherwise the heading estimation value is opposite to the actual heading and needs a 180-degree correction.
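The general n-target least-squares solution, together with the sign-based 180-degree heading correction, can be sketched as follows; the heading is assumed to be measured from the x-axis of the geographic frame, and the function and variable names are assumptions of this illustration.

```python
# Sketch only: stack 2n equations in n + 2 unknowns (v_1..v_n, dx, dy) and solve
# by least squares; ships whose solved speed is negative get a 180-degree correction.
import numpy as np

def estimate_states(pos_t, pos_tn, headings_deg, dt):
    """pos_t, pos_tn: n x 2 positions at the two moments; headings_deg: heading estimates."""
    n = len(headings_deg)
    th = np.deg2rad(np.asarray(headings_deg, dtype=float))
    A = np.zeros((2 * n, n + 2))
    b = np.zeros(2 * n)
    for k in range(n):
        A[2 * k, k], A[2 * k, n] = dt * np.cos(th[k]), 1.0              # x-equation: v_k, dx
        A[2 * k + 1, k], A[2 * k + 1, n + 1] = dt * np.sin(th[k]), 1.0  # y-equation: v_k, dy
        b[2 * k] = pos_tn[k][0] - pos_t[k][0]
        b[2 * k + 1] = pos_tn[k][1] - pos_t[k][1]
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    speeds, (dx, dy) = sol[:n], sol[n:]
    headings = np.where(speeds < 0,
                        (np.asarray(headings_deg) + 180.0) % 360.0, headings_deg)
    return np.abs(speeds), headings, (dx, dy)
```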
In practical application, the ship target detection and orientation estimation can also adopt other existing deep learning methods such as YOLO and its improved variants, and the target association can also adopt association methods based on other features.
The embodiment of the invention also provides a ship target motion state estimation system based on the video satellite, which corresponds to the method and comprises the following steps:
the acquisition module is used for acquiring video satellite images of the target area at any two moments to obtain a first image and a second image; the target area comprises a plurality of vessels at any two moments, and the vessels which are the same in the first image and the second image exist; the same vessels form a set of vessels to be estimated.
And the image segmentation module is used for respectively carrying out image segmentation processing on the first image and the second image to obtain a first mask image and a second mask image.
The size parameter calculation module is used for respectively carrying out rotary rectangular frame fitting operation on the first mask image and the second mask image to obtain size parameters of rectangular frames corresponding to vessels in the first mask image and size parameters of rectangular frames corresponding to vessels in the second mask image; the dimensional parameters include center pixel position, length, and rotation angle.
The position parameter calculation module is used for calculating the position parameters of the vessels in the first mask image and the position parameters of the vessels in the second mask image based on the size parameters of the rectangular frames corresponding to the vessels in the first mask image and the size parameters of the rectangular frames corresponding to the vessels in the second mask image; the location parameters include: position coordinates and length.
And the target association module is used for carrying out target association on the vessels in the first mask image and the vessels in the second mask image according to the position parameters of the vessels in the first mask image and the position parameters of the vessels in the second mask image to obtain a target association result.
The course estimation value calculation module is used for obtaining course estimation values of the vessels in the vessel set to be estimated according to the target association result, the rotation angles of the rectangular frames corresponding to the vessels in the first mask image and the rotation angles of the rectangular frames corresponding to the vessels in the second mask image.
The target motion state determining module is used for obtaining the target motion state of each vessel in the vessel set to be estimated according to the course estimated value of each vessel in the vessel set to be estimated, the target association result, the position coordinates of each vessel in the first mask image and the position coordinates of each vessel in the second mask image; the target motion state includes a ship speed and a ship direction.
In practical application, the image segmentation module specifically includes:
and the preprocessing unit is used for respectively carrying out geometric correction preprocessing on the first image and the second image to obtain a preprocessed first image and a preprocessed second image.
And the image segmentation unit is used for respectively carrying out image segmentation processing on the preprocessed first image and the preprocessed second image to obtain a first mask image and a second mask image.
In practical application, the target motion state determining module specifically includes:
the judging unit is used for judging whether the heading of each vessel in the vessel set to be estimated is parallel or not according to the heading estimation value of each vessel in the vessel set to be estimated.
The first target motion state determining unit is used for, if the judgment result is yes, obtaining the target motion state of each vessel in the vessel set to be estimated according to the target association result, the position coordinates of each vessel in the first mask image, the position coordinates of each vessel in the second mask image and the time interval; the time interval is the time interval between the acquisition of the first image and the acquisition of the second image.
And the second target motion state determining unit is used for obtaining the target motion state of each vessel in the vessel set to be estimated according to the target association result, the course estimated value of each vessel in the vessel set to be estimated, the position coordinate of each vessel in the first mask image, the position coordinate of each vessel in the second mask image and the time interval if not.
In practical application, the first target motion state determining unit specifically includes:
and the geographic distance calculating subunit is used for calculating the difference of the position coordinates of any group of two vessels which are mutually related in the target association result to obtain the geographic distance of the two vessels.
And the first target motion state determining subunit is used for calculating the ratio of the geographic distance to the time interval to obtain the target motion states of the two vessels.
The basic idea of the invention is as follows: carrying out ship target detection by utilizing a deep learning image segmentation algorithm to obtain a target mask region, calculating the position, rotation angle, length and the like of a central pixel of a rectangular frame, and combining image auxiliary information to obtain information such as target geographic projection coordinates, directions, length and the like; performing target association by using detection results of two imaging moments of the video satellite to obtain a one-to-one correspondence of targets; judging whether the crossing condition of the course is met according to the number and the direction of the targets in the area, if so, estimating the course speed based on the crossing course angle, if not, directly calculating the course speed, and further correcting the course according to the positive and negative of the course speed, thereby solving the problem of estimating the motion state of the ship targets by the video satellite under the condition of no ground information reference.
The method can be applied to low-orbit video satellites, static orbit staring imaging satellites and other types of satellites, and improves the estimation precision of the motion state of the ship target under the condition that the video satellites have no ground information reference in the middle and open sea areas and the like.
In the present specification, each embodiment is described in a progressive manner, and each embodiment is mainly described in a different point from other embodiments, and identical and similar parts between the embodiments are all enough to refer to each other. For the system disclosed in the embodiment, since it corresponds to the method disclosed in the embodiment, the description is relatively simple, and the relevant points refer to the description of the method section.
The principles and embodiments of the present invention have been described herein with reference to specific examples, which are intended only to assist in understanding the method of the present invention and its core idea; meanwhile, modifications to the specific embodiments and the application scope made by those of ordinary skill in the art in light of the idea of the present invention all fall within the scope of the present invention. In view of the foregoing, this description should not be construed as limiting the invention.

Claims (8)

1. The ship target motion state estimation method based on the video satellite is characterized by comprising the following steps of:
acquiring video satellite images of a target area at any two moments to obtain a first image and a second image; the target area comprises a plurality of vessels at any two moments, and the vessels which are the same in the first image and the second image exist; the same vessels form a vessel set to be estimated;
respectively carrying out image segmentation processing on the first image and the second image to obtain a first mask image and a second mask image;
performing rotary rectangular frame fitting operation on the first mask image and the second mask image respectively to obtain the size parameters of rectangular frames corresponding to all ships in the first mask image and the size parameters of rectangular frames corresponding to all ships in the second mask image; the size parameters include a center pixel position, a length, and a rotation angle;
calculating the position parameters of each vessel in the first mask image and the position parameters of each vessel in the second mask image based on the size parameters of the rectangular frames corresponding to each vessel in the first mask image and the size parameters of the rectangular frames corresponding to each vessel in the second mask image; the location parameters include: position coordinates and length;
performing target association on the vessels in the first mask image and the vessels in the second mask image according to the position parameters of the vessels in the first mask image and the position parameters of the vessels in the second mask image to obtain a target association result;
obtaining course estimation values of all vessels in the vessel set to be estimated according to the target association result, the rotation angles of the rectangular frames corresponding to all vessels in the first mask image and the rotation angles of the rectangular frames corresponding to all vessels in the second mask image;
obtaining the target motion state of each vessel in the vessel set to be estimated according to the course estimated value of each vessel in the vessel set to be estimated, the target association result, the position coordinates of each vessel in the first mask image and the position coordinates of each vessel in the second mask image; the target motion state includes a ship speed and a ship direction.
2. The method for estimating a motion state of a ship target based on a video satellite according to claim 1, wherein the image segmentation processing is performed on the first image and the second image to obtain a first mask image and a second mask image, respectively, specifically comprising:
respectively performing geometric correction preprocessing on the first image and the second image to obtain a preprocessed first image and a preprocessed second image;
and respectively carrying out image segmentation processing on the preprocessed first image and the preprocessed second image to obtain a first mask image and a second mask image.
3. The method for estimating a motion state of a ship target based on a video satellite according to claim 1, wherein the obtaining the motion state of the target of each ship in the ship set to be estimated according to the heading estimation value of each ship in the ship set to be estimated, the target association result, the position coordinates of each ship in the first mask image, and the position coordinates of each ship in the second mask image specifically includes:
judging whether the heading of each vessel in the vessel set to be estimated is parallel or not according to the heading estimation value of each vessel in the vessel set to be estimated;
if so, obtaining a target motion state of each vessel in the vessel set to be estimated according to the target association result, the position coordinates of each vessel in the first mask image and the position coordinates of each vessel in the second mask image and the time interval; the time interval is a time interval between acquiring the first image and acquiring the second image;
if not, obtaining the target motion state of each vessel in the vessel set to be estimated according to the target association result, the course estimation value of each vessel in the vessel set to be estimated, the position coordinate of each vessel in the first mask image, the position coordinate of each vessel in the second mask image and the time interval.
4. The method for estimating a motion state of a ship target based on a video satellite according to claim 3, wherein the obtaining the target motion state of each ship in the ship set to be estimated according to the target association result, the position coordinates of each ship in the first mask image, the position coordinates of each ship in the second mask image and the time interval specifically includes:
for any group of two vessels which are mutually associated in the target association result, calculating the difference of the position coordinates of the two vessels to obtain the geographic distance of the two vessels;
and calculating the ratio of the geographic distance to the time interval to obtain the target motion states of the two vessels.
5. A video satellite-based ship target motion state estimation system, comprising:
the acquisition module is used for acquiring video satellite images of the target area at any two moments to obtain a first image and a second image; the target area comprises a plurality of vessels at any two moments, and the vessels which are the same in the first image and the second image exist; the same vessels form a vessel set to be estimated;
the image segmentation module is used for respectively carrying out image segmentation processing on the first image and the second image to obtain a first mask image and a second mask image;
the size parameter calculation module is used for respectively carrying out rotary rectangular frame fitting operation on the first mask image and the second mask image to obtain size parameters of rectangular frames corresponding to vessels in the first mask image and size parameters of rectangular frames corresponding to vessels in the second mask image; the size parameters include a center pixel position, a length, and a rotation angle;
the position parameter calculation module is used for calculating the position parameters of the vessels in the first mask image and the position parameters of the vessels in the second mask image based on the size parameters of the rectangular frames corresponding to the vessels in the first mask image and the size parameters of the rectangular frames corresponding to the vessels in the second mask image; the location parameters include: position coordinates and length;
the target association module is used for carrying out target association on the vessels in the first mask image and the vessels in the second mask image according to the position parameters of the vessels in the first mask image and the position parameters of the vessels in the second mask image to obtain a target association result;
the course estimation value calculation module is used for obtaining course estimation values of the vessels in the vessel set to be estimated according to the target association result, the rotation angles of the rectangular frames corresponding to the vessels in the first mask image and the rotation angles of the rectangular frames corresponding to the vessels in the second mask image;
the target motion state determining module is used for obtaining the target motion state of each vessel in the vessel set to be estimated according to the course estimated value of each vessel in the vessel set to be estimated, the target association result, the position coordinates of each vessel in the first mask image and the position coordinates of each vessel in the second mask image; the target motion state includes a ship speed and a ship direction.
6. The system for estimating a motion state of a ship target based on a video satellite according to claim 5, wherein the image segmentation module specifically comprises:
the preprocessing unit is used for respectively carrying out geometric correction preprocessing on the first image and the second image to obtain a preprocessed first image and a preprocessed second image;
and the image segmentation unit is used for respectively carrying out image segmentation processing on the preprocessed first image and the preprocessed second image to obtain a first mask image and a second mask image.
7. The video satellite-based ship target motion state estimation system of claim 5, wherein the target motion state determination module specifically comprises:
the judging unit is used for judging whether the heading of each vessel in the vessel set to be estimated is parallel or not according to the heading estimation value of each vessel in the vessel set to be estimated;
the first target motion state determining unit is used for, if the judgment result is yes, obtaining the target motion state of each vessel in the vessel set to be estimated according to the target association result, the position coordinates of each vessel in the first mask image, the position coordinates of each vessel in the second mask image and the time interval; the time interval is a time interval between acquiring the first image and acquiring the second image;
and the second target motion state determining unit is used for obtaining the target motion state of each vessel in the vessel set to be estimated according to the target association result, the course estimated value of each vessel in the vessel set to be estimated, the position coordinate of each vessel in the first mask image, the position coordinate of each vessel in the second mask image and the time interval if not.
8. The video satellite-based ship target motion state estimation system according to claim 7, wherein the first target motion state determination unit specifically comprises:
the geographic distance calculating subunit is used for calculating the difference of the position coordinates of any group of two vessels which are mutually related in the target association result to obtain the geographic distance of the two vessels;
and the first target motion state determining subunit is used for calculating the ratio of the geographic distance to the time interval to obtain the target motion states of the two vessels.
CN202310069909.1A 2023-02-07 2023-02-07 Ship target motion state estimation method and system based on video satellite Active CN116188519B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310069909.1A CN116188519B (en) 2023-02-07 2023-02-07 Ship target motion state estimation method and system based on video satellite

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310069909.1A CN116188519B (en) 2023-02-07 2023-02-07 Ship target motion state estimation method and system based on video satellite

Publications (2)

Publication Number Publication Date
CN116188519A true CN116188519A (en) 2023-05-30
CN116188519B CN116188519B (en) 2023-10-03

Family

ID=86441734

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310069909.1A Active CN116188519B (en) 2023-02-07 2023-02-07 Ship target motion state estimation method and system based on video satellite

Country Status (1)

Country Link
CN (1) CN116188519B (en)

Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2014025786A (en) * 2012-07-26 2014-02-06 Hitachi Ltd Target motion analysis method and target motion analysis device
CN106485722A (en) * 2016-09-21 2017-03-08 北京航天宏图信息技术股份有限公司 Reach port in a kind of remote sensing image Ship Detection
US20180286052A1 (en) * 2017-03-30 2018-10-04 4DM Inc. Object motion mapping using panchromatic and multispectral imagery from single pass electro-optical satellite imaging sensors
CN108805904A (en) * 2018-05-25 2018-11-13 中国空间技术研究院 A kind of moving ship detection and tracking based on satellite sequence image
CN109034075A (en) * 2018-07-31 2018-12-18 中国人民解放军61646部队 The method of face battle array gazing type remote sensing satellite tracking Ship Target
CN111179638A (en) * 2020-01-08 2020-05-19 中国船舶重工集团公司第七二四研究所 Ship AIS target navigation monitoring method based on time sequence
US20200160061A1 (en) * 2017-12-11 2020-05-21 Zhuhai Da Hengqin Technology Development Co., Ltd. Automatic ship tracking method and system based on deep learning network and mean shift
CN111402299A (en) * 2020-04-08 2020-07-10 中国人民解放军海军航空大学 Remote sensing image target tracking method and device based on stationary orbit staring satellite
CN112346096A (en) * 2020-11-10 2021-02-09 中国人民解放军海军航空大学 High-low orbit remote sensing satellite ship target track and point track correlation method and system
US20210073573A1 (en) * 2018-11-15 2021-03-11 Shanghai Advanced Avionics Co., Ltd. Ship identity recognition method based on fusion of ais data and video data
KR102235787B1 (en) * 2020-01-09 2021-04-05 씨드로닉스(주) Device and method for monitoring a berthing
CN112686095A (en) * 2020-12-04 2021-04-20 中国人民解放军海军航空大学 Ship target track correlation method for stationary orbit staring satellite remote sensing image
CN113393497A (en) * 2021-07-07 2021-09-14 中国人民解放军海军航空大学 Ship target tracking method, device and equipment of sequence remote sensing image under condition of broken clouds
CN114494327A (en) * 2022-01-19 2022-05-13 三亚海兰寰宇海洋信息科技有限公司 Method, device and equipment for processing flight path of target object
CN114898213A (en) * 2022-05-19 2022-08-12 中国人民解放军国防科技大学 AIS knowledge assistance-based remote sensing image rotating ship target detection method

Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2014025786A (en) * 2012-07-26 2014-02-06 Hitachi Ltd Target motion analysis method and target motion analysis device
CN106485722A (en) * 2016-09-21 2017-03-08 北京航天宏图信息技术股份有限公司 Reach port in a kind of remote sensing image Ship Detection
US20180286052A1 (en) * 2017-03-30 2018-10-04 4DM Inc. Object motion mapping using panchromatic and multispectral imagery from single pass electro-optical satellite imaging sensors
US20200160061A1 (en) * 2017-12-11 2020-05-21 Zhuhai Da Hengqin Technology Development Co., Ltd. Automatic ship tracking method and system based on deep learning network and mean shift
CN108805904A (en) * 2018-05-25 2018-11-13 中国空间技术研究院 A kind of moving ship detection and tracking based on satellite sequence image
CN109034075A (en) * 2018-07-31 2018-12-18 中国人民解放军61646部队 The method of face battle array gazing type remote sensing satellite tracking Ship Target
US20210073573A1 (en) * 2018-11-15 2021-03-11 Shanghai Advanced Avionics Co., Ltd. Ship identity recognition method based on fusion of ais data and video data
CN111179638A (en) * 2020-01-08 2020-05-19 中国船舶重工集团公司第七二四研究所 Ship AIS target navigation monitoring method based on time sequence
KR102235787B1 (en) * 2020-01-09 2021-04-05 씨드로닉스(주) Device and method for monitoring a berthing
CN111402299A (en) * 2020-04-08 2020-07-10 中国人民解放军海军航空大学 Remote sensing image target tracking method and device based on stationary orbit staring satellite
CN112346096A (en) * 2020-11-10 2021-02-09 中国人民解放军海军航空大学 High-low orbit remote sensing satellite ship target track and point track correlation method and system
CN112686095A (en) * 2020-12-04 2021-04-20 中国人民解放军海军航空大学 Ship target track correlation method for stationary orbit staring satellite remote sensing image
CN113393497A (en) * 2021-07-07 2021-09-14 中国人民解放军海军航空大学 Ship target tracking method, device and equipment of sequence remote sensing image under condition of broken clouds
CN114494327A (en) * 2022-01-19 2022-05-13 三亚海兰寰宇海洋信息科技有限公司 Method, device and equipment for processing flight path of target object
CN114898213A (en) * 2022-05-19 2022-08-12 中国人民解放军国防科技大学 AIS knowledge assistance-based remote sensing image rotating ship target detection method

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
LINLIN ZHANG et al.: "A Novel Detector for Adaptive Detection of Weak and Small Ships in Compact Polarimetric SAR", IEEE Journal on Miniaturization for Air and Space Systems, vol. 3, issue 3, September 2022 *
岳丽军; 卫强; 王玉菊; 莫钦华: "Simulation Research on Ship Detection Capability of Satellites", Ship Electronic Engineering, no. 12 *
董凯旋; 张宇琦; 李政: "Ship Target Detection and Parameter Estimation Method Based on Remote Sensing Images", Electronic Science and Technology, no. 02 *

Also Published As

Publication number Publication date
CN116188519B (en) 2023-10-03

Similar Documents

Publication Publication Date Title
Alonso et al. Accurate global localization using visual odometry and digital maps on urban environments
CN105352509B (en) Unmanned plane motion target tracking and localization method under geography information space-time restriction
CN106127787B (en) A kind of camera calibration method based on Inverse projection
CN106373159A (en) Simplified unmanned aerial vehicle multi-target location method
CN108151713A (en) A kind of quick position and orientation estimation methods of monocular VO
CN110246194A (en) Method for quickly calibrating rotation relation between camera and inertia measurement unit
Bhamidipati et al. SLAM-based integrity monitoring using GPS and fish-eye camera
CN111915651A (en) Visual pose real-time estimation method based on digital image map and feature point tracking
CN106289156B (en) The method of photography point solar elevation is obtained when a kind of satellite is imaged with any attitude
Huang et al. 360vo: Visual odometry using a single 360 camera
CN105303518A (en) Region feature based video inter-frame splicing method
CN113295171B (en) Monocular vision-based attitude estimation method for rotating rigid body spacecraft
CN116188519B (en) Ship target motion state estimation method and system based on video satellite
Pan et al. An optical flow-based integrated navigation system inspired by insect vision
Lin et al. PVO: Panoramic visual odometry
CN117173215A (en) Inland navigation ship whole-course track identification method and system crossing cameras
Bryson et al. A comparison of feature and pose-based mapping using vision, inertial and GPS on a UAV
Kim et al. Target detection and position likelihood using an aerial image sensor
CN114821494B (en) Ship information matching method and device
Van Hamme et al. Robust visual odometry using uncertainty models
CN114119752A (en) Indoor and outdoor linked robot positioning method based on GNSS and vision
Kaiser et al. Position and orientation of an aerial vehicle through chained, vision-based pose reconstruction
CN111127319A (en) Ground pixel resolution calculation method for push-broom in motion imaging
CN112577463A (en) Attitude parameter corrected spacecraft monocular vision distance measuring method
Grelsson et al. HorizonNet for visual terrain navigation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant