CN116755169B - Small target detection method and system based on star map identification and brightness priori information


Info

Publication number
CN116755169B
Authority
CN
China
Prior art keywords
image, coordinate system, star, candidate target, camera
Legal status: Active
Application number
CN202310696825.0A
Other languages
Chinese (zh)
Other versions
CN116755169A (en)
Inventors
Wang Ling (汪玲)
Yu Jincheng (俞锦程)
Liu Hanhan (刘寒寒)
Zhang Xiang (张翔)
Current Assignee
Nanjing University of Aeronautics and Astronautics
Original Assignee
Nanjing University of Aeronautics and Astronautics
Priority date
Filing date
Publication date
Application filed by Nanjing University of Aeronautics and Astronautics filed Critical Nanjing University of Aeronautics and Astronautics
Priority to CN202310696825.0A priority Critical patent/CN116755169B/en
Publication of CN116755169A publication Critical patent/CN116755169A/en
Application granted granted Critical
Publication of CN116755169B publication Critical patent/CN116755169B/en


Classifications

    • G01V8/10 Prospecting or detecting by optical means; detecting, e.g. by using light barriers
    • G06T7/136 Image analysis; segmentation; edge detection involving thresholding
    • G06T7/187 Segmentation; edge detection involving region growing, region merging, or connected component labelling
    • G06T7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T7/66 Analysis of geometric attributes of image moments or centre of gravity
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T7/90 Determination of colour characteristics

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Life Sciences & Earth Sciences (AREA)
  • Geophysics (AREA)
  • Geometry (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a small target detection method and system based on star map identification and brightness prior information. The method processes a remote image to obtain the centroid and brightness information of the point clusters in the image; constructs a conversion matrix from the geocentric inertial coordinate system to the camera coordinate system and projects the stars of a navigation star catalog onto the image plane using this matrix and the camera's intrinsic parameters; applies a star rejection algorithm that calculates the distance between each projected star position and each cluster centroid to separate the stars from the candidate targets, and verifies the candidate targets against the brightness prior information; and finally performs ROI frame selection on the current frame image, captures the target inside the frame according to established judgment logic, converts the candidate target's pixel coordinates into a line-of-sight angle, and outputs the line-of-sight angle information. The invention improves detection accuracy and detection speed and realizes adaptive detection and angle measurement of space point targets from far to near.

Description

Small target detection method and system based on star map identification and brightness priori information
Technical Field
The invention belongs to the field of space remote point target identification, and particularly relates to a small target detection method and system based on star map identification and brightness priori information.
Background
With the development of aerospace technology and the intensifying exploration and utilization of space resources by countries around the world, the scope of aerospace missions has greatly expanded and diversified: on-orbit servicing, on-orbit assembly, space debris removal, formation cooperation, deep space exploration, space countermeasures, and so on. The approach and rendezvous operations in these missions place higher demands on the autonomous control capability of satellites. Autonomous situational awareness is a precondition for such control, and the autonomous identification of non-cooperative targets such as unknown satellites, defunct satellites, and space debris is especially critical.
Space point target detection technology has been developing for some forty years, and scholars at home and abroad have produced many results in point target detection and identification. Most of that research, however, concerns point targets in other settings; research on space point target detection, especially under a large field of view, remains limited. The characteristics exploited in point target detection include the spatial domain, the time domain, and the spectrum. Point target detection can be broadly divided into the detection of stationary or slowly moving space point targets and the detection of moving space point targets.
For a slowly moving or stationary space point target, the target moves slowly and its position changes little between image frames, so the main approach is to identify the stars in the long-distance image through star map identification and thereby find the target. The key of this detection method is identifying the stars in the image with a star map identification algorithm. Many star map identification algorithms have been proposed, the triangle algorithm being among the most widely used at present. However, because the feature dimension of the triangle algorithm is low, its measurement error is large, resulting in a low target identification rate. The grid algorithm proposed by Padgett and Delgado is another commonly used identification method. Its identification rate is high, and it is notably robust to star position uncertainty and star magnitude uncertainty, but it requires many observed stars in the field of view, which limits the range of application of the grid algorithm to a large extent.
Disclosure of Invention
The technical problem to be solved by the invention is as follows: to provide a small target detection method and system based on star map identification and brightness prior information which, when identifying a slowly moving or stationary target, does not depend on the number of stars in the field of view; brightness verification and ROI frame selection are added on this basis, improving the applicability of the method, reducing false alarms and missed detections, and increasing its speed.
The invention adopts the following technical scheme for solving the corresponding engineering problems:
The invention provides a small target detection method based on star map identification and brightness priori information, which comprises the following steps:
S1, image preprocessing is performed on the remote image to obtain the centroid and brightness information of the point clusters in the image.
S2, a conversion matrix from the geocentric inertial coordinate system to the camera coordinate system is constructed from the initial position and velocity of the tracking star, the conversion attitude angles between the orbit coordinate system and the body coordinate system, and the installation matrix of the camera.
S3, the stars in the navigation star catalog are projected onto the image plane according to the conversion matrix from the geocentric inertial coordinate system to the camera coordinate system and the intrinsic parameters of the camera.
S4, a star rejection algorithm calculates the distance between each projected star position and each cluster centroid in the image to identify the stars and the candidate targets, and the candidate targets are verified against the brightness prior information.
S5, once a candidate target is stably identified, each subsequently input frame is ROI frame-selected around the candidate position identified in the previous frame; the candidate target is captured inside the frame according to established judgment logic, its pixel coordinates are converted into a line-of-sight angle, and the line-of-sight angle information of the candidate target is output.
Further, in step S1, the specific steps of preprocessing the image are as follows:
S101, calculate a global threshold by threshold iteration:
S1011, select an initial estimate for the global threshold, namely the average gray level of the image.
S1012, divide the image with the current estimate to generate two groups of pixels: group G1 consists of the pixels whose gray value is greater than the estimate, and group G2 consists of the pixels whose gray value is less than or equal to it.
S1013, calculate the average gray values m1 and m2 of the pixels in G1 and G2, and generate a new threshold T = (m1 + m2) / 2.
S1014, repeat steps S1012-S1013 until the difference between successive thresholds is smaller than the predefined parameter 0.01; the last threshold is the final global threshold.
The global threshold algorithm based on threshold iteration automatically adapts the binarization threshold to the number of bright spots in the image, requires no manual tuning, and is therefore more adaptive.
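The iterative thresholding of steps S1011-S1014 can be sketched as follows (an illustrative Python sketch, not part of the patent's disclosure; the synthetic test image and the convergence tolerance of 0.01 follow the text, while everything else is an assumption):

```python
import numpy as np

def iterative_global_threshold(img, eps=0.01):
    """Iterative global threshold (steps S1011-S1014): start from the
    mean gray level, split the pixels into two groups around the current
    threshold, and average the group means until the threshold changes
    by less than `eps`."""
    t = img.mean()                      # S1011: initial estimate
    while True:
        g1 = img[img > t]               # S1012: brighter group G1
        g2 = img[img <= t]              # darker group G2
        m1 = g1.mean() if g1.size else t
        m2 = g2.mean() if g2.size else t
        t_new = (m1 + m2) / 2.0         # S1013: new threshold
        if abs(t_new - t) < eps:        # S1014: convergence test
            return t_new
        t = t_new

# Binarization with the converged threshold (step S102)
img = np.zeros((64, 64))
img[30:33, 30:33] = 200.0               # one bright 3x3 spot on a dark background
T = iterative_global_threshold(img)
binary = (img > T).astype(np.uint8)
```

On this synthetic image the threshold settles midway between the spot intensity and the dark background, so only the nine spot pixels survive binarization.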
S102, after the final global threshold T is obtained, perform the binarization operation on the image to obtain the binarized image; the specific formula is:
g(x, y) = 1, if f(x, y) > T; g(x, y) = 0, otherwise
where g(x, y) represents the binarized image and f(x, y) represents the gray value of pixel (x, y) in the original image.
S103, since the light-spot energy distribution of the clusters in the image approximates a Gaussian distribution, connected domains are divided with four-connectivity. Four-connected domain division: taking each pixel with value 1 as a center point, any value-1 pixel directly above, below, to the left of, or to the right of it is considered part of the same region, i.e. the same object.
S104, extract the centroid point with the centroid method: after the connected domains are divided, let an object occupy n pixels; the specific formula is:
x_c = (Σ_{i=1}^{n} m_i · x_i) / (Σ_{i=1}^{n} m_i),  y_c = (Σ_{i=1}^{n} m_i · y_i) / (Σ_{i=1}^{n} m_i)
where m_i represents the gray value of the i-th pixel of the object, (x_i, y_i) represents the coordinates of the i-th pixel of the object in the image, and (x_c, y_c) represents the centroid coordinates of the object.
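Steps S103-S104 can be sketched together: a BFS-based four-connected labeling pass followed by the intensity-weighted centroid. This is an illustrative sketch only; the toy spot and its values are assumptions, not the patent's data:

```python
import numpy as np
from collections import deque

def four_connected_components(binary):
    """Label four-connected regions of value-1 pixels (step S103) via BFS."""
    labels = np.zeros_like(binary, dtype=int)
    current = 0
    h, w = binary.shape
    for sy, sx in zip(*np.nonzero(binary)):
        if labels[sy, sx]:
            continue
        current += 1
        queue = deque([(sy, sx)])
        labels[sy, sx] = current
        while queue:
            y, x = queue.popleft()
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):  # up/down/left/right
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and binary[ny, nx] and not labels[ny, nx]:
                    labels[ny, nx] = current
                    queue.append((ny, nx))
    return labels, current

def gray_centroid(gray, labels, label):
    """Intensity-weighted centroid of one labeled region (step S104)."""
    ys, xs = np.nonzero(labels == label)
    m = gray[ys, xs].astype(float)
    return (xs * m).sum() / m.sum(), (ys * m).sum() / m.sum()

gray = np.zeros((16, 16))
gray[4:7, 4:7] = [[10, 20, 10], [20, 40, 20], [10, 20, 10]]  # Gaussian-like spot
labels, n = four_connected_components((gray > 0).astype(np.uint8))
xc, yc = gray_centroid(gray, labels, 1)   # symmetric spot, centroid at (5, 5)
```

Because the spot is symmetric, the weighted centroid lands exactly on its center, illustrating the sub-pixel capability of the centroid method.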
Further, in step S2, the specific steps of constructing the conversion matrix from the geocentric inertial coordinate system to the camera coordinate system are as follows:
S201, because the stars are extremely far from the Earth, the Earth's centroid and the spacecraft's centroid can be considered coincident for the purpose of star directions, without considering the error caused by coordinate translation. Using the position and velocity vectors of the satellite carrying the camera, i.e. the tracking star, at the current moment, [r_i, v_i], the position is converted from the geocentric inertial coordinate system to the orbit coordinate system to obtain the position vector in the orbit coordinate system, specifically expressed as:
r_o = L_oi · r_i
where L_oi represents the conversion matrix from the inertial coordinate system to the orbit coordinate system.
S202, according to Euler's theorem, rotate the orbit coordinate system three times in the 3-1-2 sequence to obtain the spacecraft body coordinate system; the three rotation angles are the yaw, roll, and pitch angles, giving the position vector in the spacecraft body frame, specifically expressed as:
r_b = R_Y(θ) · R_X(φ) · R_Z(ψ) · r_o = L_bo · r_o
where θ represents the pitch angle, φ represents the roll angle, ψ represents the yaw angle, R_X, R_Y and R_Z represent the rotation matrices of counterclockwise rotation transformation of coordinate points about the X, Y, and Z axes respectively, and L_bi = L_bo · L_oi is the rotation matrix from the inertial coordinate system to the satellite body coordinate system.
S203, according to the installation matrix of the camera on the body coordinate system, obtain the conversion matrix from the geocentric inertial coordinate system to the camera coordinate system:
L_ci = (R_3 · R_2 · R_1) · L_bi
where R_3, R_2 and R_1 represent the rotation transformation matrices corresponding to rotations about the Z, Y, and X axes of the coordinate system respectively, their product constituting the camera installation matrix.
Through the matrix L_ci, the stars in the inertial coordinate system are projected into the camera coordinate system.
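The chained frame rotations of S201-S203 can be illustrated with elementary rotation matrices. This is a sketch under assumed sign conventions (counterclockwise frame rotations, as stated in the text); the sample angles are arbitrary:

```python
import numpy as np

def Rx(a):
    """Frame rotation about the X axis (roll)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, s], [0, -s, c]])

def Ry(a):
    """Frame rotation about the Y axis (pitch)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, -s], [0, 1, 0], [s, 0, c]])

def Rz(a):
    """Frame rotation about the Z axis (yaw)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, s, 0], [-s, c, 0], [0, 0, 1]])

def orbit_to_body(yaw, roll, pitch):
    """3-1-2 sequence of step S202: rotate about Z (yaw, '3'),
    then X (roll, '1'), then Y (pitch, '2')."""
    return Ry(pitch) @ Rx(roll) @ Rz(yaw)

# Arbitrary sample attitude angles (radians)
L_bo = orbit_to_body(np.radians(10.0), np.radians(-5.0), np.radians(3.0))
```

Any composition of elementary rotations must remain a proper rotation, so L_bo is orthogonal with determinant +1, a useful sanity check when assembling the full inertial-to-camera chain.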
Further, in step S3, the specific content of projecting the stars in the navigation star catalog onto the image plane is:
In the camera coordinate system, a star with coordinates (X_c, Y_c, Z_c) has coordinates in the imaging coordinate system O_R-xy expressed as:
x = f · X_c / Z_c,  y = f · Y_c / Z_c
where x represents the abscissa in the imaging coordinate system, y represents the ordinate in the imaging coordinate system, and f represents the focal length of the camera.
Without considering lens distortion, the coordinates (x, y) are transformed to the pixel coordinate system O_P-uv according to the geometric relationship, specifically expressed as:
u = x / d_x + u_0,  v = y / d_y + v_0
where d_x and d_y represent the pixel size and u_0 and v_0 represent the pixel plane center coordinates.
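The two-stage projection (camera frame to imaging plane, imaging plane to pixels) can be sketched as follows; the focal length, pixel pitch, and sensor size in the example are assumptions for illustration, not the patent's parameters:

```python
def project_to_pixel(p_cam, f, dx, dy, u0, v0):
    """Project a point in the camera frame to pixel coordinates.

    p_cam: (X, Y, Z) with Z along the optical axis; f: focal length (m);
    dx, dy: pixel pitch (m); (u0, v0): principal point (pixels).
    Lens distortion is ignored, as in step S3.
    """
    X, Y, Z = p_cam
    x = f * X / Z                 # imaging-plane coordinates (metric)
    y = f * Y / Z
    u = x / dx + u0               # metric -> pixel coordinates
    v = y / dy + v0
    return u, v

# A point 1 km away, 10 m off-axis; 50 mm lens, 10 um pixels, 1024x1024 sensor
u, v = project_to_pixel((10.0, 0.0, 1000.0), f=0.05, dx=1e-5, dy=1e-5, u0=512, v0=512)
```

With these numbers the metric offset x = 0.05 · 10 / 1000 = 5e-4 m maps to 50 pixels right of center, so (u, v) = (562, 512).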
The initial velocity and position information of the tracking star are determined by the orbit and velocity information of the given tracking star, the attitude angle information is determined by the conversion relation of the satellite body and the orbit coordinate system, and the camera installation matrix is determined by the installation position of the camera on the satellite and the definition of the camera coordinate system.
Further, in step S4, the specific content is: calculate the distance between each projected star position and each cluster centroid in the image; when the distance between a projected star and a cluster is smaller than a preset threshold (the threshold being the maximum pixel distance at which a cluster is still judged to be a star point), the cluster is judged to be a star; when the distance between every projected star and the cluster is larger than the preset threshold, the cluster is judged to be a candidate target.
In practice, an unstable platform causes camera jitter, which shifts the candidate target and star positions in the image. Considering this, the threshold is set by iterative loop reduction: starting from the initially set threshold, whenever no candidate target can be detected the threshold is reduced by 1 pixel and star rejection is performed again, down to the minimum threshold; if the algorithm still cannot find a candidate target, it is judged that no candidate target exists in the image. Star rejection thus accounts for the influence of actual platform stability on candidate target identification; the adaptable platform stability is 0.3°/s, thereby reducing false alarms and missed detections.
After the candidate target is obtained, it is verified with the brightness prior information: if the cluster matching the brightness prior and the candidate target are the same cluster, the algorithm is judged to have identified correctly.
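The star rejection loop with shrinking threshold and brightness-prior verification can be sketched as follows. The initial/minimum thresholds of 9 and 5 pixels match the embodiment; the brightness tolerance, input arrays, and function shape are assumptions:

```python
import numpy as np

def reject_stars(projected, clusters, brightness, target_bright_est,
                 t_init=9, t_min=5, bright_tol=0.5):
    """Star rejection (step S4) with an iteratively shrinking threshold.

    projected: Nx2 projected star positions; clusters: Mx2 cluster
    centroids; brightness: per-cluster brightness; target_bright_est:
    brightness prior for the target. Returns the index of the verified
    candidate cluster, or None if no candidate exists in the image.
    """
    for t in range(t_init, t_min - 1, -1):      # reduce threshold 1 px at a time
        candidates = []
        for i, c in enumerate(clusters):
            d = np.linalg.norm(projected - c, axis=1).min()
            if d > t:                           # no projected star nearby
                candidates.append(i)            # -> candidate target
        for i in candidates:                    # brightness-prior verification
            if abs(brightness[i] - target_bright_est) < bright_tol:
                return i
    return None

projected = np.array([[100.0, 100.0], [300.0, 250.0]])   # projected stars
clusters = np.array([[100.5, 99.8], [300.2, 250.1], [480.0, 120.0]])
brightness = np.array([5.1, 4.8, 7.0])
idx = reject_stars(projected, clusters, brightness, target_bright_est=7.1)
```

The first two clusters sit within a fraction of a pixel of a projected star and are rejected as stars; the third has no star nearby and matches the brightness prior, so it is returned as the candidate.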
Further, in step S5, the specific steps of outputting the line-of-sight angle information of the candidate target are:
S501, determine the size of the ROI frame from the field of view of the actual camera and the body jitter error of the satellite platform in the task: determine the maximum angle the candidate target can move between image frames, and determine the frame size from the field of view of the selected camera and the size S (in pixels) of the image; the specific formula is:
frame size = (Δω / α) · S
where Δω represents the maximum movement angle, α represents the field of view of the selected camera, and S represents the size of the resulting image.
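The ROI sizing of step S501 reduces to one line; rounding up to an odd size keeps the previous centroid exactly at the ROI center, which is how the embodiment arrives at 253 from 8°, 65°, and 2048 pixels. The odd-rounding rule here is inferred from that adjustment:

```python
import math

def roi_size_pixels(delta_omega_deg, fov_deg, image_size_px):
    """ROI side length (step S501): the fraction of the field of view the
    target can traverse between frames, scaled to image pixels and rounded
    up to an odd size so the previous centroid sits at the ROI center."""
    size = math.ceil(delta_omega_deg / fov_deg * image_size_px)
    return size if size % 2 == 1 else size + 1

roi = roi_size_pixels(8.0, 65.0, 2048)   # embodiment values
```

With the embodiment's values, 8/65 · 2048 ≈ 252.06 pixels, which rounds up to the odd size 253.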
S502, after the frame size is selected, take the centroid coordinates of the candidate target in the previous frame as the center of the frame, perform image preprocessing inside the frame to obtain all bright spots, and perform distance calculation and brightness matching between the identified spots and the previous frame's candidate target; the spot with the smallest distance and brightness difference from the previous frame's candidate target is the candidate target.
Let the coordinates of the candidate target in the pixel plane be (u_t, v_t). The candidate target's line of sight is expressed in vector form in the camera coordinate system and, after unitization, the vector is expressed as i_p = [i_f, i_y, i_z]; the azimuth and pitch angles in the camera frame are then calculated, with the specific formula:
α = arctan(i_y / i_f),  β = arctan(i_z / √(i_f² + i_y²))
where α represents the azimuth angle and β represents the pitch angle. The ROI algorithm is well suited to slowly moving candidate targets, runs fast, and has a high identification rate.
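The pixel-to-angle conversion can be sketched by inverting the projection of step S3 and unitizing the line-of-sight vector. The arctan convention and the camera parameters in the example are assumptions consistent with the reconstructed formulas above:

```python
import numpy as np

def line_of_sight_angles(u, v, f, dx, dy, u0, v0):
    """Convert a target's pixel coordinates to azimuth/pitch angles (deg)
    in the camera frame. The LOS vector [f, y, z] is built from the pixel
    offsets and unitized to i_p = [i_f, i_y, i_z]."""
    y = (u - u0) * dx                  # back to metric imaging-plane coords
    z = (v - v0) * dy
    i_p = np.array([f, y, z])
    i_f, i_y, i_z = i_p / np.linalg.norm(i_p)
    azimuth = np.degrees(np.arctan2(i_y, i_f))
    pitch = np.degrees(np.arctan2(i_z, np.hypot(i_f, i_y)))
    return azimuth, pitch

# Target 50 pixels right of center; 50 mm lens, 10 um pixels, 1024x1024 sensor
az, el = line_of_sight_angles(562.0, 512.0, f=0.05, dx=1e-5, dy=1e-5, u0=512, v0=512)
```

A 50-pixel horizontal offset with this geometry corresponds to arctan(0.01), about 0.573° of azimuth and zero pitch.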
Furthermore, the invention provides a small target detection system based on star map identification and brightness prior information, which comprises:
a remote image preprocessing module, used for performing image preprocessing on the remote image to obtain the centroid and brightness information of the point clusters in the image;
a conversion matrix construction module, used for constructing the conversion matrix from the geocentric inertial coordinate system to the camera coordinate system from the initial position and velocity of the tracking star, the conversion attitude angles between the orbit coordinate system and the body coordinate system, and the installation matrix of the camera;
a star projection module, used for projecting the stars in the navigation star catalog onto the image plane according to the conversion matrix from the geocentric inertial coordinate system to the camera coordinate system and the intrinsic parameters of the camera;
a star and candidate target identification module, used for calculating the distance between each projected star position and each cluster centroid with the star rejection algorithm, identifying the stars and the candidate targets in the image, and verifying the candidate targets against the brightness prior information; and
a line-of-sight angle output module, used for performing, once a candidate target is stably identified, ROI frame selection on the current frame image around the candidate position identified in the previous frame, capturing the candidate target inside the frame according to the established judgment logic, converting its pixel coordinates into a line-of-sight angle, and outputting the line-of-sight angle information of the candidate target.
Furthermore, the invention provides an electronic device, comprising a memory, a processor and a computer program stored on the memory and capable of running on the processor, wherein the processor executes the computer program to realize the steps of the small target detection method based on star chart identification and brightness priori information.
Further, the present invention proposes a computer-readable storage medium storing a computer program which, when executed by a processor, performs the small target detection method based on star map identification and brightness prior information described above.
Compared with the prior art, the invention, by adopting the above technical scheme, has the following remarkable technical effects:
(1) The energy distribution of the bright clusters in the remote image is considered, and the binarization threshold is solved by adaptive global threshold iteration, improving the adaptability of the algorithm. Connected domain division uses four-connectivity, which conforms to the Gaussian distribution law of the spots, and can effectively extract the pixels and edge points of the bright clusters in the image. Finally, the cluster centroid is obtained by the centroid method with sub-pixel extraction accuracy, which facilitates subsequent target identification.
(2) On the premise of stable target identification through star rejection, ROI tracking can effectively and accurately find the target, and the algorithm is fast, achieving an image processing rate of 20 Hz.
Drawings
FIG. 1 is a flow chart of an overall implementation of the present invention.
FIG. 2 is a graph of angular error for point target recognition of a simulated remote image in an embodiment of the invention.
Detailed Description
The invention is further described below with reference to the accompanying drawings. The following examples are only for more clearly illustrating the technical aspects of the present invention, and are not intended to limit the scope of the present invention.
The invention provides a small target detection method based on star map identification and brightness prior information, in which a star map is inversely generated using the SAO (Smithsonian Astrophysical Observatory) star catalog, the intrinsic and extrinsic parameters of the camera, the position and velocity information of the tracking star, and the conversion relations between the various coordinate systems, and ROI frame selection is performed to identify candidate targets once their identification is stable. The method is suitable for identifying a high-orbit non-cooperative target in a 15 km-2 km approach task, with fast operation and high target detection accuracy.
The long-distance image sequence in this embodiment comprises 10712 images in total, capturing the star distribution during approach to a target from 10.5 km to 500 m. As shown in FIG. 1, the specific steps are as follows:
S1, the camera intrinsic parameters used for the remote image include the focal length, image resolution, pixel size, and camera field angle; the specific parameters are shown in Table 1.
TABLE 1 Point target image simulation Camera parameters
Image preprocessing is carried out on the remote image to obtain centroid and brightness information of point clusters in the image, and the specific steps are as follows:
S101, calculate a global threshold by threshold iteration:
S1011, select an initial estimate for the global threshold, namely the average gray level of the image.
S1012, divide the image with the current estimate to generate two groups of pixels: group G1 consists of the pixels whose gray value is greater than the estimate, and group G2 consists of the pixels whose gray value is less than or equal to it.
S1013, calculate the average gray values m1 and m2 of the pixels in G1 and G2, and generate a new threshold T = (m1 + m2) / 2.
S1014, repeat steps S1012-S1013 until the difference between successive thresholds is smaller than the predefined parameter 0.01; the last threshold is the final global threshold.
S102, after the global threshold T is obtained, perform the binarization operation on the image to obtain the binarized image; the specific formula is:
g(x, y) = 1, if f(x, y) > T; g(x, y) = 0, otherwise
where g(x, y) represents the binarized image and f(x, y) represents the gray value of pixel (x, y) in the original image.
S103, to simulate the dispersion effect of camera imaging and facilitate improving the subsequent centroid detection accuracy, the long-distance image simulation uses dispersed light spots of size 5 × 5, making the energy distribution of the stars and the target approximate a Gaussian distribution.
Four-connected domain division: taking each pixel with value 1 as a center point, any value-1 pixel directly above, below, to the left of, or to the right of it is considered part of the same region, i.e. the same object.
S104, extract the centroid point with the centroid method: after the connected domains are divided, let an object occupy n pixels; the specific formula is:
x_c = (Σ_{i=1}^{n} m_i · x_i) / (Σ_{i=1}^{n} m_i),  y_c = (Σ_{i=1}^{n} m_i · y_i) / (Σ_{i=1}^{n} m_i)
where m_i represents the gray value of the i-th pixel of the object, (x_i, y_i) represents the coordinates of the i-th pixel of the object in the image, and (x_c, y_c) represents the centroid coordinates of the object.
S2, construct the conversion matrix from the geocentric inertial coordinate system to the camera coordinate system from the initial position and velocity of the tracking star, the conversion attitude angles between the orbit and body coordinate systems, and the installation matrix of the camera; the specific steps are as follows:
S201, using the position and velocity vectors of the satellite carrying the camera at the current moment, [r_i, v_i], convert the position from the geocentric inertial coordinate system to the orbit coordinate system to obtain the position vector in the orbit coordinate system, specifically expressed as:
r_o = L_oi · r_i
where L_oi represents the conversion matrix from the inertial coordinate system to the orbit coordinate system.
In this embodiment, the satellite state at 10.5 km is selected; the position (m) and velocity vector (m/s) at the current moment are respectively:
The rotation matrix from the inertial coordinate system to the orbit coordinate system is calculated as:
S202, according to Euler's theorem, rotate the orbit coordinate system three times in the 3-1-2 sequence to obtain the spacecraft body coordinate system; the three rotation angles are the yaw, roll, and pitch angles, giving the position vector in the spacecraft body frame, specifically expressed as:
r_b = R_Y(θ) · R_X(φ) · R_Z(ψ) · r_o = L_bo · r_o
where θ represents the pitch angle, φ represents the roll angle, ψ represents the yaw angle, R_X, R_Y and R_Z represent the rotation matrices of counterclockwise rotation transformation of coordinate points about the X, Y, and Z axes respectively, and L_bi = L_bo · L_oi is the rotation matrix from the inertial coordinate system to the satellite body coordinate system.
The corresponding yaw, roll and pitch angles at 10.5km are respectively:
The rotation matrix calculated is:
S203, according to the installation matrix of the camera on the body coordinate system, obtain the conversion matrix from the geocentric inertial coordinate system to the camera coordinate system:
L_ci = (R_3 · R_2 · R_1) · L_bi
where R_3, R_2 and R_1 represent the rotation transformation matrices corresponding to rotations about the Z, Y, and X axes of the coordinate system respectively, their product constituting the camera installation matrix.
Through the matrix L_ci, the stars in the inertial coordinate system are projected into the camera coordinate system.
S3, project the stars in the navigation star catalog onto the image plane according to the conversion matrix from the geocentric inertial coordinate system to the camera coordinate system and the intrinsic parameters of the camera; the specific content is as follows:
In the camera coordinate system, a star with coordinates (X_c, Y_c, Z_c) has coordinates in the imaging coordinate system O_R-xy expressed as:
x = f · X_c / Z_c,  y = f · Y_c / Z_c
where x represents the abscissa in the imaging coordinate system, y represents the ordinate in the imaging coordinate system, and f represents the focal length of the camera.
Based on the geometric relationship, the coordinates (x, y) are transformed to the pixel coordinate system O_P-uv, specifically expressed as:
u = x / d_x + u_0,  v = y / d_y + v_0
where d_x and d_y represent the pixel size and u_0 and v_0 represent the pixel plane center coordinates.
After the coordinate conversion is completed, the distance between each projected star point (u_s, v_s) and each cluster centroid P(x_i, y_i) in the image is calculated:
d = √((u_s − x_i)² + (v_s − y_i)²)
If the calculated value is smaller than the set threshold, point P is a star; if it is larger than the set threshold, point P is a candidate target.
S4, the star rejection algorithm calculates the distance between each projected star position and each cluster centroid in the image to identify the stars and the candidate targets, and the candidate targets are verified with the brightness prior information. The specific content is: when the distance between a projected star and a cluster is smaller than the preset threshold (the threshold being the maximum pixel distance at which a cluster is still judged to be a star point), the cluster is judged to be a star; when the distance between every projected star and the cluster is larger than the preset threshold, the cluster is judged to be a candidate target.
The threshold is set by iterative loop reduction: the initial threshold is set to 9 pixels; whenever no candidate target can be detected, the threshold is reduced by 1 pixel and star rejection is performed again, down to a minimum threshold of 5 pixels; if no candidate target can be found at that point, it is judged that no candidate target exists in the image. Star rejection accounts for the influence of actual platform stability on candidate target identification; the adaptable platform stability is 0.3°/s, thereby reducing false alarms and missed detections.
After a candidate target is obtained, it is verified through the brightness priori information: if the point cluster matching the brightness priori information and the candidate target are the same point cluster, the algorithm has identified correctly. The pixel coordinates of the candidate-target cluster centroid are then converted into a line-of-sight angle, and the line-of-sight angle information of the candidate target is output.
S5, after the candidate target is stably identified, for each subsequently input image, ROI selection is performed on the current frame using the position information of the candidate target identified in the previous frame; the candidate target is captured within the selected region according to the established judgment logic, its pixel coordinates are converted into a line-of-sight angle, and the line-of-sight angle information of the candidate target is output. The specific steps are as follows:
S501, if the target is accurately captured in 50 consecutive images, the target is considered stably tracked and the ROI tracking mode is entered. The ROI size is determined from the field of view of the actual camera and the body jitter error of the satellite platform in the task: the maximum angle Δω by which a candidate target can move between image frames is determined, and the region size follows from the field of view of the selected camera and the size S (in pixels) of the acquired image, with the specific formula:
size = S · Δω / α
Where Δω represents the maximum angle, α represents the field of view of the selected camera, and S represents the size of the resulting image.
In this embodiment, Δω = 8°, α = 65°, and S = 2048 pixels, giving a region size of 252 × 252 pixels. Since the previous frame's candidate target must lie at the center of the region, the size is rounded to the odd value 253 × 253 pixels for convenience of calculation.
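The ROI sizing just described can be sketched as a small helper; it assumes the size formula S·Δω/α, rounded to the nearest integer and then bumped to the next odd value so the previous frame's centroid can sit exactly at the centre:

```python
def roi_size(delta_omega_deg, fov_deg, image_size_px):
    """ROI side length in pixels: the image size scaled by the ratio of the
    maximum inter-frame motion angle to the camera field of view, forced
    odd so the region has an exact centre pixel."""
    size = round(image_size_px * delta_omega_deg / fov_deg)
    return size if size % 2 == 1 else size + 1
```

With the embodiment's values this reproduces the 253-pixel frame: `roi_size(8, 65, 2048) == 253`.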
S502, after the ROI size is determined, the centroid coordinates of the previous frame's candidate target are taken as the center of the region. Image preprocessing is performed within the region to obtain all bright spots, and distance calculation and brightness-value matching are performed between the identified spots and the previous frame's candidate target; the spot with the smallest combined distance and brightness difference is taken as the candidate target.
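The matching step can be sketched as below; combining pixel distance and brightness difference with equal weights is an assumption, since the text does not specify how the two criteria are weighted:

```python
import math

def match_candidate(spots, prev, w_dist=1.0, w_bright=1.0):
    """Pick, from the bright spots found inside the ROI, the one closest to
    the previous frame's candidate target in both position and brightness.
    Each spot and `prev` are (u, v, brightness) tuples; the weights are
    illustrative, not specified by the patent."""
    def score(spot):
        d = math.dist(spot[:2], prev[:2])
        return w_dist * d + w_bright * abs(spot[2] - prev[2])
    return min(spots, key=score)
```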
The coordinates of the candidate target in the pixel plane are set as (u, v). The candidate-target line of sight is expressed in vector form in the camera coordinate system as p = [u - u0, v - v0, f]^T; after unitization the vector is written as i_p = [i_x, i_y, i_z]^T, and the azimuth and pitch angles in the camera frame are calculated with the specific formulas:
α = arctan(i_x / i_z), β = arcsin(i_y);
where α represents the azimuth angle and β represents the pitch angle. The ROI algorithm is well suited to slowly moving candidate targets, and offers high speed and a high recognition rate.
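A sketch of the pixel-to-angle conversion, assuming the standard pinhole line-of-sight vector [u − u0, v − v0, f] and the usual azimuth/pitch decomposition (the patent's exact formulas may differ):

```python
import math

def los_angles(u, v, u0, v0, f_pix):
    """Unitise the line-of-sight vector through pixel (u, v), then return
    (azimuth, pitch) in degrees in the camera frame."""
    x, y = u - u0, v - v0
    n = math.sqrt(x * x + y * y + f_pix * f_pix)
    ix, iy, iz = x / n, y / n, f_pix / n   # unit LOS vector i_p
    azimuth = math.degrees(math.atan2(ix, iz))
    pitch = math.degrees(math.asin(iy))
    return azimuth, pitch
```

A target at the pixel-plane centre gives (0°, 0°); a target offset horizontally by exactly one focal length in pixels gives a 45° azimuth.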
The comparison between the target angle-measurement data of 10 image frames at a range of about 10.5 km and the corresponding real data, together with the errors, is shown in Table 2.
Table 2 algorithm test result examples
The angular errors are shown in detail in Fig. 2. As can be seen from Fig. 2, the algorithm identifies the target well during the approach from 10.5 km to 500 m, with an angular error below 0.1°. The algorithm is also fast: about 3 Hz during star elimination, rising to about 20 Hz in the ROI tracking stage.
The specific properties of the present invention are shown in Table 3.
Table 3 algorithm test performance analysis
Tests show that the method provided by the invention is suitable for detecting targets in high orbit. According to the position and velocity information of the tracking star and the different definitions of the coordinate systems, the parameters of the coordinate transformations can be changed to achieve mutual conversion among the coordinate systems. According to the camera's intrinsic and extrinsic parameters and the platform jitter error, the size of the ROI can be changed accordingly, so as to track and identify the target.
The embodiment of the invention also provides a small target detection system based on star map recognition and brightness priori information, which comprises a remote image preprocessing module, a conversion matrix construction module, a star projection module, a star and candidate target recognition module, a sight angle output module of a candidate target and a computer program capable of running on a processor. It should be noted that each module in the above system corresponds to a specific step of the method provided by the embodiment of the present invention, and has a corresponding functional module and beneficial effect of executing the method. Technical details not described in detail in this embodiment may be found in the methods provided in the embodiments of the present invention.
The embodiment of the invention also provides an electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor; when the processor executes the program, the steps of the above method are implemented. The device has the corresponding functional modules and beneficial effects of executing the method. Technical details not described in detail in this embodiment may be found in the methods provided in the embodiments of the present invention.
The embodiment of the invention also provides a computer-readable storage medium storing a computer program which, when executed by a processor, performs the above method. Technical details not described in detail in this embodiment may be found in the methods provided in the embodiments of the present invention.
While embodiments of the present invention have been shown and described, it will be understood that the above embodiments are illustrative and not to be construed as limiting the invention; variations, modifications, substitutions, and alterations may be made to the above embodiments by those of ordinary skill in the art within the scope of the invention. Any other corresponding changes and modifications made in accordance with the technical idea of the present invention shall be included in the scope of the claims of the present invention.

Claims (9)

1. The small target detection method based on star map identification and brightness priori information is characterized by comprising the following steps of:
s1, preprocessing a remote image to obtain centroid and brightness information of a point cluster in the image;
S2, constructing a conversion matrix from a geocentric inertial coordinate system to a camera coordinate system according to the initial position and the speed of the tracking star, the conversion attitude angle of the orbit coordinate system and the body coordinate system and the installation matrix of the camera;
S3, projecting stars in the navigation star table to an image plane according to a conversion matrix from the geocentric inertial coordinate system to the camera coordinate system and internal reference data of the camera;
S4, calculating the distance between the position of the projected star in the image and the mass center of the point cluster in the image by utilizing a star eliminating algorithm, identifying the star and the candidate target in the image, and verifying the candidate target through brightness priori information;
S5, after stably identifying candidate targets, performing ROI frame selection on the current frame image through the position information of the candidate targets identified by the previous frame for the subsequently input image, capturing the candidate targets in the selected frame according to established judgment logic, converting pixel coordinates of the candidate targets into view angles and outputting view angle information of the candidate targets.
2. The small target detection method based on star map recognition and brightness prior information according to claim 1, wherein in step S1, the specific steps of preprocessing the image are as follows:
s101, calculating a global threshold based on iterative thresholding:
S1011, selecting an initial estimated value, namely the average gray level of the image, for the global threshold value;
s1012, dividing the image by using the initial estimated value to generate two groups of pixels: the first group G1 consists of pixels with gray values larger than the initial estimated value, and the second group G2 consists of pixels with gray values smaller than or equal to the initial estimated value;
s1013, calculating average gray values of the G1 and G2 pixels, and generating a new threshold value according to the following specific formula:
T=(m1+m2)/2;
wherein m1 and m2 are average gray values of the G1 and G2 pixels, respectively;
S1014, repeating steps S1012-S1013 until the difference between successive thresholds is smaller than a predefined parameter; the last threshold is the final global threshold;
s102, after the final global threshold T is obtained, performing a binarization operation on the image to obtain a binarized image, the specific formula being:
g(x, y) = 1 if f(x, y) > T, and g(x, y) = 0 otherwise;
wherein g(x, y) represents the binarized image and f(x, y) represents the gray value of pixel (x, y) in the image;
S103, dividing four connected domains: taking each pixel point with the value of 1 as a center point, and if the pixel points with the value of 1 exist in the upper direction, the lower direction, the left direction and the right direction, considering the pixel points and the center point as the same area, namely the same object;
S104, extracting centroid points by the centroid method: after the connected domains are divided, an object is set to occupy n pixel points, the specific formula being:
x_c = (Σ_{i=1..n} m_i · x_i) / (Σ_{i=1..n} m_i), y_c = (Σ_{i=1..n} m_i · y_i) / (Σ_{i=1..n} m_i);
where m_i represents the gray value of the i-th pixel of the object, (x_i, y_i) represents the coordinates of the i-th pixel of the object in the image, and (x_c, y_c) represents the centroid coordinates of the object.
3. The small target detection method based on star map recognition and brightness prior information according to claim 1, wherein in step S2, the specific steps of constructing a transformation matrix from a geocentric inertial coordinate system to a camera coordinate system are as follows:
s201, converting the position and velocity vectors of the tracking star (the satellite carrying the camera) at the current moment from the geocentric inertial coordinate system to the orbit coordinate system, obtaining the position vector r_o in the orbit coordinate system, specifically expressed as:
r_o = L_oi · r_i;
wherein r_i is the position vector in the inertial coordinate system and L_oi represents the transformation matrix from the inertial coordinate system to the orbit coordinate system;
S202, according to Euler's theorem, rotating the orbit coordinate system three times in the 3-1-2 sequence to obtain the spacecraft body coordinate system, the three rotation angles being the yaw angle, roll angle and pitch angle respectively, and obtaining the position vector in the spacecraft body frame, specifically expressed as:
r_b = R_Y(θ) · R_X(φ) · R_Z(ψ) · r_o = L_bi · r_i;
wherein θ represents the pitch angle, φ represents the roll angle, ψ represents the yaw angle, R_X, R_Y and R_Z represent the rotation matrices for counterclockwise rotation of a coordinate point about the X, Y and Z axes respectively, and L_bi represents the rotation matrix from the inertial coordinate system to the satellite body coordinate system;
s203, according to the installation matrix C_s of the camera in the body coordinate system, obtaining the transformation matrix L_ci from the geocentric inertial coordinate system to the camera coordinate system:
L_ci = C_s · R_2(θ) · R_1(φ) · R_3(ψ) · L_oi;
wherein R_3, R_2 and R_1 represent the rotation transformation matrices corresponding to rotations about the Z, Y and X axes of the coordinate system respectively;
through the matrix L_ci, the fixed stars in the inertial coordinate system are projected into the camera coordinate system.
4. The small object detection method based on star map recognition and brightness priori information according to claim 1, wherein in step S3, the specific content of projecting the star in the navigation star table to the image plane is:
In the camera coordinate system, the coordinates of a star are (X_c, Y_c, Z_c)^T; its coordinates in the imaging coordinate system O_R-xy are expressed as:
x = f · X_c / Z_c, y = f · Y_c / Z_c;
wherein x represents the abscissa in the imaging coordinate system, y represents the ordinate in the imaging coordinate system, and f represents the focal length of the camera;
based on the geometric relationship, the coordinates (x, y) are transformed to the pixel coordinate system O_P-uv, specifically expressed as:
u = x + u0, v = y + v0;
where u0 and v0 represent the pixel-plane center coordinates.
5. The small target detection method based on star map recognition and brightness priori information according to claim 1, wherein in step S4, the specific contents are: calculating the distance between the position of the projected star in the image and the centroid of the point cluster in the image, and judging the cluster as the star when the distance value between the projected star and the point cluster in the image is smaller than a preset threshold value; when the distance value between the projection star and the point cluster in the image is larger than a preset threshold value, judging that the cluster is a candidate target;
Setting the threshold in an iteratively loosening loop: starting from the initially set threshold, whenever no candidate target can be detected, the threshold is reduced by 1 pixel and star elimination is performed again until the minimum threshold is reached; if no candidate target can be found at that point, it is judged that no candidate target exists in the image;
After the candidate target is obtained, the candidate target is verified through the brightness priori information, and if the point cluster conforming to the brightness priori information and the candidate target are the same point cluster, the identification is judged to be correct.
6. The small target detection method based on star map recognition and luminance priori information according to claim 1, wherein in step S5, the specific step of outputting the line-of-sight angle information of the candidate target is:
s501, determining the ROI size according to the field of view of the actual camera and the body jitter error of the satellite platform in the task, and determining the maximum angle of candidate-target motion between image frames, the specific formula being:
size = S · Δω / α;
where Δω represents the maximum angle, α represents the field of view of the selected camera, and S represents the size of the acquired image;
S502, after the size of a frame is selected, taking the centroid coordinates of a previous frame candidate target as the center of the frame, performing image preprocessing in the frame to obtain all bright spots, and performing distance calculation and brightness value matching on the identified spots and the previous frame candidate target, wherein the spot with the smallest distance and brightness difference value between the identified spots and the previous frame candidate target point is the candidate target;
setting the coordinates of the candidate target in the pixel plane as (u, v), expressing the candidate-target line of sight in vector form in the camera coordinate system as p = [u - u0, v - v0, f]^T, writing the unitized vector as i_p = [i_x, i_y, i_z]^T, and calculating the azimuth and pitch angles in the camera frame, the specific formulas being:
α = arctan(i_x / i_z), β = arcsin(i_y);
where α represents the azimuth angle and β represents the pitch angle.
7. Small target detection system based on star map recognition and brightness priori information, characterized by comprising:
The remote image preprocessing module is used for preprocessing the remote image to obtain centroid and brightness information of a point cluster in the image;
The conversion matrix construction module is used for constructing a conversion matrix from the geocentric inertial coordinate system to the camera coordinate system according to the initial position and the speed of the tracking star, the conversion attitude angle of the orbit coordinate system and the body coordinate system and the installation matrix of the camera;
the fixed star projection module is used for projecting fixed stars in the navigation star table to an image plane according to a conversion matrix from a geocentric inertial coordinate system to a camera coordinate system and internal reference data of a camera;
The star and candidate target recognition module is used for calculating the distance between the position of the projected star in the image and the mass center of the point cluster in the image by utilizing a star eliminating algorithm, recognizing the star and the candidate target in the image, and verifying the candidate target through brightness priori information;
And the sight angle output module is used for performing ROI frame selection on the current frame image through the position information of the candidate target identified by the previous frame for the image input subsequently after stably identifying the candidate target, capturing the candidate target in the selected frame according to the established judgment logic, converting the pixel coordinate of the candidate target into the sight angle and outputting the sight angle information of the candidate target.
8. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the steps of the method of any one of claims 1 to 6 when the computer program is executed by the processor.
9. A computer-readable storage medium, having stored thereon a computer program, characterized in that the computer program, when executed by a processor, performs the method of any of claims 1 to 6.
CN202310696825.0A 2023-06-13 2023-06-13 Small target detection method and system based on star map identification and brightness priori information Active CN116755169B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310696825.0A CN116755169B (en) 2023-06-13 2023-06-13 Small target detection method and system based on star map identification and brightness priori information

Publications (2)

Publication Number Publication Date
CN116755169A CN116755169A (en) 2023-09-15
CN116755169B true CN116755169B (en) 2024-04-30

Family

ID=87960264

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310696825.0A Active CN116755169B (en) 2023-06-13 2023-06-13 Small target detection method and system based on star map identification and brightness priori information

Country Status (1)

Country Link
CN (1) CN116755169B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106296726A (en) * 2016-07-22 2017-01-04 中国人民解放军空军预警学院 A kind of extraterrestrial target detecting and tracking method in space-based optical series image
CN106651904A (en) * 2016-12-02 2017-05-10 北京空间机电研究所 Wide-size-range multi-space target capture tracking method
CN114255263A (en) * 2021-12-24 2022-03-29 中国科学院光电技术研究所 Self-adaptive spatial dim-and-weak star recognition method based on background recognition
CN114719844A (en) * 2022-04-06 2022-07-08 北京信息科技大学 Space projection-based all-celestial star map identification method, device and medium
CN115638796A (en) * 2022-09-19 2023-01-24 北京控制工程研究所 Rapid star map identification method based on refraction star/non-refraction star information fusion and prediction
CN115908554A (en) * 2022-12-16 2023-04-04 江苏科技大学 High-precision sub-pixel simulation star map and sub-pixel extraction method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Fast star map identification method based on coordinate conversion with the assistance of bright stars; Liu Xianyi et al.; Acta Aeronautica et Astronautica Sinica; Vol. 41, No. 08; 625360-1-623560-8 *
Research on space target detection algorithm based on star map identification; Cheng Jun et al.; Optical Technique; Vol. 36, No. 03; 439-444 *

Also Published As

Publication number Publication date
CN116755169A (en) 2023-09-15

Similar Documents

Publication Publication Date Title
Kolomenkin et al. Geometric voting algorithm for star trackers
CN111862126B (en) Non-cooperative target relative pose estimation method combining deep learning and geometric algorithm
CN103697855B (en) A kind of hull horizontal attitude measuring method detected based on sea horizon
CN111862201B (en) Deep learning-based spatial non-cooperative target relative pose estimation method
CN111598952B (en) Multi-scale cooperative target design and online detection identification method and system
CN111829532B (en) Aircraft repositioning system and method
Pham et al. An autonomous star recognition algorithm with optimized database
CN110349212B (en) Optimization method and device for instant positioning and map construction, medium and electronic equipment
Clouse et al. Small field-of-view star identification using Bayesian decision theory
Piazza et al. Monocular relative pose estimation pipeline for uncooperative resident space objects
CN114004977A (en) Aerial photography data target positioning method and system based on deep learning
CN113743385A (en) Unmanned ship water surface target detection method and device and unmanned ship
CN110617802A (en) Satellite-borne moving target detection and speed estimation method
Van Pham et al. Vision‐based absolute navigation for descent and landing
Zhu et al. Arbitrary-oriented ship detection based on retinanet for remote sensing images
Yuan et al. High Speed Safe Autonomous Landing Marker Tracking of Fixed Wing Drone Based on Deep Learning
Kaufmann et al. Shadow-based matching for precise and robust absolute self-localization during lunar landings
Koizumi et al. Development of attitude sensor using deep learning
Ozaki et al. DNN-based self-attitude estimation by learning landscape information
US20220017239A1 (en) Methods and systems for orbit estimation of a satellite
CN116755169B (en) Small target detection method and system based on star map identification and brightness priori information
Cassinis et al. Leveraging neural network uncertainty in adaptive unscented Kalman Filter for spacecraft pose estimation
Yan et al. Horizontal velocity estimation via downward looking descent images for lunar landing
CN115760984A (en) Non-cooperative target pose measurement method based on monocular vision by cubic star
Piazza et al. Deep learning-based monocular relative pose estimation of uncooperative spacecraft

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant