CN113838072A - High-dynamic star atlas image segmentation method - Google Patents

High-dynamic star atlas image segmentation method

Info

Publication number
CN113838072A
CN113838072A (application CN202111283064.3A)
Authority
CN
China
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111283064.3A
Other languages
Chinese (zh)
Other versions
CN113838072B (en)
Inventor
魏新国
江洁
李苏祺
Current Assignee
Jiangsu Jicui Intelligent Photoelectric System Research Institute Co ltd
Original Assignee
Jiangsu Jicui Intelligent Photoelectric System Research Institute Co ltd
Priority date
Filing date
Publication date
Application filed by Jiangsu Jicui Intelligent Photoelectric System Research Institute Co ltd
Priority to CN202111283064.3A
Publication of CN113838072A
Application granted
Publication of CN113838072B
Active legal status
Anticipated expiration

Classifications

    • G06T 7/11 — Region-based segmentation
    • G06F 18/23213 — Clustering using statistics or function optimisation with a fixed number of clusters, e.g. K-means clustering
    • G06T 7/136 — Segmentation; edge detection involving thresholding
    • G06T 2207/10032 — Satellite or aerial image; remote sensing
    • G06T 2207/20101 — Interactive definition of point of interest, landmark or seed
    • G06T 2207/20221 — Image fusion; image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Probability & Statistics with Applications (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a high-dynamic star map image segmentation method, belonging to the technical field of star sensors in astronomical navigation. The method comprises the following steps: linear feature segmentation is performed on fuzzy star points by exploiting their linear features; the neighborhood pixels of the segmentation result are clustered into a star point part and a background part; and the separated star point regions are then fused by two-step region growing based on cluster analysis, completing the extraction of the fuzzy star points. The method extracts fuzzy star points completely and with high precision, greatly improves the success rate of star map identification and star tracking, enables accurate matching identification and attitude calculation from the star map, and is simple and fast to compute, which is of practical significance for improving star sensor performance.

Description

High-dynamic star atlas image segmentation method
Technical Field
The invention belongs to the technical field of star sensors in the field of astronomical navigation, and particularly relates to a high-dynamic star map image segmentation method.
Background
The attitude information of a spacecraft plays an important role in astronomical navigation, and the star sensor is an important technical means of measuring it. A star sensor works on a star map captured by a CCD or CMOS camera at a given moment: it extracts the position (centroid) of each star point in the map, finds the corresponding match for each observed star in a navigation star catalog through a star map recognition algorithm, and finally calculates the three-axis attitude of the sensor from the direction vectors of the matched star pairs, thereby determining the spatial attitude of the spacecraft. With the development of space technology, dynamic performance has gradually become the bottleneck of the star sensor. Under dynamic conditions a star point is dragged across the image plane into a long linear streak: its energy is dispersed along the streak direction and no longer follows an ideal Gaussian distribution, and the energy received by each pixel decreases, so the star appears dimmer than when static. The signal-to-noise ratio (SNR) therefore drops, dark stars break apart under the influence of noise, and star point segmentation becomes more difficult. Traditional star point extraction methods may split one star point into several regions, causing incomplete extraction, false detections, or missed detections; the resulting false star points directly degrade the success rate of star map identification and star tracking and the accuracy of attitude calculation.
Disclosure of Invention
In order to solve the technical problems in the prior art, the invention aims to provide a high-dynamic star atlas image segmentation method.
To achieve the above purpose and technical effect, the invention adopts the following technical scheme:
a high dynamic star atlas image segmentation method comprises the following steps: linear feature segmentation is performed on the fuzzy star points using their linear features; the neighborhood pixels of the segmentation result are clustered into a star point part and a background part; and the separated star point regions are then fused by two-step region growing based on cluster analysis to complete the extraction of the fuzzy star points.
Further, the linear feature segmentation of the fuzzy star points is realized by adopting the following steps:
11) calculating according to a gradient calculation template to obtain the gradient of each pixel:
let i(x, y) be the image gray level at (x, y); the gradient value g_x(x, y) in the x direction and the gradient value g_y(x, y) in the y direction are obtained from equations (1) and (2), respectively:

g_x(x, y) = [i(x+1, y) + i(x+1, y+1) − i(x, y) − i(x, y+1)] / 2    (1)

g_y(x, y) = [i(x, y+1) + i(x+1, y+1) − i(x, y) − i(x+1, y)] / 2    (2)
the gray contour angle is then calculated according to equation (3):

θ(x, y) = arctan[ g_x(x, y) / (−g_y(x, y)) ]    (3)
12) the gradient magnitude G(x, y) of each pixel is calculated according to equation (4), and the pixels are then sorted by gradient magnitude:

G(x, y) = sqrt( g_x(x, y)² + g_y(x, y)² )    (4)
13) region growing over pixels whose gradient directions fall within a tolerance
Growth starts from the pixel with the largest gradient magnitude as the seed point and searches its neighborhood for pixels whose gradient exceeds the gradient threshold; if the gray contour angle of such a pixel differs from the region angle θ_region by less than a given tolerance, the pixel is added to the fuzzy star point region and the neighborhood of the newly added pixel is examined in turn, until no further pixel is added to the region; each time a pixel is added, the region angle θ_region is updated according to equation (5):

θ_region = arctan[ Σ_{p∈P} sin θ(x_p, y_p) / Σ_{p∈P} cos θ(x_p, y_p) ]    (5)

where P is the set of pixels already added to the region and (x_p, y_p) are the coordinates of pixel p.
Further, the fusion of the separated star point regions is realized by adopting the following steps:
firstly, the boundary pixels of the segmentation result obtained by linear feature segmentation of the fuzzy star points are extracted. The pixels nearest to the boundary include both a star point region and a background region, and the gray level of the star point region is higher than that of the background; the gray-level distribution of the pixels nearest the boundary therefore falls into two classes, a star point part of higher gray level and a background part of lower gray level. A cluster division method is used to divide the gray levels of the pixels nearest the boundary, and the parts classified as star points are fused into the star point region. Then, according to the division result, region growing with point-by-point adaptive iterative correction is performed on the still unextracted star point regions, until finally the separated star point regions are completely fused.
Further, the boundary pixels are subjected to region growing based on cluster analysis by adopting the following steps:
cluster analysis is performed on the pixels of the set; since the neighborhood pixel set is known to divide into a background area and a star point area, the maximum and minimum values of the set are taken as the two initial cluster centers. The distance from the gray level of each remaining neighborhood pixel to the two cluster centers is calculated, each pixel is assigned to the cluster whose center is closest, and the centers of the two clusters are updated. If the center of either cluster has changed from the previous iteration, the distances from the neighborhood pixels to the two centers are recalculated, until neither cluster center changes any more and the division of the neighborhood pixels is complete. The cluster centers are then used as the decision criterion for the subsequent point-by-point region growing.
Further, the fusion and separation of the star point regions is realized by adopting the following steps:
if the pixels nearest the boundary are classified as star points, they are fused into the star point region; since these pixels cannot contain all of the unextracted star point regions, the neighborhood pixels classified into the star point region, i.e. the as yet undivided star point pixels, must be judged and grown point by point, while the result of the previous clustering step is corrected;
the unmarked neighborhood pixels of each pixel marked as star point region are judged in turn: each is assigned to the star point region or the background region according to the distance of its gray level from the cluster centers, and marked accordingly; the cluster center of the star point or background region is then updated, the star point and background regions of the previous division are re-clustered, and the previous result is corrected, until every neighborhood pixel carries a background or star point mark, at which point the star point region segmentation is complete.
Compared with the prior art, the invention has the beneficial effects that:
the invention discloses a high dynamic star atlas image segmentation method, which comprises the following steps: the linear features of the fuzzy star points are utilized to carry out linear feature segmentation on the fuzzy star points, neighborhood pixels of results of the linear feature segmentation are clustered and divided into a star point part and a background part, and then the separated star point regions are fused through two-step region growth based on cluster analysis to complete the extraction of the fuzzy star points. The high-dynamic star map image segmentation method provided by the invention can completely extract the fuzzy star points, has high extraction precision, greatly improves the success rate of star map identification and star tracking, realizes accurate matching identification and attitude calculation of the star map, has simple and quick calculation process, and has important practical significance for improving the performance of the star sensor.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a block diagram of gradient computation according to the present invention;
FIG. 3 is a gray scale contour field of the fuzzy stars of the present invention;
FIG. 4 is a flow chart of the present invention;
FIGS. 5 to 9 are schematic views of the extraction process in example 1 of the present invention, respectively;
fig. 10 is a diagram showing the star point extraction result in embodiment 1 of the present invention.
Detailed Description
The present invention is described in detail below with reference to the accompanying drawings so that the advantages and features of the present invention can be more easily understood by those skilled in the art, and thus the scope of the present invention can be clearly and clearly defined.
The following presents a simplified summary of one or more aspects in order to provide a basic understanding of such aspects. This summary is not an extensive overview of all contemplated aspects, and is intended to neither identify key or critical elements of all aspects nor delineate the scope of any or all aspects. Its sole purpose is to present some concepts of one or more aspects in a simplified form as a prelude to the more detailed description that is presented later.
Example 1
As shown in fig. 1-10, a high dynamic star atlas image segmentation method includes the following steps:
Under dynamic conditions, bright stars, dark stars, and fuzzy star points with broken gray levels in an actual star map all share a common characteristic: a linear feature. The invention first performs linear feature segmentation on the fuzzy star points using this line feature; because some pixels inside a star point are not extracted by this step, the separated star point regions must then be fused by two-step region growing based on cluster analysis to complete the extraction. The high-dynamic star map image segmentation method specifically comprises the following steps:
1) module 1: performing linear feature segmentation on fuzzy star points
11) The gradient of each pixel is calculated first, according to the 2 × 2 gradient calculation template shown in fig. 2:
let i(x, y) be the image gray level at (x, y); the gradient value g_x(x, y) in the x direction and the gradient value g_y(x, y) in the y direction are obtained from equations (1) and (2), respectively:

g_x(x, y) = [i(x+1, y) + i(x+1, y+1) − i(x, y) − i(x, y+1)] / 2    (1)

g_y(x, y) = [i(x, y+1) + i(x+1, y+1) − i(x, y) − i(x+1, y)] / 2    (2)
the gray contour angle is then calculated according to equation (3):

θ(x, y) = arctan[ g_x(x, y) / (−g_y(x, y)) ]    (3)
12) the gradient magnitude G(x, y) of each pixel is calculated according to equation (4), and the pixels are then sorted by gradient magnitude:

G(x, y) = sqrt( g_x(x, y)² + g_y(x, y)² )    (4)
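As a sketch of steps 11) and 12), the per-pixel gradient and gray contour angle can be computed with a 2 × 2 difference template. The exact template of fig. 2 is not reproduced in the text, so the layout below is an assumption (an LSD-style scheme), and all function and variable names are illustrative:

```python
import numpy as np

def gradient_field(img):
    """Per-pixel gradient and gray contour (level-line) angle from a 2x2
    difference template (assumed layout; the patent's template is fig. 2)."""
    i = np.asarray(img, dtype=np.float64)
    # each output pixel combines (x, y) with its right / lower neighbours
    gx = (i[:-1, 1:] + i[1:, 1:] - i[:-1, :-1] - i[1:, :-1]) / 2.0
    gy = (i[1:, :-1] + i[1:, 1:] - i[:-1, :-1] - i[:-1, 1:]) / 2.0
    theta = np.arctan2(gx, -gy)   # gray contour angle, eq. (3)
    G = np.hypot(gx, gy)          # gradient magnitude, eq. (4)
    # pixel coordinates sorted by descending gradient magnitude (step 12)
    order = np.dstack(np.unravel_index(np.argsort(G, axis=None)[::-1], G.shape))[0]
    return G, theta, order
```

The sorted coordinate list `order` supplies the seed candidates for the region growing of step 13), largest gradient first.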
13) region growing over pixels whose gradient directions fall within a tolerance
Because a fuzzy star point has a linear characteristic, the gray contour direction of some pixels inside the star point is close to the blur direction of the star point. Growth starts from the pixel with the largest gradient magnitude as the seed point and searches its neighborhood for pixels whose gradient exceeds the gradient threshold; if the gray contour angle of such a pixel differs from the region angle θ_region by less than a tolerance τ, with τ = 22.5°, the pixel is added to the fuzzy star point region and the neighborhood of the newly added pixel is examined in turn, until no further pixel is added to the region. Each time a pixel is added, the region angle θ_region is updated according to equation (5):

θ_region = arctan[ Σ_{p∈P} sin θ(x_p, y_p) / Σ_{p∈P} cos θ(x_p, y_p) ]    (5)

where P is the set of pixels already added to the region and (x_p, y_p) are the coordinates of pixel p.
The result of segmentation by linear features is shown in fig. 3; the gray frame marks the fuzzy star point region segmented by linear features. Because the gray contour angle of some pixels inside the star point differs from θ_region by more than the tolerance τ, these pixels are not extracted. To complete the star point segmentation, the invention fuses the separated star point regions in step 2) below by two-step region growing based on cluster analysis.
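Step 13) can be sketched as the following region-growing loop. The 8-connectivity, the gradient threshold as a parameter, and the incremental form of the θ_region update are assumptions not fixed by the text; names are illustrative:

```python
import numpy as np

def grow_line_region(g, theta, g_thresh, tau=np.deg2rad(22.5)):
    """Grow a fuzzy-star region from the pixel with the largest gradient
    magnitude, accepting neighbours whose gradient exceeds g_thresh and whose
    gray contour angle is within tolerance tau of the region angle theta_region
    (updated per eq. (5) as pixels join)."""
    h, w = g.shape
    seed = np.unravel_index(np.argmax(g), g.shape)
    region = {seed}
    # running sums of sin/cos of member angles, for the eq. (5) update
    s_sin, s_cos = np.sin(theta[seed]), np.cos(theta[seed])
    frontier = [seed]
    while frontier:
        y, x = frontier.pop()
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                ny, nx = y + dy, x + dx
                if (0 <= ny < h and 0 <= nx < w and (ny, nx) not in region
                        and g[ny, nx] > g_thresh):
                    theta_region = np.arctan2(s_sin, s_cos)  # eq. (5)
                    # wrapped angular difference in [0, pi]
                    d = abs(np.angle(np.exp(1j * (theta[ny, nx] - theta_region))))
                    if d <= tau:
                        region.add((ny, nx))
                        s_sin += np.sin(theta[ny, nx])
                        s_cos += np.cos(theta[ny, nx])
                        frontier.append((ny, nx))
    return region
```

The returned set of pixel coordinates corresponds to the gray-framed region of fig. 3; pixels whose contour angle falls outside τ are deliberately left out, which is why the fusion of step 2) is needed afterwards.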
2) And (3) module 2: fusing separate star point regions
First, the boundary pixels of the segmentation result of step 1) are extracted, shown in light gray in fig. 5a. Because the pixels nearest to the boundary include both a star point region and a background region, and the gray level of the star point region is higher than that of the background, the gray-level distribution of the pixels nearest the boundary can be divided into two classes: a star point part of higher gray level and a background part of lower gray level. The invention therefore uses a cluster division method to divide the gray levels of the pixels nearest the boundary, and the parts classified as star points are fused into the star point region. Since the pixels nearest the boundary cannot contain all of the unextracted star point regions, region growing with point-by-point adaptive iterative correction is then performed on the unextracted star point regions according to the division result, until finally the separated star point regions are completely fused;
clustering analysis is to divide a data set into a plurality of different sub-classes according to a certain specific standard, so that the similarity of samples in the same class is as large as possible, and the difference of samples in different classes is also as large as possible. The current clustering algorithm can be divided into a hierarchical clustering algorithm, a divided clustering algorithm and a clustering algorithm based on density and grids. According to the characteristics that the data volume of the star point target is small and the gray scales of the star point area and the background area are different, the invention adopts a mean value clustering algorithm in the division clustering.
Let X = {x_1, x_2, ..., x_n} denote the set of pixels nearest to the boundary of a star point region with linear features; the set contains both star point and background pixels. As shown in fig. 6, the dark gray in fig. 6a is the region of the linear feature segmentation and the light gray part comprises its neighborhood pixels. In the three-dimensional diagrams of figs. 6-9, the x and y axes are pixel coordinates and the z axis is pixel gray level; the × marks show the gray values and positions of different neighborhood pixels, and ○ marks a cluster center. The clustering algorithm of the invention minimizes the following index:
J = Σ_{j=1}^{2} Σ_{x∈Q_j} [ I(x) − μ_j ]²

where I(x) is the gray value of pixel x, Q_j is the set of pixels of class j, and μ_j is the mean gray level of the pixels in class j.

The minimization is realized by iterating on the sets Q_j. Let Q_j^(i) denote the pixel set of class j after the i-th iteration; the mean gray level of class j is then

μ_j^(i) = (1 / |Q_j^(i)|) Σ_{x∈Q_j^(i)} I(x).

After k iterations, Q_j^(k+1) = Q_j^(k), at which point the algorithm has converged.
In the step 2), the method comprises the following steps:
21) region growing based on cluster analysis for boundary pixels
Cluster analysis is performed on the pixels of the set X. Since the neighborhood pixel set is known to divide into a background area and a star point area, the maximum and minimum values of the set are taken as the two initial cluster centers, as shown in fig. 6b. The distance from the gray level of each remaining neighborhood pixel to the two cluster centers is calculated, each pixel is assigned to the cluster whose center is closest, and the centers of the two clusters are updated. If the center of either cluster has changed from the previous iteration, the distances from the neighborhood pixels to the two centers are recalculated, until neither cluster center changes any more and the division of the neighborhood pixels is complete, as shown in figs. 6a-6b, where the dark gray and light gray areas are respectively the star point area and background area produced by the cluster analysis. The cluster centers are then used as the decision criterion for the next, point-by-point region growing;
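The cluster division of step 21) can be sketched as a two-class k-means on the gray levels, initialized with the maximum and minimum of the set as the two centers. This is a minimal sketch under those stated assumptions, not the patent's exact implementation:

```python
import numpy as np

def two_means(grays, max_iter=100):
    """Divide boundary-neighbourhood gray levels into a star (bright) cluster
    and a background (dark) cluster; initial centres are the max and min of
    the set, as in step 21)."""
    grays = np.asarray(grays, dtype=np.float64)
    centers = np.array([grays.max(), grays.min()])  # [star, background]
    for _ in range(max_iter):
        # assign each gray level to the nearest of the two centres
        labels = np.abs(grays[:, None] - centers[None, :]).argmin(axis=1)
        # recompute each centre as the mean of its cluster (keep it if empty)
        new = np.array([grays[labels == k].mean() if np.any(labels == k)
                        else centers[k] for k in (0, 1)])
        if np.allclose(new, centers):   # centres stable -> converged
            break
        centers = new
    return labels, centers
```

The converged `centers` pair then serves as the decision criterion for the point-by-point growth of step 22): a new pixel joins whichever class's center its gray level is closer to.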
22) fusing separate star point regions
If the pixels nearest the boundary are classified as star points, they are fused into the star point region. Since these pixels cannot contain all of the unextracted star point regions (see the light gray region pixels in the middle of fig. 7), the neighborhood pixels classified into the star point region, i.e. the as yet undivided star point pixels, must be judged and grown point by point, while the result of the previous clustering step is corrected.
The unmarked neighborhood pixels of each pixel marked as star point region are judged in turn: each is assigned to the star point region or the background region according to the distance of its gray level from the cluster centers, and marked accordingly. The cluster center of the star point or background region is then updated, the star point and background regions of the previous division are re-clustered, and the previous result is corrected. As shown in fig. 8, the newly added middle light gray area of fig. 7 is classified into the star point region; after the cluster centers are updated, the pixels of the star point area on the right of fig. 8 are re-marked as background, which performs the correction. The process continues until all neighborhood pixels carry a background or star point mark, at which point the star point region segmentation is complete, as shown in fig. 9.
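The point-by-point growth of step 22) can be sketched as follows. The 4-connectivity, the running-mean update of the star center, and the omission of the full re-clustering pass (the sketch only updates the star center as pixels join, rather than re-labelling earlier pixels) are simplifying assumptions; names are illustrative:

```python
import numpy as np

STAR, BG, UNSEEN = 1, 0, -1

def grow_star_region(img, seed_mask, centers):
    """Grow the star region pixel by pixel from an initial mask, labelling
    each unmarked neighbour by its gray-level distance to the two cluster
    centres (star, background) and folding new star pixels into the star
    centre as an adaptive correction."""
    c_star, c_bg = float(centers[0]), float(centers[1])
    labels = np.full(img.shape, UNSEEN)
    labels[seed_mask] = STAR
    frontier = list(zip(*np.nonzero(seed_mask)))
    while frontier:
        y, x = frontier.pop()
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            if (0 <= ny < img.shape[0] and 0 <= nx < img.shape[1]
                    and labels[ny, nx] == UNSEEN):
                g = float(img[ny, nx])
                if abs(g - c_star) < abs(g - c_bg):
                    labels[ny, nx] = STAR
                    # adaptive correction: blend the new pixel into the centre
                    c_star = (c_star + g) / 2.0
                    frontier.append((ny, nx))
                else:
                    labels[ny, nx] = BG
    return labels == STAR
```

Growth stops exactly when every neighborhood pixel carries a star or background mark, matching the termination condition of the text.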
The star point extraction process of the invention is illustrated in fig. 10. First, linear feature segmentation is performed on the fuzzy star points using their linear features, as shown in fig. 10a; then the neighborhood pixels of the linear-feature segmentation result are clustered into a star point part and a background part, and the star point part is fused with the linear-feature segmentation result, as shown in fig. 10b. The class of each neighborhood pixel of the star point region is analyzed, the star point region continues to grow, and the division result of the previous step is corrected, as shown in fig. 10c, until all neighborhood pixels are assigned to either the star point region or the background region and the division of the star point region is complete, as shown in fig. 10d.
The parts or structures of the invention which are not described in detail can be the same as those in the prior art or the existing products, and are not described in detail herein.
The above description is only an embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes performed by the present specification and drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (5)

1. A high dynamic star atlas image segmentation method, characterized by comprising the following steps: linear feature segmentation is performed on the fuzzy star points using their linear features; the neighborhood pixels of the segmentation result are clustered into a star point part and a background part; and the separated star point regions are then fused by two-step region growing based on cluster analysis to complete the extraction of the fuzzy star points.
2. The high-dynamic star atlas image segmentation method according to claim 1 is characterized in that the linear feature segmentation of the fuzzy star points is realized by adopting the following steps:
11) calculating according to a gradient calculation template to obtain the gradient of each pixel:
let i(x, y) be the image gray level at (x, y); the gradient value g_x(x, y) in the x direction and the gradient value g_y(x, y) in the y direction are obtained from equations (1) and (2), respectively:

g_x(x, y) = [i(x+1, y) + i(x+1, y+1) − i(x, y) − i(x, y+1)] / 2    (1)

g_y(x, y) = [i(x, y+1) + i(x+1, y+1) − i(x, y) − i(x+1, y)] / 2    (2)
the gray contour angle is then calculated according to equation (3):

θ(x, y) = arctan[ g_x(x, y) / (−g_y(x, y)) ]    (3)
12) the gradient magnitude G(x, y) of each pixel is calculated according to equation (4), and the pixels are then sorted by gradient magnitude:

G(x, y) = sqrt( g_x(x, y)² + g_y(x, y)² )    (4)
13) region growing over pixels whose gradient directions fall within a tolerance
Growth starts from the pixel with the largest gradient magnitude as the seed point and searches its neighborhood for pixels whose gradient exceeds the gradient threshold; if the gray contour angle of such a pixel differs from the region angle θ_region by less than a given tolerance, the pixel is added to the fuzzy star point region and the neighborhood of the newly added pixel is examined in turn, until no further pixel is added to the region; each time a pixel is added, the region angle θ_region is updated according to equation (5):

θ_region = arctan[ Σ_{p∈P} sin θ(x_p, y_p) / Σ_{p∈P} cos θ(x_p, y_p) ]    (5)

where P is the set of pixels already added to the region and (x_p, y_p) are the coordinates of pixel p.
3. The high-dynamic star atlas image segmentation method according to claim 1 is characterized in that the fusion of separated star point regions is realized by adopting the following steps:
firstly, the boundary pixels of the segmentation result obtained by linear feature segmentation of the fuzzy star points are extracted. The pixels nearest to the boundary include both a star point region and a background region, and the gray level of the star point region is higher than that of the background; the gray-level distribution of the pixels nearest the boundary therefore falls into two classes, a star point part of higher gray level and a background part of lower gray level. A cluster division method is used to divide the gray levels of the pixels nearest the boundary, and the parts classified as star points are fused into the star point region. Then, according to the division result, region growing with point-by-point adaptive iterative correction is performed on the still unextracted star point regions, until finally the separated star point regions are completely fused.
4. The high dynamic star atlas image segmentation method according to claim 3 is characterized in that the boundary pixels are subjected to cluster analysis based region growing by the following steps:
cluster analysis is performed on the pixels of the set; since the neighborhood pixel set is known to divide into a background area and a star point area, the maximum and minimum values of the set are taken as the two initial cluster centers. The distance from the gray level of each remaining neighborhood pixel to the two cluster centers is calculated, each pixel is assigned to the cluster whose center is closest, and the centers of the two clusters are updated. If the center of either cluster has changed from the previous iteration, the distances from the neighborhood pixels to the two centers are recalculated, until neither cluster center changes any more and the division of the neighborhood pixels is complete. The cluster centers are then used as the decision criterion for the subsequent point-by-point region growing.
5. The high-dynamic star atlas image segmentation method according to claim 3 is characterized in that the fusion of separated star point regions is realized by adopting the following steps:
if the pixels nearest the boundary are classified as star points, they are fused into the star point region; since these pixels cannot contain all of the unextracted star point regions, the neighborhood pixels classified into the star point region, i.e. the as yet undivided star point pixels, must be judged and grown point by point, while the result of the previous clustering step is corrected;
the unmarked neighborhood pixels of each pixel marked as star point region are judged in turn: each is assigned to the star point region or the background region according to the distance of its gray level from the cluster centers, and marked accordingly; the cluster center of the star point or background region is then updated, the star point and background regions of the previous division are re-clustered, and the previous result is corrected, until every neighborhood pixel carries a background or star point mark, at which point the star point region segmentation is complete.
CN202111283064.3A 2021-11-01 2021-11-01 High-dynamic star map image segmentation method Active CN113838072B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111283064.3A CN113838072B (en) 2021-11-01 2021-11-01 High-dynamic star map image segmentation method


Publications (2)

Publication Number Publication Date
CN113838072A true CN113838072A (en) 2021-12-24
CN113838072B CN113838072B (en) 2023-08-04

Family

ID=78966712

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111283064.3A Active CN113838072B (en) 2021-11-01 2021-11-01 High-dynamic star map image segmentation method

Country Status (1)

Country Link
CN (1) CN113838072B (en)


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101853333A (en) * 2010-05-26 2010-10-06 中国科学院遥感应用研究所 Method for picking marks in medical robot navigation positioning images
CN104899892A (en) * 2015-06-30 2015-09-09 西安电子科技大学 Method for quickly extracting star points from star images
CN105046713A (en) * 2015-08-12 2015-11-11 北京航空航天大学 Morphology-based robot star point segmentation method and FPGA realization method
CN106056614A (en) * 2016-06-03 2016-10-26 武汉大学 Building segmentation and contour line extraction method of ground laser point cloud data
CN109029425A (en) * 2018-06-25 2018-12-18 中国科学院长春光学精密机械与物理研究所 A kind of fuzzy star chart restored method filtered using region
CN112465712A (en) * 2020-11-09 2021-03-09 华中光电技术研究所(中国船舶重工集团公司第七一七研究所) Motion blur star map restoration method and system


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115239966A (en) * 2022-05-30 2022-10-25 中国地质大学(武汉) Latent substrate ancient rift valley recognition and extraction method and system based on image processing
CN115239966B (en) * 2022-05-30 2024-04-09 中国地质大学(武汉) Hidden substrate ancient rift valley identification extraction method and system based on image processing

Also Published As

Publication number Publication date
CN113838072B (en) 2023-08-04

Similar Documents

Publication Publication Date Title
CN106530347B (en) Stable high-performance circle feature detection method
CN109977997B (en) Image target detection and segmentation method based on convolutional neural network rapid robustness
CN108446634B (en) Aircraft continuous tracking method based on combination of video analysis and positioning information
CN112489096B (en) Remote sensing image change detection method based on graph matching model under low registration precision
CN112084871B (en) High-resolution remote sensing target boundary extraction method based on weak supervised learning
CN111340855A (en) Road moving target detection method based on track prediction
CN113223068A (en) Multi-modal image registration method and system based on depth global features
CN108428220A (en) Satellite sequence remote sensing image sea island reef region automatic geometric correction method
CN112164117A (en) V-SLAM pose estimation method based on Kinect camera
CN116228754B (en) Surface defect detection method based on deep learning and global difference information
CN112364881B (en) Advanced sampling consistency image matching method
CN112365497A (en) High-speed target detection method and system based on Trident Net and Cascade-RCNN structures
CN111553945B (en) Vehicle positioning method
CN112862881A (en) Road map construction and fusion method based on crowd-sourced multi-vehicle camera data
CN111259808A (en) Detection and identification method of traffic identification based on improved SSD algorithm
CN115909079A (en) Crack detection method combining depth feature and self-attention model and related equipment
CN116597244A (en) Small sample target detection method based on meta-learning method
CN113838072B (en) High-dynamic star map image segmentation method
CN115861352A (en) Monocular vision, IMU and laser radar data fusion and edge extraction method
CN112381730B (en) Remote sensing image data amplification method
CN111325184A (en) Intelligent interpretation and change information detection method for remote sensing image
CN110246165A Method and system for improving matching speed between visible-light images and SAR images
CN112070811A (en) Image registration method based on continuous domain ant colony algorithm improvement
CN116665097A (en) Self-adaptive target tracking method combining context awareness
CN117079272A (en) Bullet bottom socket mark feature identification method combining manual features and learning features

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant