CN117710245B - Astronomical telescope error rapid detection method - Google Patents

Astronomical telescope error rapid detection method

Info

Publication number
CN117710245B
Authority
CN
China
Prior art keywords
image
point
gray
measured
detected
Prior art date
Legal status
Active
Application number
CN202410160981.XA
Other languages
Chinese (zh)
Other versions
CN117710245A (en)
Inventor
孙世林
黄欣
苏嘉
Current Assignee
Cas Nanjing Nairc Photoelectric Instrument Co ltd
Original Assignee
Cas Nanjing Nairc Photoelectric Instrument Co ltd
Priority date
Filing date
Publication date
Application filed by Cas Nanjing Nairc Photoelectric Instrument Co ltd filed Critical Cas Nanjing Nairc Photoelectric Instrument Co ltd
Priority to CN202410160981.XA priority Critical patent/CN117710245B/en
Publication of CN117710245A publication Critical patent/CN117710245A/en
Application granted granted Critical
Publication of CN117710245B publication Critical patent/CN117710245B/en


Classifications

    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00 Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10 Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Abstract

The invention relates to the technical field of image denoising, and in particular to a rapid error detection method for an astronomical telescope. The method comprises the following steps: acquiring star-point gray-scale images and determining a highlight persistence factor for each pixel from its gray values; performing region growing, and screening the resulting growth regions by the gray values and highlight persistence factors of their pixels to obtain regions to be measured; determining a diffraction expressiveness from the gray-value changes along all preset directions, and from it a filter adjustment coefficient; and determining a target filter window from the filter adjustment coefficient, filtering the image to be measured to obtain a target image, and performing error detection on the target images of all star-point gray-scale images to obtain a detection result. By jointly analyzing consecutive multi-frame star-point gray-scale images taken at the same focal length, the invention effectively removes white noise, and by performing feature analysis based on the diffraction effect it filters and denoises adaptively, strengthening the denoising effect and improving the accuracy and reliability of the subsequent detection results.

Description

Astronomical telescope error rapid detection method
Technical Field
The invention relates to the technical field of image denoising, and in particular to a rapid error detection method for an astronomical telescope.
Background
Errors of an astronomical telescope are usually detected by star-point analysis (PSF) or by focus analysis. However, owing to various sources of interference, such as electromagnetic interference, readout noise and background-light noise, the acquired star-point images commonly contain a certain degree of noise, and this noise affects the analysis used for error adjustment.
In the related art, an image acquired by the astronomical telescope is denoised with a uniform filter window. Because the focal length of the telescope varies, the size and gray level of the star points in the image change, while noise points are randomly distributed. The uniform-window filtering of the related art therefore denoises poorly and degrades the accuracy and reliability of subsequent error detection.
Disclosure of Invention
To solve the technical problem that the filtering and denoising of the related art is ineffective and therefore degrades the accuracy and reliability of subsequent error detection, the invention provides a rapid error detection method for an astronomical telescope, adopting the following technical scheme:
The invention provides a rapid error detection method for an astronomical telescope, comprising the following steps:
acquiring at least two consecutive frames of star-point gray-scale images at the same focal length, taking any frame as the image to be measured and any pixel position in it as the point to be measured, and determining a highlight persistence factor of the point to be measured from the values and distribution of the gray values of the pixels at the same position in all frames;
taking each pixel in the image to be measured as a starting point, performing region growing according to the highlight persistence factors of the pixels to obtain growth regions, and screening the growth regions by the gray values and highlight persistence factors of their pixels to obtain regions to be measured that contain star points; determining a central pixel of each region to be measured, taking pixels along different preset directions from the central pixel as direction pixels, and determining the diffraction expressiveness of the region to be measured where the central pixel is located from the gray-value changes of the direction pixels along all preset directions;
determining the diffraction stability of each region to be measured in the image to be measured from the diffraction expressiveness of all regions to be measured in all frames, and determining the filter adjustment coefficient of the image to be measured from the diffraction stability of all its regions to be measured, their pixel counts, and the highlight persistence factors of their central pixels;
and adjusting a preset initial filter window of the image to be measured by the filter adjustment coefficient to obtain a target filter window, filtering the image to be measured with the target filter window to obtain a target image, and performing error detection on the target images of all star-point gray-scale images to obtain a detection result.
Further, determining the highlight persistence factor of the point to be measured from the values and distribution of the gray values of the pixels at the same position in all frames comprises:
taking the pixels at the same position as the point to be measured in all frames as co-located points;
computing the variance of the gray values of all co-located points as the co-located variance;
computing the difference between the maximum gray value over all pixels and the mean gray value of the co-located points, and normalizing it to obtain a gray weight;
and computing the product of the co-located variance and the gray weight, applying a negative-correlation mapping and normalizing, to obtain the highlight persistence factor of the point to be measured.
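The steps above can be sketched per pixel across the frame stack. This is a minimal illustration, not the patent's exact formula: using `exp(-x)` as the negative-correlation mapping and normalization is one of the options the text leaves open, so it is an assumption here.

```python
import numpy as np

def highlight_persistence_factor(frames):
    """Per-pixel highlight persistence factor from a stack of
    co-registered gray-scale frames (each H x W).

    exp(-x) as the negative-correlation mapping is an assumption;
    the text also allows e.g. a reciprocal with normalization."""
    stack = np.stack(frames).astype(np.float64)   # (N, H, W)
    colocated_var = stack.var(axis=0)             # co-located variance across frames
    gray_max = stack.max()                        # maximum gray value over all pixels
    gray_mean = stack.mean(axis=0)                # per-position mean over frames
    # normalized deviation from the global maximum: large for background pixels
    gray_weight = (gray_max - gray_mean) / max(gray_max, 1e-12)
    # large variance * large weight -> likely noise or background, so invert:
    # persistently bright star-point pixels score close to 1
    return np.exp(-colocated_var * gray_weight)
```

A pixel that stays at the maximum gray level in every frame gets a factor of exactly 1, while a flickering (noise) pixel is driven toward 0.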
Further, performing region growing from each pixel of the image to be measured according to the highlight persistence factors of the pixels to obtain growth regions comprises:
taking any pixel of the image to be measured as a growing point, and taking the absolute difference between the highlight persistence factor of the growing point and that of each other pixel in a preset neighborhood centered on it as their highlight difference;
performing region growing from the growing point with a region-growing algorithm, the growth condition being that the highlight difference is smaller than a preset difference threshold;
and traversing the whole image to be measured, dividing it into at least two different growth regions.
Further, screening the growth regions by the gray values and highlight persistence factors of their pixels to obtain the regions to be measured containing star points comprises:
computing the mean gray value of all pixels in each growth region as its gray index;
computing the mean highlight persistence factor of all pixels in each growth region as its highlight persistence index;
and taking the growth regions whose gray index exceeds a preset gray threshold and whose highlight persistence index exceeds a preset persistence threshold as regions to be measured.
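The screening step reduces to two mean comparisons per region. The threshold values 200 and 0.85 are the example settings given later in the description:

```python
import numpy as np

def screen_regions(gray, factor, labels, gray_threshold=200, persist_threshold=0.85):
    """Keep the grown regions whose mean gray value and mean highlight
    persistence factor both exceed their thresholds (200 and 0.85 are
    the example values from the text)."""
    kept = []
    for lab in np.unique(labels):
        mask = labels == lab
        if gray[mask].mean() > gray_threshold and factor[mask].mean() > persist_threshold:
            kept.append(int(lab))
    return kept
```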
Further, determining the diffraction expressiveness of the region to be measured where the central pixel is located from the gray-value changes of the direction pixels along all preset directions comprises:
computing the absolute gray-value difference between each direction pixel and its central pixel along the same preset direction to obtain the central gray difference of that direction pixel;
constructing a two-dimensional coordinate system with the distance from each direction pixel to the central pixel as abscissa and the central gray difference as ordinate, and determining the coordinate points of all direction pixels of the same preset direction in that coordinate system;
fitting a straight line to those coordinate points to obtain the fitted line of that preset direction, and taking its slope as the fitted slope;
and computing the inverse-proportion normalized value of the variance of all fitted slopes to obtain the diffraction expressiveness of the region to be measured.
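The line-fitting step can be sketched as follows. The eight neighbor directions and `exp(-variance)` as the inverse-proportion normalization are assumptions; the text only fixes that the mapping decreases with the variance of the fitted slopes.

```python
import numpy as np

def diffraction_expressiveness(region, center):
    """Fit |gray(center) - gray(p)| against distance along the eight
    neighbor directions and score the region by the consistency of the
    fitted slopes.  exp(-var) as the inverse-proportion normalization
    is an assumption."""
    cy, cx = center
    h, w = region.shape
    slopes = []
    for dy, dx in [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]:
        dists, diffs = [], []
        step = 1
        while True:
            y, x = cy + dy * step, cx + dx * step
            if not (0 <= y < h and 0 <= x < w):
                break
            dists.append(np.hypot(dy * step, dx * step))          # abscissa: distance
            diffs.append(abs(float(region[cy, cx]) - float(region[y, x])))  # ordinate
            step += 1
        if len(dists) >= 2:
            slopes.append(np.polyfit(dists, diffs, 1)[0])         # fitted slope
    return float(np.exp(-np.var(slopes)))
```

A radially symmetric halo gives identical slopes in every direction (score near 1); a noise spike breaks one direction's fit and drives the score down.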
Further, determining the diffraction stability of each region to be measured in the image to be measured from the diffraction expressiveness of all regions to be measured in all frames comprises:
taking any region to be measured in the image to be measured as the reference region and its central pixel as the reference pixel, and determining, in each frame, the region to be measured whose center is closest to the reference pixel as the target region of that frame;
computing the variance of the diffraction expressiveness over the reference region and the target regions of all frames as the diffraction variance;
and computing a normalized product of the negative of the diffraction variance and the diffraction expressiveness of the reference region to obtain the diffraction stability of the reference region; taking each region to be measured as the reference region in turn yields the diffraction stability of every region to be measured in the image to be measured.
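Given the matched expressiveness scores, the stability score itself is a one-liner. Treating `exp(-variance)` as the normalized negative of the diffraction variance is an assumption; the text leaves the exact normalization open.

```python
import numpy as np

def diffraction_stability(ref_expr, target_exprs):
    """Diffraction stability of a reference region: high when the
    matched expressiveness scores vary little across frames and the
    reference score itself is high.  exp(-var) as the normalization
    of the negative diffraction variance is an assumption."""
    scores = [ref_expr] + list(target_exprs)
    diffraction_var = np.var(scores)          # variance across the frames
    return float(np.exp(-diffraction_var) * ref_expr)
```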
Further, determining the filter adjustment coefficient of the image to be measured from the diffraction stability of all its regions to be measured, their pixel counts, and the highlight persistence factors of their central pixels comprises:
computing the inverse-proportion normalized value of the mean absolute difference between the diffraction stability of the reference region and that of every other region to be measured, to obtain the stability coefficient of the reference region;
computing the normalized pixel count of the reference region to obtain its quantity coefficient;
determining the filter factor of the reference region from its stability coefficient, quantity coefficient and the highlight persistence factor of its central pixel, where all three are positively correlated with the filter factor and the filter factor is a normalized value;
and computing the mean of the filter factors of all regions to be measured as the filter adjustment coefficient of the image to be measured.
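Combining the three per-region quantities can be sketched as below. The text only fixes the monotonicity (all three positively correlated with the filter factor), so the plain product, `exp(-mean|Δ|)` for the inverse-proportion normalization, and max-normalized pixel counts are all assumptions.

```python
import numpy as np

def filter_adjustment_coefficient(stabilities, pixel_counts, center_factors):
    """Mean per-region filter factor = stability coefficient *
    quantity coefficient * center highlight persistence factor.
    The concrete normalizations are assumptions."""
    stabilities = np.asarray(stabilities, dtype=float)
    counts = np.asarray(pixel_counts, dtype=float)
    factors = np.asarray(center_factors, dtype=float)
    filter_factors = []
    for i in range(len(stabilities)):
        others = np.delete(stabilities, i)
        # inverse-proportion normalization of the mean absolute difference
        stab_coeff = np.exp(-np.abs(stabilities[i] - others).mean()) if others.size else 1.0
        count_coeff = counts[i] / counts.max()      # normalized region size
        filter_factors.append(stab_coeff * count_coeff * factors[i])
    return float(np.mean(filter_factors))
```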
Further, adjusting the preset initial filter window of the image to be measured by the filter adjustment coefficient to obtain the target filter window comprises:
computing the product of the filter adjustment coefficient and the side length of the preset initial filter window of the image to be measured, and rounding it up to an odd value as the side length of the target filter window, thereby obtaining the target filter window.
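The rounding rule ("taking the odd value upwards") can be made concrete as:

```python
import math

def target_window_side(initial_side, adjust_coeff):
    """Scale the initial filter window side by the adjustment
    coefficient and round up to the next odd integer, per the text."""
    side = math.ceil(initial_side * adjust_coeff)
    if side % 2 == 0:
        side += 1
    return max(side, 1)   # guard against a degenerate zero-size window
```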
Further, filtering the image to be measured with the target filter window to obtain the target image comprises:
performing Gaussian filtering on the image to be measured with the target filter window, using a Gaussian filtering algorithm, to obtain the target image.
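A minimal separable Gaussian filter with an odd window side is sketched below; in practice a library routine such as OpenCV's `GaussianBlur` would be used. The `sigma = side / 6` heuristic and edge padding are assumptions, not something the patent specifies.

```python
import numpy as np

def gaussian_blur(image, side):
    """Minimal separable Gaussian blur with an odd window side.
    sigma = side / 6 is a common heuristic (assumption)."""
    assert side % 2 == 1
    sigma = max(side / 6.0, 1e-6)
    r = side // 2
    x = np.arange(-r, r + 1, dtype=float)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()                               # kernel sums to 1
    padded = np.pad(image.astype(float), r, mode="edge")
    # horizontal then vertical pass of the separable kernel
    tmp = np.apply_along_axis(lambda row: np.convolve(row, k, mode="valid"), 1, padded)
    return np.apply_along_axis(lambda col: np.convolve(col, k, mode="valid"), 0, tmp)
```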
Further, performing error detection on the target images of all star-point gray-scale images to obtain the detection result comprises:
computing the mean gray gradient over all pixels of all target images to obtain a gradient mean, normalizing the gradient mean to obtain an image sharpness, and taking the image sharpness as the detection result.
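The detection metric can be sketched as below. The gradient operator (`np.gradient`) and the `1 - exp(-x)` normalization are illustrative assumptions; the text only asks for a normalized mean gradient as the sharpness score.

```python
import numpy as np

def image_sharpness(images):
    """Mean gray-gradient magnitude over all target images, squashed
    into [0, 1).  Operator and normalization are assumptions."""
    mags = []
    for img in images:
        gy, gx = np.gradient(img.astype(float))   # per-axis gray gradients
        mags.append(np.hypot(gx, gy).mean())      # mean gradient magnitude
    return 1.0 - np.exp(-np.mean(mags))
```

A flat image scores 0; a strongly focused (high-gradient) image scores close to 1.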
The invention has the following beneficial effects:
according to the method, the characteristics of white noise of a star point gray image and the characteristics of the star point are combined, continuous at least two frames of star point gray images with the same focal length are obtained, the value and distribution of gray values of pixels at the same position are combined, the highlight persistence factor of a to-be-measured point is determined, so that the random distribution of the white noise is analyzed, the highlight persistence condition of the star point is determined, the subsequent area growing processing according to the highlight persistence factor is facilitated, a growing area is obtained, the white noise, a background area and the area where the star point is located can be effectively distinguished, then the growing area is subjected to characteristic analysis, and the to-be-measured area containing the star point is obtained. According to the invention, continuous multi-frame star point gray images with the same focal length are combined for analysis, white noise in the star point gray images is effectively removed, and feature analysis is performed according to diffraction effects, so that self-adaptive filtering denoising is performed, the filtering denoising effect is enhanced, and the accuracy and reliability of subsequent detection results are improved.
Drawings
To illustrate the embodiments of the invention or the technical solutions of the prior art more clearly, the drawings used in their description are briefly introduced below. The drawings described below show only some embodiments of the invention; a person skilled in the art can derive other drawings from them without inventive effort.
FIG. 1 is a flow chart of a rapid error detection method for an astronomical telescope according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a fitted straight line according to an embodiment of the present invention.
Detailed Description
To further explain the technical means adopted by the invention and their effects, the following describes a specific implementation, structure, features and effects of the rapid error detection method for an astronomical telescope with reference to the accompanying drawings and preferred embodiments. In the following description, different instances of "one embodiment" or "another embodiment" do not necessarily refer to the same embodiment. Furthermore, the particular features, structures or characteristics of one or more embodiments may be combined in any suitable manner.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
The following describes the specific scheme of the rapid error detection method for an astronomical telescope provided by the invention with reference to the accompanying drawings.
Referring to fig. 1, a flowchart of a method for quickly detecting an error of an astronomical telescope according to an embodiment of the present invention is shown, where the method includes:
s101: and acquiring continuous at least two frames of star point gray level images with the same focal length, taking any frame of star point gray level image as an image to be measured, taking a pixel point at any position in the image to be measured as a point to be measured, and determining the highlight continuous factor of the point to be measured according to the numerical value and the distribution of gray level values of the pixel points at the same position as the point to be measured in all frames of star point gray level images.
It can be understood that an astronomical telescope must be focused before use. When the focal length is not optimal, the star points in the image are blurred and surrounded by a pronounced halo, so the gray-scale characteristics of the image must be analyzed to determine the optimal focal length. Moreover, image acquisition with an astronomical telescope inevitably produces white noise, whose point-like appearance interferes with the analysis of star points; the image therefore needs to be denoised to facilitate the subsequent error detection of the telescope.
In the embodiment of the invention, several frames of original images can be shot consecutively at the same focal length and then converted to star-point gray-scale images by image graying. The graying may, for example, be mean graying, or another higher-precision graying method chosen according to the actual detection requirements; this is not limited.
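Mean graying, the example the text gives, simply averages the color channels:

```python
import numpy as np

def mean_gray(rgb):
    """Mean graying: average the color channels of an (H, W, 3)
    image, one of the graying options the text mentions."""
    return rgb.astype(float).mean(axis=2)
```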
White noise is randomly distributed and cannot be effectively analyzed from a single image; the invention therefore combines multiple frames of star-point gray-scale images at the same focal length to identify it accurately.
Further, in some embodiments of the invention, determining the highlight persistence factor of the point to be measured from the values and distribution of the gray values of the pixels at the same position in all frames comprises: taking the pixels at the same position as the point to be measured in all frames as co-located points; computing the variance of the gray values of all co-located points as the co-located variance; computing the difference between the maximum gray value over all pixels and the mean gray value of the co-located points, and normalizing it to obtain a gray weight; and computing the product of the co-located variance and the gray weight, applying a negative-correlation mapping and normalizing, to obtain the highlight persistence factor of the point to be measured.
The highlight persistence factor is the probability that a pixel is a continuously bright star point. Because noise points are randomly distributed, the gray values at the same position differ across frames; the embodiment therefore determines the highlight persistence factor of each pixel from the disorder of its gray values and their deviation from the brightest level.
In the embodiment of the invention, the variance of the gray values of the pixels at the same position as the point to be measured across all frames is taken as the co-located variance, which characterizes how disordered the gray values at that position are: the larger the co-located variance at the position of the point to be measured, the more chaotic its gray values across frames, and the more likely the point to be measured is a white-noise data point.
The larger the gray value at the position of the point to be measured, the more likely it is a bright pixel of a star-point region; conversely, the larger the gap between its gray value and the maximum over all pixels, the more likely it belongs to the background. The difference between the maximum gray value over all pixels and the mean gray value of the co-located points is therefore computed and normalized to obtain the gray weight: the larger the gray weight, the larger that gap, i.e. the more likely the point is a background pixel.
The invention therefore combines the co-located variance and the gray weight to determine the highlight persistence factor: their product is computed, and a negative-correlation mapping followed by normalization yields the highlight persistence factor of the point to be measured. The negative-correlation mapping is any mapping in which the dependent variable decreases as the independent variable increases and vice versa; it may be a subtraction, a division, or similar, as determined by the practical application. For example, the reciprocal of the product of the co-located variance and the gray weight may be computed and normalized, or the negative of the product may be computed and normalized, to obtain the highlight persistence factor. Analyzing the pixel at every position across the multi-frame star-point gray-scale images yields the highlight persistence factor of each pixel and facilitates the subsequent noise analysis.
S102: take each pixel in the image to be measured as a starting point and perform region growing according to the highlight persistence factors of the pixels to obtain growth regions; screen the growth regions by the gray values and highlight persistence factors of their pixels to obtain regions to be measured containing star points; determine a central pixel of each region to be measured, take pixels along different preset directions from the central pixel as direction pixels, and determine the diffraction expressiveness of the region to be measured where the central pixel is located from the gray-value changes of the direction pixels along all preset directions.
In the embodiment of the invention, the highlight persistence factor of each pixel serves as the criterion for region growing, so region-growing analysis can be performed on the star-point gray-scale image of each frame.
Further, in some embodiments of the invention, performing region growing from each pixel of the image to be measured according to the highlight persistence factors to obtain growth regions comprises: taking any pixel of the image to be measured as a growing point, and taking the absolute difference between the highlight persistence factor of the growing point and that of each other pixel in a preset neighborhood centered on it as their highlight difference; performing region growing from the growing point with a region-growing algorithm, the growth condition being that the highlight difference is smaller than a preset difference threshold; and traversing the whole image to be measured, dividing it into at least two different growth regions.
The preset difference threshold is the threshold on the highlight difference and may, for example, be 0.1: when the highlight difference is smaller than 0.1, the growth condition is satisfied; when it is greater than or equal to 0.1, the condition is not satisfied and the pixels are not assigned to the same region. It should be noted that region growing is well-known prior art; it groups pixels with similar highlight persistence factors that lie close together into one region. Because star points are highly aggregated, this aggregation is what the analysis exploits, and the image to be measured is divided into at least two different growth regions.
It can be understood that the growth regions still include regions corresponding to the starry-sky background and regions where the gray-scale characteristics of the stars are weak and prone to interference; the background and the weak stars are therefore screened out.
Further, in some embodiments of the invention, screening the growth regions by the gray values and highlight persistence factors of their pixels to obtain the regions to be measured containing star points comprises: computing the mean gray value of all pixels in each growth region as its gray index; computing the mean highlight persistence factor of all pixels in each growth region as its highlight persistence index; and taking the growth regions whose gray index exceeds a preset gray threshold and whose highlight persistence index exceeds a preset persistence threshold as regions to be measured.
The preset gray threshold is the threshold on the gray index and the preset persistence threshold is the threshold on the highlight persistence index. In the embodiment of the invention they may be set to 200 and 0.85 respectively: when a growth region's mean gray value exceeds 200 and its highlight persistence index exceeds 0.85, the region is taken as a region to be measured. The regions to be measured are those corresponding to star points and to strong white noise.
The screening thus preliminarily removes weak white noise, the starry-sky background and regions with weak star-point gray-scale characteristics, retaining the regions corresponding to strongly characterized stars and to strong white noise, which facilitates the subsequent analysis of the focusing effect of the regions to be measured.
It can be understood that in a star-point gray-scale image a genuine star point shows a diffraction effect: its brightness spreads outward from the center, producing a halo. A white-noise region is imaging noise and shows no diffraction effect, so the invention analyzes the regions according to the diffraction effect.
In the embodiment of the invention, the pixel closest to the mean coordinate of all pixels in the region to be measured may be determined as the central pixel, or the center of gravity of the region may be used; this is not limited.
In the embodiment of the invention, the directions from the central pixel to its eight neighboring pixels may be used as the preset directions, or the preset directions may be chosen according to the actual detection requirements; this is not limited.
Further, in some embodiments of the invention, determining the diffraction expressiveness of the region to be measured where the central pixel is located from the gray-value changes of the direction pixels along all preset directions comprises: computing the absolute gray-value difference between each direction pixel and its central pixel along the same preset direction to obtain the central gray difference of that direction pixel; constructing a two-dimensional coordinate system with the distance from each direction pixel to the central pixel as abscissa and the central gray difference as ordinate, and determining the coordinate points of all direction pixels of the same preset direction; fitting a straight line to those points to obtain the fitted line of that preset direction, and taking its slope as the fitted slope; and computing the inverse-proportion normalized value of the variance of all fitted slopes to obtain the diffraction expressiveness of the region to be measured. As shown in fig. 2, a schematic diagram of a fitted straight line provided by an embodiment of the invention, the central gray difference increases as the distance increases.
In the embodiment of the invention, the diffraction effect characterization has a gray level decreasing effect in all preset directions, the embodiment calculates the absolute value of the gray level difference between each direction pixel point and the corresponding center pixel point in the same preset direction to obtain the center gray level difference of each direction pixel point, wherein the center gray level difference can characterize the gray level relationship between the direction pixel point and the center pixel point in the preset direction, in the normal diffraction effect, the larger the distance from the center is, the larger the corresponding gray level attenuation is, namely the gray level gradually becomes smaller, and the gray level suddenly drops when white noise is in the non-center pixel point, and based on the characteristics, a two-dimensional rectangular coordinate system is constructed, and the slope of a fitting straight line in each preset direction is used as an analysis index.
In the embodiment of the invention, because the white noise distribution is random, the gray distribution of the pixel points around the white noise is complex, so that the variance value of the corresponding fitting slope is larger, and the gray distribution of the pixel points around the star point is consistent, namely the variance value of the fitting slope is smaller because the area where the star point is located has a certain diffraction effect. The invention carries out inverse proportion normalization processing on the variances of all fitting slopes to obtain diffraction expressive degree, wherein the inverse proportion normalization processing is inverse correlation mapping and normalization processing, optionally, diffraction expressive degree is obtained by calculating the negative number of the variances of all fitting slopes and carrying out maximum and minimum normalization processing, and the method is not limited to the above.
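The per-direction slope fitting and slope-variance scoring described above can be sketched as follows. This is a minimal illustration, not the patent's exact formulas: the eight neighbor directions, the number of steps per direction, and the `1/(1+variance)` inverse-proportion normalization are assumed choices.

```python
import numpy as np

def diffraction_expressiveness(img, cy, cx, steps=3):
    """Fit |gray difference| vs. distance along 8 directions from the
    center pixel and score how uniform the fitted slopes are."""
    directions = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
                  (0, 1), (1, -1), (1, 0), (1, 1)]
    center = float(img[cy, cx])
    slopes = []
    for dy, dx in directions:
        dists, diffs = [], []
        for k in range(1, steps + 1):
            y, x = cy + k * dy, cx + k * dx
            if 0 <= y < img.shape[0] and 0 <= x < img.shape[1]:
                dists.append(k * np.hypot(dy, dx))          # distance from center
                diffs.append(abs(float(img[y, x]) - center))  # central gray difference
        if len(dists) >= 2:
            slopes.append(np.polyfit(dists, diffs, 1)[0])  # slope of fitted line
    # inverse-proportion normalization of the slope variance (one possible form)
    return 1.0 / (1.0 + np.var(slopes))
```

A radially symmetric star-like blob yields equal slopes in every direction (variance near zero, score near 1), while a noise spike in one direction lowers the score.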
S103: determining the diffraction stability of each region to be detected in the image to be detected according to the diffraction expressiveness of all regions to be detected of all frames of star point gray images, and determining the filter adjustment coefficient of the image to be detected according to the diffraction stability of all regions to be detected in the image to be detected, the number of pixel points of all regions to be detected, and the highlight persistence factors of the central pixel points of all regions to be detected.
Further, in some embodiments of the present invention, determining the diffraction stability of each region to be detected in the image to be detected according to the diffraction expressiveness of all regions to be detected of all frames of star point gray images includes: taking any region to be detected in the image to be detected as a reference region, taking the central pixel point of the reference region as a reference pixel point, and determining the region to be detected closest to the reference pixel point in each frame of star point gray image as the target region corresponding to that frame; calculating the variance of the diffraction expressiveness of the reference region and of the target regions of all frames of star point gray images as the diffraction variance; and calculating the product-normalized value of the negative of the diffraction variance and the diffraction expressiveness of the reference region to obtain the diffraction stability of the reference region, and replacing the reference region in turn to obtain the diffraction stability of each region to be detected in the image to be detected.
The diffraction stability characterizes how stable the diffraction phenomenon corresponding to the region to be detected is. Because it is influenced by the focal length of the star point gray image, the diffraction stability is analyzed by combining all frames of star point gray images.
In an embodiment of the invention, since the regions to be detected are obtained by screening, their characteristics are analyzed across the multiple frames of star point gray images, and the region to be detected closest to the reference pixel point in each frame of star point gray image is taken as the target region corresponding to that frame. It can be understood that the multiple frames of star point gray images are captured at adjacent moments (in starry-sky photography the shutter is usually open for 10 to 30 seconds), so the same position in adjacent frames of star point gray images represents the same content; however, because of white noise, the region to be detected closest to the reference pixel point in each frame is designated as the target region of that frame, which prevents the region corresponding to white noise from going unmatched.
In an embodiment of the invention, the variance of the diffraction expressiveness of the reference region and of the target regions of all frames of star point gray images is calculated as the diffraction variance. The larger the diffraction variance, the higher the probability that the region corresponds to white noise, because white noise makes the numerical distribution of the diffraction expressiveness irregular; conversely, the smaller the diffraction variance, the lower the probability of white noise. The invention therefore takes the negative of the diffraction variance as the weight of the corresponding reference region, performs a weighted analysis of the diffraction expressiveness of the reference region, and determines the diffraction stability of the reference region.
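A sketch of the diffraction-stability computation for one reference region follows. The patent specifies a "product-normalized value of the negative of the diffraction variance and the diffraction expressiveness" without giving the normalization; the sigmoid squashing used here is an assumed choice for illustration only.

```python
import numpy as np

def diffraction_stability(ref_expr, target_exprs):
    """Stability of one reference region: its expressiveness weighted by
    the negative of the expressiveness variance across frames, then
    squashed to (0, 1). Stable regions score higher than noisy ones."""
    values = np.array([ref_expr] + list(target_exprs), dtype=float)
    diff_var = values.var()                 # diffraction variance across frames
    raw = -diff_var * ref_expr              # product with the negative variance
    return 1.0 / (1.0 + np.exp(-raw))       # assumed normalization (sigmoid)
```

A region whose expressiveness is identical in every frame (zero variance) scores the sigmoid's midpoint, and any cross-frame fluctuation pushes the score lower.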
The diffraction stability in the embodiments of the invention characterizes the diffraction effect of the corresponding region to be detected, and the embodiments of the invention can determine the filter adjustment coefficient according to the diffraction stability. The filter adjustment coefficient is an adjustment parameter for filtering; its purpose is to adaptively adjust the existing filtering method so as to achieve a more accurate and effective filtering effect.
Further, in some embodiments of the present invention, determining the filter adjustment coefficient of the image to be detected according to the diffraction stability of all regions to be detected in the image to be detected, the number of pixel points of all regions to be detected, and the highlight persistence factors of the central pixel points of all regions to be detected includes: calculating the inverse-proportion normalized value of the mean of the absolute differences between the diffraction stability of the reference region and the diffraction stability of every other region to be detected, to obtain the stability coefficient of the reference region; calculating the normalized value of the number of pixel points in the reference region to obtain the quantity coefficient of the reference region; determining the filter factor of the reference region according to the stability coefficient, the quantity coefficient, and the highlight persistence factor of the central pixel point of the reference region, where the stability coefficient, the quantity coefficient, and the highlight persistence factor of the central pixel point of the reference region are all positively correlated with the filter factor, and the filter factor is a normalized value; and calculating the mean of the filter factors of all regions to be detected as the filter adjustment coefficient of the image to be detected.
If the reference region corresponds to a noise point, its diffraction stability is low. In order to analyze an entire gray image to be detected, white-noise analysis of all regions to be detected is realized by determining the stability coefficient of each reference region.
In an embodiment of the invention, the larger the mean of the absolute differences between the diffraction stability of the reference region and the diffraction stability of all other regions to be detected, the more the reference region deviates from the other regions, and the more likely the reference region corresponds to a noise point; inverse-proportion normalization of this mean yields the stability coefficient of the reference region.
In an embodiment of the invention, it can be understood that a point corresponding to white noise is usually a single pixel point, whereas a star point texture is a local diffraction region. The invention therefore analyzes the number of pixel points in the region to be detected: the normalized value of the number of pixel points in the reference region is calculated to obtain the quantity coefficient of the reference region; that is, the larger the number of pixel points, the larger the corresponding quantity coefficient.
The invention combines the stability coefficient, the quantity coefficient, and the highlight persistence factor of the central pixel point of the region to determine the filter factor of the reference region: the higher the stability coefficient, the less likely the reference region corresponds to a noise point, and the higher the quantity coefficient, the more the reference region conforms to the texture features of a star point. The invention can therefore compute the product of the stability coefficient, the quantity coefficient, and the highlight persistence factor of the central pixel point, normalize it to obtain the filter factor, and average the filter factors of all regions to obtain the filter adjustment coefficient used for filtering in subsequent embodiments.
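The combination of the three per-region cues into one image-level coefficient can be sketched as follows. The specific normalizations (`1/(1+x)` for the stability coefficient and max-scaling for the quantity coefficient) are assumptions; the patent only requires each cue to be positively correlated with the filter factor.

```python
import numpy as np

def filter_adjustment_coefficient(stabilities, pixel_counts, highlight_factors):
    """Per-region filter factor = stability coefficient * quantity
    coefficient * highlight persistence factor; the image-level filter
    adjustment coefficient is the mean over all regions."""
    stabilities = np.asarray(stabilities, dtype=float)
    counts = np.asarray(pixel_counts, dtype=float)
    highlights = np.asarray(highlight_factors, dtype=float)
    factors = []
    for i, s in enumerate(stabilities):
        others = np.delete(stabilities, i)
        mean_abs_diff = np.abs(others - s).mean() if others.size else 0.0
        stab_coef = 1.0 / (1.0 + mean_abs_diff)   # inverse-proportion normalization
        qty_coef = counts[i] / counts.max()       # count normalized by the maximum
        factors.append(stab_coef * qty_coef * highlights[i])
    return float(np.mean(factors))
```

When every region is equally stable, equally large, and maximally persistent, the coefficient is 1; a deviant (noise-like) region drags it down.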
S104: adjusting a preset initial filter window of the image to be detected according to the filter adjustment coefficient to obtain a target filter window, filtering the image to be detected according to the target filter window to obtain a target image, and performing error detection according to the target images of all star point gray images to obtain a detection result.
Further, in some embodiments of the present invention, adjusting the preset initial filter window of the image to be detected according to the filter adjustment coefficient to obtain the target filter window includes: calculating the product of the filter adjustment coefficient and the side length of the preset initial filter window of the image to be detected, and rounding the result up to the nearest odd integer as the side length of the target filter window, to obtain the target filter window.
The preset initial filter window is the window used for filtering the image to be detected in the prior art. Because the prior art analyzes different focal lengths with one uniform filter window, errors can arise in focal-length judgment; the invention therefore sets an adaptive filter window for each focal length. Optionally, the preset initial filter window may be 13×13 (a side length of 13), or it may be set according to the specifications of the actual astronomical telescope; this is not limited.
In an embodiment of the invention, the filter adjustment coefficient is multiplied by the side length of the preset initial filter window, and the result is rounded up to the nearest odd integer as the side length of the target filter window, so that the side length of the filter window is adaptively reduced and the accuracy of image processing is improved.
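The window-size adjustment above amounts to a one-line computation; a sketch (with a 13-pixel default side length, as in the optional example):

```python
import math

def target_window_side(coef, init_side=13):
    """Scale the initial window side by the filter adjustment coefficient
    and round up to the nearest odd integer, so the window keeps a
    well-defined center pixel."""
    side = math.ceil(coef * init_side)
    if side % 2 == 0:
        side += 1
    return max(side, 1)
```

For example, a coefficient of 0.5 shrinks a 13-pixel window to 7 (6.5 rounded up to the next odd integer).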
Further, in some embodiments of the present invention, filtering the image to be detected according to the target filter window to obtain the target image includes: performing Gaussian filtering on the image to be detected based on the target filter window using a Gaussian filtering algorithm to obtain the target image.
The Gaussian filtering algorithm is well known in the art; Gaussian filtering effectively removes white noise from an image.
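A self-contained sketch of Gaussian filtering with an odd-sided window follows. The sigma rule of thumb (side/6) and edge padding are assumed choices; in practice a library routine such as `scipy.ndimage.gaussian_filter` would normally be used instead of this naive loop.

```python
import numpy as np

def gaussian_kernel(side, sigma=None):
    """Square Gaussian kernel of odd side length, normalized to sum to 1."""
    sigma = sigma if sigma is not None else side / 6.0  # assumed rule of thumb
    ax = np.arange(side) - side // 2
    g = np.exp(-(ax ** 2) / (2.0 * sigma ** 2))
    k = np.outer(g, g)
    return k / k.sum()

def gaussian_filter_image(img, side):
    """Naive same-size convolution with edge padding."""
    k = gaussian_kernel(side)
    pad = side // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    out = np.empty(img.shape, dtype=float)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            out[y, x] = (padded[y:y + side, x:x + side] * k).sum()
    return out
```

Because the kernel sums to 1, a constant image passes through unchanged while zero-mean white noise is averaged out.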
Further, in some embodiments of the present invention, performing error detection according to the target images of all star point gray images to obtain the detection result includes: calculating the mean of the gray gradients of all pixel points in all target images to obtain a gradient mean, normalizing the gradient mean to obtain the image sharpness, and taking the image sharpness as the detection result.
The image sharpness is obtained through gradient analysis and used as the detection result, thereby realizing sharpness detection of the star point gray image at the given focal length.
In an embodiment of the invention, when the focal-length adjustment is optimal, the star point edges are sharp, that is, the gray values change strongly at the edges; when the focal-length adjustment is poor, a diffraction effect appears and the star point edges are blurred. Sharpness can therefore be judged from gradient information: the larger the gradient mean, the higher the corresponding image sharpness.
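The gradient-mean sharpness measure can be sketched in a few lines. The `x/(1+x)` normalization of the gradient mean is an assumed choice; the patent only requires the gradient mean to be normalized.

```python
import numpy as np

def image_sharpness(img):
    """Mean gradient magnitude over all pixels, normalized to [0, 1):
    larger edge responses (sharper star points) give higher scores."""
    gy, gx = np.gradient(img.astype(float))
    grad_mean = np.hypot(gx, gy).mean()
    return grad_mean / (1.0 + grad_mean)  # assumed normalization
```

A flat image scores 0, and an image with strong edges scores closer to 1, matching the intuition that a well-focused star field has sharp star-point edges.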
Of course, the invention may also perform error detection on the target image in any other feasible manner; for example, the target image may be input into a pre-trained neural network model, which performs error detection on the target image to obtain the detection result, so that the focal length of the astronomical telescope can subsequently be adjusted according to the detection result.
According to the invention, at least two consecutive frames of star point gray images with the same focal length are acquired, and the highlight persistence factor of each point to be detected is determined by combining the values and distribution of the gray values of the pixel points at the same position, so that the random distribution of white noise is analyzed and the highlight persistence of star points is determined. Subsequent region-growing according to the highlight persistence factors yields growth regions that effectively separate white noise, the background, and the regions where star points are located; feature analysis of the growth regions then yields the regions to be detected, which contain the regions corresponding to star points as well as the white-noise regions. By jointly analyzing consecutive multiple frames of star point gray images with the same focal length, the invention effectively removes white noise from the star point gray images and performs feature analysis according to the diffraction effect, thereby carrying out adaptive filtering and denoising, which enhances the denoising effect and improves the accuracy and reliability of subsequent detection results.
It should be noted that the order of the embodiments of the present invention is for description only and does not imply any preference among them. The processes depicted in the accompanying drawings do not necessarily require the particular order shown, or a sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible and may be advantageous.
In this specification, the embodiments are described progressively; identical or similar parts of the embodiments may be referred to mutually, and each embodiment focuses on its differences from the other embodiments.

Claims (8)

1. A method for rapidly detecting errors of an astronomical telescope, comprising the steps of:
acquiring at least two consecutive frames of star point gray images with the same focal length, taking any frame of star point gray image as an image to be detected, taking a pixel point at any position in the image to be detected as a point to be detected, and determining a highlight persistence factor of the point to be detected according to the values and distribution of the gray values of the pixel points at the same position as the point to be detected in all frames of star point gray images;
taking each pixel point in the image to be detected as a starting point, performing region-growing processing according to the highlight persistence factor of each pixel point in the image to be detected to obtain growth regions, and screening the growth regions according to the gray values and highlight persistence factors of the pixel points in each growth region to obtain regions to be detected containing star points; determining a central pixel point of each region to be detected, taking the central pixel point as a starting point, taking the pixel points in different preset directions in the region to be detected as direction pixel points, and determining the diffraction expressiveness of the region to be detected where the central pixel point is located according to the gray value changes of the direction pixel points in all preset directions;
determining the diffraction stability of each region to be detected in the image to be detected according to the diffraction expressiveness of all regions to be detected of all frames of star point gray images, and determining a filter adjustment coefficient of the image to be detected according to the diffraction stability of all regions to be detected in the image to be detected, the number of pixel points of all regions to be detected, and the highlight persistence factors of the central pixel points of all regions to be detected;
adjusting a preset initial filter window of the image to be detected according to the filter adjustment coefficient to obtain a target filter window, filtering the image to be detected according to the target filter window to obtain a target image, and performing error detection according to the target images of all star point gray images to obtain a detection result;
wherein determining the diffraction stability of each region to be detected in the image to be detected according to the diffraction expressiveness of all regions to be detected of all frames of star point gray images comprises:
taking any region to be detected in the image to be detected as a reference region, taking the central pixel point of the reference region as a reference pixel point, and determining the region to be detected closest to the reference pixel point in each frame of star point gray image as a target region corresponding to that frame of star point gray image;
calculating the variance of the diffraction expressiveness of the reference region and the target regions of all frames of star point gray images as a diffraction variance;
calculating a product-normalized value of the negative of the diffraction variance and the diffraction expressiveness of the reference region to obtain the diffraction stability of the reference region, and replacing the reference region in turn to obtain the diffraction stability of each region to be detected in the image to be detected;
and wherein determining the filter adjustment coefficient of the image to be detected according to the diffraction stability of all regions to be detected in the image to be detected, the number of pixel points of all regions to be detected, and the highlight persistence factors of the central pixel points of all regions to be detected comprises:
calculating the inverse-proportion normalized value of the mean of the absolute differences between the diffraction stability of the reference region and the diffraction stability of every other region to be detected, to obtain a stability coefficient of the reference region;
calculating the normalized value of the number of pixel points in the reference region to obtain a quantity coefficient of the reference region;
determining a filter factor of the reference region according to the stability coefficient, the quantity coefficient, and the highlight persistence factor of the central pixel point of the reference region, wherein the stability coefficient, the quantity coefficient, and the highlight persistence factor of the central pixel point of the reference region are all positively correlated with the filter factor, and the filter factor is a normalized value;
and calculating the mean of the filter factors of all regions to be detected as the filter adjustment coefficient of the image to be detected.
2. The method for rapidly detecting errors of an astronomical telescope according to claim 1, wherein determining the highlight persistence factor of the point to be detected according to the values and distribution of the gray values of the pixel points at the same position as the point to be detected in all frames of star point gray images comprises:
taking the pixel points at the same position as the point to be detected in all frames of star point gray images as co-located points;
calculating the variance of the gray values of all co-located points as a co-located variance;
calculating the difference between the maximum gray value of all pixel points and the mean gray value of the co-located points, and normalizing the difference to obtain a gray weight;
and calculating the product of the co-located variance and the gray weight, and performing inverse correlation mapping and normalization to obtain the highlight persistence factor of the point to be detected.
3. The method for rapidly detecting errors of an astronomical telescope according to claim 1, wherein taking each pixel point in the image to be detected as a starting point and performing region-growing processing according to the highlight persistence factor of each pixel point in the image to be detected to obtain growth regions comprises:
taking any pixel point in the image to be detected as a growing point, and taking the absolute difference between the highlight persistence factor of each other pixel point within a preset neighborhood range centered on the growing point and that of the growing point as the highlight difference between that pixel point and the growing point;
performing region-growing processing on the growing point based on a region-growing algorithm, wherein the region-growing condition is that the highlight difference is smaller than a preset difference threshold;
and traversing the entire image to be detected, and dividing the image to be detected into at least two different growth regions.
4. The method for rapidly detecting errors of an astronomical telescope according to claim 1, wherein screening the growth regions according to the gray values and highlight persistence factors of the pixel points in each growth region to obtain the regions to be detected containing star points comprises:
calculating the mean gray value of all pixel points in each growth region to obtain a gray index;
calculating the mean of the highlight persistence factors of all pixel points in each growth region to obtain a highlight persistence index;
and taking a growth region whose gray index is larger than a preset gray threshold and whose highlight persistence index is larger than a preset persistence threshold as a region to be detected.
5. The method for rapidly detecting errors of an astronomical telescope according to claim 1, wherein determining the diffraction expressiveness of the region to be detected where the central pixel point is located according to the gray value changes of the direction pixel points in all preset directions comprises:
calculating the absolute value of the gray value difference between each direction pixel point and the corresponding central pixel point in the same preset direction to obtain the central gray difference of each direction pixel point;
constructing a two-dimensional coordinate system with the distance between each direction pixel point and the corresponding central pixel point in the same preset direction as the abscissa and the central gray difference as the ordinate, and determining the coordinate points of all direction pixel points in the same preset direction in the two-dimensional coordinate system;
performing straight-line fitting on the coordinate points to obtain a fitted straight line corresponding to the preset direction, and taking the slope of the fitted straight line as a fitting slope;
and calculating the inverse-proportion normalized value of the variance of all fitting slopes to obtain the diffraction expressiveness of the region to be detected.
6. The method for rapidly detecting errors of an astronomical telescope according to claim 1, wherein adjusting the preset initial filter window of the image to be detected according to the filter adjustment coefficient to obtain the target filter window comprises:
calculating the product of the filter adjustment coefficient and the side length of the preset initial filter window of the image to be detected, and rounding the product up to the nearest odd integer as the side length of the target filter window, to obtain the target filter window.
7. The method for rapidly detecting errors of an astronomical telescope according to claim 1, wherein filtering the image to be detected according to the target filter window to obtain the target image comprises:
performing Gaussian filtering on the image to be detected based on the target filter window using a Gaussian filtering algorithm to obtain the target image.
8. The method for rapidly detecting errors of an astronomical telescope according to claim 1, wherein performing error detection according to the target images of all star point gray images to obtain the detection result comprises:
calculating the mean of the gray gradients of all pixel points in all target images to obtain a gradient mean, normalizing the gradient mean to obtain an image sharpness, and taking the image sharpness as the detection result.
CN202410160981.XA 2024-02-05 2024-02-05 Astronomical telescope error rapid detection method Active CN117710245B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410160981.XA CN117710245B (en) 2024-02-05 2024-02-05 Astronomical telescope error rapid detection method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410160981.XA CN117710245B (en) 2024-02-05 2024-02-05 Astronomical telescope error rapid detection method

Publications (2)

Publication Number Publication Date
CN117710245A CN117710245A (en) 2024-03-15
CN117710245B true CN117710245B (en) 2024-04-12

Family

ID=90144613

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410160981.XA Active CN117710245B (en) 2024-02-05 2024-02-05 Astronomical telescope error rapid detection method

Country Status (1)

Country Link
CN (1) CN117710245B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6731819B1 (en) * 1999-05-21 2004-05-04 Olympus Optical Co., Ltd. Optical information processing apparatus capable of various types of filtering and image processing
CN102132495A (en) * 2008-05-15 2011-07-20 皇家飞利浦电子股份有限公司 Method, apparatus, and computer program product for compression and decompression of an image dataset

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10290082B2 (en) * 2016-08-23 2019-05-14 Canon Kabushiki Kaisha Image processing apparatus, imaging apparatus, image processing method, and storage medium for performing a restoration process for an image

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6731819B1 (en) * 1999-05-21 2004-05-04 Olympus Optical Co., Ltd. Optical information processing apparatus capable of various types of filtering and image processing
CN102132495A (en) * 2008-05-15 2011-07-20 皇家飞利浦电子股份有限公司 Method, apparatus, and computer program product for compression and decompression of an image dataset

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Design of a refractive/diffractive hybrid large-relative-aperture star sensor optical system; Yan Peipei; Fan Xuewu; He Jianwei; Infrared and Laser Engineering; 2011-12-31 (No. 12); full text *

Also Published As

Publication number Publication date
CN117710245A (en) 2024-03-15

Similar Documents

Publication Publication Date Title
CN113313641B (en) CT image denoising method with self-adaptive median filtering
JP4460839B2 (en) Digital image sharpening device
CN111986120A (en) Low-illumination image enhancement optimization method based on frame accumulation and multi-scale Retinex
CN114742727B (en) Noise processing method and system based on image smoothing
CN105913404A (en) Low-illumination imaging method based on frame accumulation
CN115661669B (en) Method and system for monitoring illegal farmland occupancy based on video monitoring
CN114240789A (en) Infrared image histogram equalization enhancement method based on optimized brightness keeping
CN114119436A (en) Infrared image and visible light image fusion method and device, electronic equipment and medium
CN116342433B (en) Image intelligent denoising method for 3D industrial camera
CN113643214A (en) Image exposure correction method and system based on artificial intelligence
CN112053302A (en) Denoising method and device for hyperspectral image and storage medium
CN113674231B (en) Method and system for detecting iron scale in rolling process based on image enhancement
CN113658067B (en) Water body image enhancement method and system in air tightness detection based on artificial intelligence
CN117218026B (en) Infrared image enhancement method and device
CN107911599B (en) Infrared image global automatic focusing method and device
CN106709890A (en) Method and device for processing low-light video image
CN116740579B (en) Intelligent collection method for territorial space planning data
CN117710245B (en) Astronomical telescope error rapid detection method
CN116342891B (en) Structured teaching monitoring data management system suitable for autism children
CN116309189B (en) Image processing method for emergency transportation classification of ship burn wounded person
CN111089586B (en) All-day star sensor star point extraction method based on multi-frame accumulation algorithm
CN110378271B (en) Gait recognition equipment screening method based on quality dimension evaluation parameters
CN110136085B (en) Image noise reduction method and device
CN111445435B (en) Multi-block wavelet transform-based reference-free image quality evaluation method
CN112819710A (en) Unmanned aerial vehicle jelly effect self-adaptive compensation method and system based on artificial intelligence

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant