CN117152027A - Intelligent telescope based on image processing and artificial intelligence recognition - Google Patents
- Publication number
- CN117152027A (application number CN202311421540.2A)
- Authority
- CN
- China
- Prior art keywords
- image
- target
- training
- module
- calibration
- Prior art date
- Legal status
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/80—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
- G06V10/806—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B23/00—Telescopes, e.g. binoculars; Periscopes; Instruments for viewing the inside of hollow bodies; Viewfinders; Optical aiming or sighting devices
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B23/00—Telescopes, e.g. binoculars; Periscopes; Instruments for viewing the inside of hollow bodies; Viewfinders; Optical aiming or sighting devices
- G02B23/12—Telescopes, e.g. binoculars; Periscopes; Instruments for viewing the inside of hollow bodies; Viewfinders; Optical aiming or sighting devices with means for image conversion or intensification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4084—Scaling of whole images or parts thereof, e.g. expanding or contracting in the transform domain, e.g. fast Fourier transform [FFT] domain scaling
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/90—Determination of colour characteristics
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/774—Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02A—TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
- Y02A90/00—Technologies having an indirect contribution to adaptation to climate change
- Y02A90/10—Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation
Landscapes
- Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Computing Systems (AREA)
- Evolutionary Computation (AREA)
- General Health & Medical Sciences (AREA)
- Medical Informatics (AREA)
- Software Systems (AREA)
- Databases & Information Systems (AREA)
- Artificial Intelligence (AREA)
- Health & Medical Sciences (AREA)
- Multimedia (AREA)
- Astronomy & Astrophysics (AREA)
- Optics & Photonics (AREA)
- Image Processing (AREA)
Abstract
The invention relates to the technical field of telescopes, and particularly discloses an intelligent telescope based on image processing and artificial intelligence recognition. An imaging module acquires a first reference image before a light-shielding barrel is installed and a second reference image after it is installed. An image processing module determines a calibration image from the first and second reference images, and performs light-pollution removal on an acquired initial observation image according to the calibration image to obtain a target observation image. A target recognition module determines a target fusion weight according to the calibration image, performs feature fusion on the initial observation image and the target observation image according to the target fusion weight in a target recognition model, and recognizes a target object from the fused features.
Description
Technical Field
The invention belongs to the technical field of telescopes, and particularly relates to an intelligent telescope based on image processing and artificial intelligence recognition.
Background
A telescope is an optical instrument for observing distant objects. Light entering the instrument is refracted by its lenses or reflected by its mirrors and converged through an aperture to form an image, which is then viewed through a magnifying eyepiece.
Astronomical telescopes are sensitive to the light pollution caused by urban lighting: in a light-polluted city, ground light blurs the image formed by the telescope. To obtain a good observing environment, astronomy enthusiasts generally have to carry the telescope to suburbs, mountain tops, or other locations with little light pollution, or observe late at night when light pollution is low, which is very inconvenient.
Disclosure of Invention
The embodiments of the invention aim to provide an intelligent telescope based on image processing and artificial intelligence recognition, so as to solve the problem described in the background art: a telescope cannot resist light pollution and is therefore inconvenient to use.
In order to achieve the above object, the embodiment of the present invention provides the following technical solutions:
an intelligent telescope based on image processing and artificial intelligence recognition comprises an imaging module, an image processing module, a target recognition module, and a light-shielding barrel, wherein the light-shielding barrel is detachably mounted on an objective lens of the intelligent telescope;
the imaging module is used for acquiring a first reference image and a second reference image, wherein the first reference image is an image acquired when the light shielding barrel is not installed on the objective lens, and the second reference image is an image acquired when the light shielding barrel is installed on the objective lens;
the image processing module is used for determining a calibration image according to the first reference image and the second reference image;
the imaging module is also used for acquiring an initial observation image when the light-shielding barrel is not installed on the objective lens;
the image processing module is also used for carrying out light pollution removal processing on the initial observation image according to the calibration image to obtain a target observation image;
the target recognition module is used for determining target fusion weights according to the calibration image, inputting the initial observation image and the target observation image into a preset target recognition model, carrying out feature fusion on the initial observation image and the target observation image according to the target fusion weights, and recognizing a target object through the fused features.
As a further definition of an embodiment of the present invention, the imaging module specifically comprises the following units:
the sampling point setting unit is used for determining a plurality of elevation angles of the intelligent telescope and setting a plurality of sampling points at each elevation angle;
the first reference image acquisition unit is used for acquiring images at the plurality of sampling points when the light-shielding barrel is not installed on the objective lens, so as to obtain the first reference images;
and the second reference image acquisition unit is used for acquiring images at the plurality of sampling points when the light-shielding barrel is installed on the objective lens, so as to obtain the second reference images.
As a further limitation of the embodiments of the present invention, the image processing module specifically includes the following units:
an image graying unit for graying the first reference image and the second reference image, and calculating a gray value of the first reference image and a gray value of the second reference image;
the gray difference value calculation unit is used for calculating the difference value of the gray values of the first reference image and the second reference image of each sampling point to obtain a first gray difference value;
the calibration image generation unit is used for generating a calibration image with preset size and resolution, determining a corresponding reference pixel point of the sampling point in the calibration image according to the distribution of the sampling point, and adjusting the gray value of the reference pixel point to a first gray difference value of the sampling point;
and the calibration image interpolation unit is used for interpolating the gray values of the reference pixel points to obtain the gray values of the non-reference pixel points, wherein a non-reference pixel point is any pixel point in the calibration image other than a reference pixel point.
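The calibration-image construction performed by these units can be sketched as follows. This is a minimal numpy sketch under stated assumptions: the gray value of a reference image is taken as its mean gray level (one of the options the embodiment allows), and nearest-neighbour interpolation fills the non-reference pixels (the embodiment also allows linear, nonlinear, or median interpolation); the function names are illustrative, not from the patent.

```python
import numpy as np

def gray_value(img):
    # Gray value of a reference image, taken here as the mean gray
    # level over all pixels (an assumption permitted by the embodiment).
    return float(np.mean(img))

def build_calibration_image(first_refs, second_refs, ref_points, shape):
    # First gray difference for each sampling point: gray value of the
    # first reference image minus that of the second reference image.
    diffs = np.array([gray_value(a) - gray_value(b)
                      for a, b in zip(first_refs, second_refs)])
    # Assign each pixel the first gray difference of its nearest
    # reference pixel point (nearest-neighbour interpolation).
    rows, cols = np.indices(shape)
    pts = np.asarray(ref_points, dtype=float)
    dist = np.hypot(rows[..., None] - pts[:, 0], cols[..., None] - pts[:, 1])
    return diffs[np.argmin(dist, axis=-1)]
```

Each reference pixel keeps its own first gray difference exactly, and every other pixel inherits the value of its nearest reference point.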
As a further limitation of the embodiment of the present invention, the image processing module specifically further includes the following units:
the image scaling unit is used for scaling the initial observation image to obtain an initial observation image with the same size and resolution as the calibration image;
the gray value calibration unit is used for calculating a second gray difference value of gray values of pixel points with the same pixel coordinates in the scaled initial observation image and the calibration image;
and the gray value adjusting unit is used for adjusting the gray value of each pixel point of the scaled initial observation image to the second gray difference value to obtain a target observation image.
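The scaling and light-pollution-removal steps performed by these units can be sketched as follows, assuming numpy, a nearest-neighbour resize, and clipping of negative differences to the valid gray range (the patent specifies neither the resampling method nor how negative differences are handled):

```python
import numpy as np

def scale_to(img, shape):
    # Nearest-neighbour resize so each observation pixel lines up with
    # a calibration pixel (illustrative helper; any resampling works).
    r = np.arange(shape[0]) * img.shape[0] // shape[0]
    c = np.arange(shape[1]) * img.shape[1] // shape[1]
    return img[np.ix_(r, c)]

def remove_light_pollution(observation, calibration):
    # Second gray difference: gray value of the scaled observation
    # pixel minus the calibration pixel at the same coordinates.
    scaled = scale_to(observation.astype(float), calibration.shape)
    target = scaled - calibration
    # Clipping is an added assumption, not stated in the patent.
    return np.clip(target, 0, 255)
```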
As a further limitation of the embodiment of the present invention, the object recognition module specifically includes the following units:
the target fusion weight determining unit is used for determining the dispersion of the gray values of the calibration image;
the target fusion weight matching unit is used for searching the weight matched with the dispersion in a preset dispersion weight table to be used as a target fusion weight;
the target recognition unit is used for inputting the target fusion weight, the initial observation image and the target observation image into a preset target recognition model, carrying out feature fusion on the initial observation image and the target observation image according to the target fusion weight, and recognizing a target object through the fused features.
As a further limitation of the embodiment of the present invention, the target fusion weight determining unit specifically includes the following subunits:
an average gray value calculation subunit, configured to calculate an average value of absolute values of gray values of pixel points in the calibration image;
and the dispersion calculating subunit is used for calculating the difference value between the absolute value of the gray value of each pixel point and the average value, and calculating the average value of the difference value as the dispersion.
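The dispersion computed by these subunits can be sketched as below. Read literally, averaging the signed differences would always give zero, so the mean absolute deviation is assumed here; the embodiment also allows the standard deviation instead.

```python
import numpy as np

def dispersion(calibration):
    # Average of the absolute gray values, then the mean absolute
    # difference of each pixel's absolute gray value from that average.
    a = np.abs(calibration.astype(float))
    return float(np.abs(a - a.mean()).mean())
```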
As a further definition of an embodiment of the present invention, the object recognition model is trained by the following modules:
the training image acquisition module is used for acquiring a first training image and a second training image, wherein the second training image is an image obtained by preprocessing the first training image through a training calibration image, and the first training image is marked with a first object;
the model construction module is used for constructing a target recognition model and initializing fusion weights;
the recognition module is used for inputting the first training image and the second training image into the target recognition model, extracting first image features of the first training image and extracting second image features of the second training image, fusing the first image features and the second image features according to the fusion weight to obtain fusion features, and recognizing a second object according to the fusion features;
the model updating module is used for updating the fusion weight according to the first object and the second object;
and the training condition judging module is used for judging whether a preset training condition is met; if yes, training of the target recognition model is stopped and the training calibration image and the fusion weight are stored; if not, the process returns to the recognition module.
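The training flow above can be sketched as a loop over a scalar fusion weight. The update rule and the `predict` stand-in below are hypothetical, since the patent does not specify an optimiser; the preset training condition is taken as "all samples recognized correctly".

```python
import numpy as np

def train_fusion_weight(pairs, labels, predict, lr=0.1, epochs=20):
    # pairs: (first_image_features, second_image_features) per sample;
    # labels: the annotated first object per sample; predict: stand-in
    # for the recognition sub-model, mapping fused features to a class.
    w = 0.5  # initialized fusion weight
    for _ in range(epochs):
        wrong = 0
        for (f1, f2), y in zip(pairs, labels):
            fused = w * f1 + (1.0 - w) * f2  # feature fusion
            if predict(fused) != y:          # second object vs. first object
                wrong += 1
                # Hypothetical update: shift the weight toward whichever
                # branch alone classifies the sample correctly.
                step = lr if predict(f1) == y else -lr
                w = min(1.0, max(0.0, w + step))
        if wrong == 0:  # preset training condition met
            return w
    return w
```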
As a further limitation of the embodiment of the present invention, the object recognition model includes a first feature extraction sub-model, a second feature extraction sub-model, a fusion sub-model, and a recognition sub-model, and the recognition module is specifically configured to:
extracting first image features of a first training image in a first feature extraction sub-model;
extracting second image features of the second training image in the second feature extraction sub-model;
fusing the first image features and the second image features in the fusion sub-model according to the fusion weights to obtain fused features;
identifying the second object from the fused features in the recognition sub-model.
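The four sub-models can be illustrated with simple stand-ins. The histogram features and the nearest-prototype matcher below are assumptions for the sketch, not the patent's actual networks; in practice each branch would be a learned model.

```python
import numpy as np

def extract_features(img, bins=8):
    # Stand-in for a feature-extraction sub-model: a normalized gray
    # histogram of the image.
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    return hist / max(1, img.size)

def fuse(f1, f2, weight):
    # Fusion sub-model: weighted combination of the two branches.
    return weight * f1 + (1.0 - weight) * f2

def recognize(fused, prototypes):
    # Recognition sub-model, sketched as nearest-prototype matching;
    # the prototype classes are purely illustrative.
    return min(prototypes, key=lambda c: np.linalg.norm(fused - prototypes[c]))
```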
As a further definition of an embodiment of the present invention, it further comprises:
and the image display module is used for displaying the target observation image.
As a further definition of an embodiment of the present invention, it further comprises:
and the target information display module is used for displaying target information on the target observation image.
Compared with the prior art, the invention has the beneficial effects that:
the telescope provided by the embodiment of the invention comprises an imaging module, an image processing module, a target recognition module and a shading barrel, wherein a calibration image is determined by installing a first reference image and a second reference image before and after the shading barrel before observation, and the initial observation image is subjected to light removal pollution treatment through the calibration image to obtain a target observation image, and further a target fusion weight is determined according to the calibration image, so that the initial observation image and the target observation image are subjected to feature fusion according to the target fusion weight in a target recognition model, and a target object is recognized through the fused features.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed in the description of the embodiments or of the prior art are briefly introduced below; obviously, the drawings described below show only some embodiments of the present invention.
FIG. 1 illustrates a system block diagram of an intelligent telescope based on image processing and artificial intelligence recognition in accordance with one embodiment of the present invention;
FIG. 2 shows a schematic view of the telescope before and after installation of the light-shielding barrel in an embodiment of the present invention;
FIG. 3 is a schematic diagram showing the elevation angle versus sampling point distribution in an embodiment of the present invention;
FIG. 4 is a block diagram showing the configuration of an image processing module in the embodiment of the present invention;
FIG. 5 is a schematic diagram of a reference pixel point of a calibration image in an embodiment of the invention;
FIG. 6 is a schematic diagram of a model structure of an object recognition model in an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
Fig. 1 shows a system block diagram of an intelligent telescope based on image processing and artificial intelligence recognition according to an embodiment of the present invention. As shown in fig. 1, the intelligent telescope includes an imaging module 1, an image processing module 2, a target recognition module 3, and a light-shielding barrel 4, where the light-shielding barrel 4 is detachably mounted on an objective lens of the intelligent telescope.
The telescope of this embodiment is an optical telescope. The imaging module 1 may include an objective lens, an eyepiece, and an imaging sensor (CCD, CMOS, or the like), which may be disposed in a lens barrel. The image processing module 2 may be a processor for processing the images output by the imaging module 1 and may be used to remove light pollution from an acquired observation image. The target recognition module 3 may be a processor loaded with a target recognition model, which can recognize a target object in the observation image; for example, when the telescope is an astronomical telescope, it may recognize the category of an observed celestial body or features on the celestial body. The light-shielding barrel 4 may be a circular cylinder open at both ends and is detachably mounted on the objective lens of the telescope; fig. 2 shows schematic views before and after the light-shielding barrel 4 is installed.
In this embodiment, the imaging module 1 is configured to collect a first reference image and a second reference image, where the first reference image is collected when the light-shielding barrel 4 is not installed on the objective lens and the second reference image is collected when it is installed. The image processing module 2 is configured to determine a calibration image from the first and second reference images. The imaging module 1 is further configured to collect an initial observation image when the light-shielding barrel 4 is not installed, and the image processing module 2 is further configured to perform light-pollution removal on the initial observation image according to the calibration image to obtain a target observation image. The target recognition module 3 is configured to determine a target fusion weight according to the calibration image, input the initial observation image and the target observation image into a preset target recognition model, perform feature fusion on the two images according to the target fusion weight, and recognize a target object from the fused features.
Before observation, the telescope of this embodiment is first aimed at the direction to be observed without the light-shielding barrel; the first reference image is thus an image of that direction that includes the light pollution. The barrel is then installed and the telescope is aimed at the same direction to collect the second reference image. As shown in fig. 2, the barrel narrows the telescope's field of view; when the barrel is slender, the urban ground light radiated into the air enters it only slightly or not at all, so the second reference image can be treated approximately as an image acquired without light pollution. By processing the first and second reference images, the image processing module obtains a calibration image, which can be regarded as a map of the light-pollution intensity distribution in the direction to be observed. After the initial observation image is acquired, it is processed with the calibration image to obtain the target observation image with the light pollution removed. A target fusion weight is further determined according to the calibration image, so that when the features of the initial observation image and the target observation image are fused in the target recognition model, the fused features better characterize the target object.
In one embodiment, the imaging module may include a sampling point setting unit, a first reference image acquisition unit, and a second reference image acquisition unit. The sampling point setting unit determines a plurality of elevation angles of the intelligent telescope and sets a plurality of sampling points at each elevation angle. The first reference image acquisition unit acquires images at the sampling points when the light-shielding barrel is not installed on the objective lens to obtain the first reference images, and the second reference image acquisition unit acquires images at the sampling points when the barrel is installed to obtain the second reference images.
Specifically, as shown in fig. 3, the elevation angles a0, a1, a2, and a3 used during sampling may be set, and 4 sampling points in the horizontal direction may then be set at each elevation angle, giving 16 sampling points in total. When the light-shielding barrel is not installed on the objective lens, images are collected at the 16 sampling points to obtain 16 first reference images; when the barrel is installed, 16 second reference images are collected at the same points. In practical applications, a person skilled in the art may of course set more elevation angles at different angles and different numbers of sampling points; the more sampling points there are, the more accurately the calibration image represents the light-pollution intensity distribution. This embodiment does not limit the manner of setting the sampling points.
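The 16-point sampling grid can be sketched as follows. The even azimuth spacing is an assumption for the sketch, since the embodiment fixes only the counts.

```python
def sampling_points(elevations, per_elevation=4):
    # Evenly spaced azimuths in the horizontal direction at each
    # elevation angle, both in degrees (spacing is assumed).
    step = 360.0 / per_elevation
    return [(e, i * step) for e in elevations for i in range(per_elevation)]
```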
As shown in fig. 4, the image processing module 2 of this embodiment may include an image graying unit, a gray difference calculating unit, a calibration image generating unit, and a calibration image interpolating unit. The image graying unit grays the first and second reference images and calculates their gray values. The gray difference calculating unit calculates, for each sampling point, the difference between the gray values of the first and second reference images to obtain the first gray difference. The calibration image generating unit generates a calibration image of preset size and resolution, determines the reference pixel point corresponding to each sampling point in the calibration image according to the distribution of the sampling points, and sets the gray value of that reference pixel point to the first gray difference of the sampling point. The calibration image interpolating unit interpolates the gray values of the reference pixel points to obtain the gray values of the non-reference pixel points, i.e., the pixel points in the calibration image other than the reference pixel points.
The gray value of a first reference image may be the average of the gray values of all its pixels, and likewise for a second reference image. Alternatively, the pixels may first be clustered to obtain a plurality of cluster centers, and the average of the cluster centers taken as the gray value of the reference image. The difference between the gray values of the first and second reference images of each sampling point is then calculated to obtain the first gray difference. For example, the first gray difference of sampling point p1 is the difference between the gray values of the first and second reference images collected at p1, and the first gray difference of sampling point p2 is defined analogously.
Referring to fig. 5, which shows a schematic diagram of the calibration image: after the first reference image of fig. 3 is scaled to the size and resolution of the calibration image, each of the sampling points P1-P16 is mapped into the calibration image to obtain the reference pixel points O1-O16, and the gray value of each reference pixel point is set to the first gray difference of the corresponding sampling point. For example, the gray value of reference pixel point O1 equals the first gray difference of sampling point P1, the gray value of O2 equals the first gray difference of P2, and so on.
After the gray value of each reference pixel point is determined, the gray values of the non-reference pixel points can be obtained by linear interpolation, nonlinear interpolation, median interpolation, or other methods, yielding the final calibration image, which represents the light-pollution intensity distribution in the direction to be observed.
As shown in fig. 4, the image processing module 2 of this embodiment may further include an image scaling unit, a gray value calibration unit, and a gray value adjustment unit. The image scaling unit scales the initial observation image to the same size and resolution as the calibration image. The gray value calibration unit calculates, for pixels with the same pixel coordinates in the scaled initial observation image and the calibration image, the second gray difference of their gray values. The gray value adjustment unit sets the gray value of each pixel of the scaled initial observation image to the corresponding second gray difference, obtaining the target observation image.
Specifically, after the imaging module 1 collects the initial observation image, it may be scaled to the size and resolution of the calibration image so that its pixels correspond one-to-one with those of the calibration image. The gray values of pixels with the same x and y pixel coordinates in the two images are then subtracted to obtain the second gray differences, and the gray value of every pixel of the initial observation image is set to its corresponding second gray difference, yielding the target observation image, i.e., the image with the light-pollution interference removed.
As shown in fig. 6, in one embodiment the target recognition module 3 includes a target fusion weight determining unit, a target fusion weight matching unit, and a target recognition unit. The target fusion weight determining unit determines the dispersion of the gray values of the calibration image. The target fusion weight matching unit searches a preset dispersion-weight table for the weight matching that dispersion and uses it as the target fusion weight. The target recognition unit inputs the target fusion weight, the initial observation image, and the target observation image into a preset target recognition model, performs feature fusion on the two images according to the target fusion weight, and recognizes the target object from the fused features.
Specifically, the target fusion weight determining unit includes an average gray value calculating subunit and a dispersion calculating subunit. The average gray value calculating subunit calculates the average of the absolute values of the gray values of the pixels in the calibration image; the dispersion calculating subunit calculates the difference between the absolute gray value of each pixel and that average, and takes the average of those differences as the dispersion. In practical applications, the standard deviation of the absolute gray values of all pixels of the calibration image could equally be used as the dispersion; this embodiment does not limit the manner of calculating it.
In this embodiment, the target fusion weight is determined by the dispersion of the gray values of all pixels in the calibration image. A larger dispersion indicates that the light pollution in the direction to be observed is unevenly distributed, so the initial observation image should be given a smaller weight during fusion; conversely, a smaller dispersion indicates that the light pollution is uniform or absent, so the initial observation image should be given a larger weight during fusion, allowing more detail features to be extracted from it.
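The dispersion weight table lookup might look as follows. The thresholds and weight values are purely illustrative assumptions; the patent only requires that a larger dispersion maps to a smaller fusion weight for the initial observation image:

```python
def match_fusion_weight(disp: float, table=None) -> float:
    """Look up the target fusion weight for a dispersion value in a
    preset dispersion weight table of (upper bound, weight) pairs."""
    if table is None:
        # Hypothetical table: weight of the initial observation image
        # shrinks as the dispersion (unevenness of light pollution) grows.
        table = [(5.0, 0.8), (15.0, 0.5), (30.0, 0.3)]
    for upper, weight in table:
        if disp <= upper:
            return weight
    return 0.1  # very uneven light pollution: rely mostly on the cleaned image
```

Any monotonically non-increasing table would satisfy the rule stated in the paragraph above.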
In one embodiment, the target recognition model is trained by a training image acquisition module, a model construction module, a recognition module, a model updating module, and a training condition judging module. The training image acquisition module acquires a first training image and a second training image, where the second training image is obtained by preprocessing the first training image with a training calibration image, and the first training image is labeled with a first object. The model construction module constructs the target recognition model and initializes the fusion weight. The recognition module inputs the first training image and the second training image into the target recognition model, extracts the first image features of the first training image and the second image features of the second training image, fuses the first image features and the second image features according to the fusion weight to obtain fusion features, and recognizes a second object from the fusion features. The model updating module updates the fusion weight according to the first object and the second object. The training condition judging module judges whether a preset training condition is met; if so, training of the target recognition model is stopped and the training calibration image and the fusion weight are stored, and if not, control returns to the recognition module.
The built target recognition model comprises a first feature extraction sub-model, a second feature extraction sub-model, a fusion sub-model, and a recognition sub-model. The first feature extraction sub-model, the second feature extraction sub-model, and the recognition sub-model may be pre-trained models, so only the fusion weight in the fusion sub-model needs to be trained. The first image features of the first training image are extracted by the first feature extraction sub-model, the second image features of the second training image are extracted by the second feature extraction sub-model, the fusion features are obtained by fusing the first and second image features according to the fusion weight in the fusion sub-model, and the second object is recognized from the fusion features by the recognition sub-model.
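One plausible reading of the fusion sub-model is a convex combination of the two feature maps, with the target fusion weight applied to the initial observation image's features; the patent does not fix the exact fusion formula, so the following is an assumption:

```python
import numpy as np

def fuse_features(f_initial: np.ndarray, f_target: np.ndarray, w: float) -> np.ndarray:
    """Weighted feature fusion: w scales the features of the initial
    observation image and (1 - w) those of the de-polluted target image."""
    return w * f_initial + (1.0 - w) * f_target
```

Under this reading, the dispersion rule above follows naturally: a small w (high dispersion) lets the cleaned image dominate, while a large w (low dispersion) preserves detail from the raw frame.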
A loss rate is calculated from the labeled first object and the recognized second object, for example with a loss function such as mean square error or cosine similarity. The fusion weight in the fusion sub-model is then adjusted according to the loss rate until the loss rate is smaller than a preset threshold or the number of training iterations reaches a preset count, after which the training calibration image and the fusion weight are stored; specifically, the dispersion of the gray values of all pixels in the training calibration image is calculated, and the dispersion is stored together with the fusion weight. The training calibration image is the image used when the first training image undergoes the light pollution removal processing.
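The loss-driven weight update can be sketched as a one-parameter gradient descent on an MSE loss over features. The gradient step, learning rate, and initial weight below are illustrative assumptions consistent with the description (stop when the loss drops below a threshold or the iteration budget runs out):

```python
import numpy as np

def train_fusion_weight(f1, f2, target_feat, lr=0.1, threshold=1e-4, max_iters=1000):
    """Fuse features with the current weight, measure MSE loss against the
    features of the labeled first object, and nudge the fusion weight by
    gradient descent until the loss is below the threshold or the
    iteration limit is reached. Returns the final weight and loss."""
    w = 0.5  # initialized fusion weight
    loss = float("inf")
    for _ in range(max_iters):
        fused = w * f1 + (1.0 - w) * f2
        residual = fused - target_feat
        loss = float((residual ** 2).mean())
        if loss < threshold:
            break
        # d(loss)/dw = 2 * mean(residual * (f1 - f2))
        w -= lr * 2.0 * float((residual * (f1 - f2)).mean())
    return w, loss
```

With toy features whose optimal mix is w = 0.3, the loop converges geometrically toward that value before hitting the iteration budget.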
At recognition time, the target fusion weight is matched via the calibration image; after the target fusion weight is loaded into the target recognition model, the initial observation image and the target observation image are input into the target recognition model to recognize the target object.
In one embodiment, the intelligent telescope based on image processing and artificial intelligence recognition further includes an image display module and a target information display module. The image display module is configured to display the target observation image, and the target information display module is configured to display target information on the target observation image. In this way, the target observation image with light pollution removed, together with information about the targets recognized in it, such as each target's name, a detailed description, and its distance from the telescope, can be shown on the telescope's electronic display screen or on an upper computer connected to the telescope.
It is noted that relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
Although embodiments of the present invention have been shown and described, it will be understood by those skilled in the art that various changes, modifications, substitutions and alterations can be made therein without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.
Claims (10)
1. The intelligent telescope based on image processing and artificial intelligence recognition is characterized by comprising an imaging module, an image processing module, a target recognition module and a light shielding barrel, wherein the light shielding barrel is detachably mounted on an objective lens of the intelligent telescope;
the imaging module is used for acquiring a first reference image and a second reference image, wherein the first reference image is an image acquired when the light shielding barrel is not installed on the objective lens, and the second reference image is an image acquired when the light shielding barrel is installed on the objective lens;
the image processing module is used for determining a calibration image according to the first reference image and the second reference image;
the imaging module is also used for acquiring an initial observation image when the light shielding barrel is not installed on the objective lens;
the image processing module is also used for carrying out light pollution removal processing on the initial observation image according to the calibration image to obtain a target observation image;
the target recognition module is used for determining target fusion weights according to the calibration image, inputting the initial observation image and the target observation image into a preset target recognition model, carrying out feature fusion on the initial observation image and the target observation image according to the target fusion weights, and recognizing a target object through the fused features.
2. The intelligent telescope based on image processing and artificial intelligence recognition according to claim 1, wherein the imaging module comprises the following units:
the sampling point setting unit is used for determining a plurality of elevation angles of the intelligent telescope and setting a plurality of sampling points at each elevation angle;
the first reference image acquisition unit is used for acquiring images at the plurality of sampling points when the light shielding barrel is not installed on the objective lens, so as to obtain the first reference image;
and the second reference image acquisition unit is used for acquiring images at the plurality of sampling points when the light shielding barrel is installed on the objective lens, so as to obtain the second reference image.
3. The intelligent telescope based on image processing and artificial intelligence recognition according to claim 2, wherein the image processing module comprises the following units:
an image graying unit for graying the first reference image and the second reference image, and calculating a gray value of the first reference image and a gray value of the second reference image;
the gray difference value calculation unit is used for calculating the difference value of the gray values of the first reference image and the second reference image of each sampling point to obtain a first gray difference value;
the calibration image generation unit is used for generating a calibration image with preset size and resolution, determining a corresponding reference pixel point of the sampling point in the calibration image according to the distribution of the sampling point, and adjusting the gray value of the reference pixel point to a first gray difference value of the sampling point;
and the calibration image interpolation unit is used for interpolating the gray values of the reference pixel points to obtain the gray values of non-reference pixel points, wherein a non-reference pixel point is a pixel point in the calibration image other than the reference pixel points.
4. The intelligent telescope based on image processing and artificial intelligence recognition according to claim 3, wherein the image processing module comprises the following units:
the image scaling unit is used for scaling the initial observation image to obtain an initial observation image with the same size and resolution as the calibration image;
the gray value calibration unit is used for calculating a second gray difference value of gray values of pixel points with the same pixel coordinates in the scaled initial observation image and the calibration image;
and the gray value adjusting unit is used for adjusting the gray value of each pixel point of the scaled initial observation image to the second gray difference value to obtain a target observation image.
5. The intelligent telescope based on image processing and artificial intelligence recognition according to any one of claims 1-4, wherein the target recognition module specifically comprises the following units:
the target fusion weight determining unit is used for determining the dispersion of the gray values of the calibration image;
the target fusion weight matching unit is used for searching the weight matched with the dispersion in a preset dispersion weight table to be used as a target fusion weight;
the target recognition unit is used for inputting the target fusion weight, the initial observation image and the target observation image into a preset target recognition model, carrying out feature fusion on the initial observation image and the target observation image according to the target fusion weight, and recognizing a target object through the fused features.
6. The intelligent telescope based on image processing and artificial intelligence recognition according to claim 5, wherein the target fusion weight determining unit specifically comprises the following sub-units:
an average gray value calculation subunit, configured to calculate an average value of absolute values of gray values of pixel points in the calibration image;
and the dispersion calculating subunit is used for calculating the difference value between the absolute value of the gray value of each pixel point and the average value, and calculating the average value of the difference value as the dispersion.
7. The intelligent telescope based on image processing and artificial intelligence recognition according to claim 5, wherein the target recognition model is trained by:
the training image acquisition module is used for acquiring a first training image and a second training image, wherein the second training image is an image obtained by preprocessing the first training image through a training calibration image, and the first training image is marked with a first object;
the model construction module is used for constructing a target recognition model and initializing fusion weights;
the recognition module is used for inputting the first training image and the second training image into the target recognition model, extracting first image features of the first training image and extracting second image features of the second training image, fusing the first image features and the second image features according to the fusion weight to obtain fusion features, and recognizing a second object according to the fusion features;
the model updating module is used for updating the fusion weight according to the first object and the second object;
and the training condition judging module is used for judging whether a preset training condition is met, if yes, stopping training the target recognition model, storing the training calibration image and the fusion weight, and if not, returning to the recognition module.
8. The intelligent telescope based on image processing and artificial intelligence recognition according to claim 7, wherein the target recognition model comprises a first feature extraction sub-model, a second feature extraction sub-model, a fusion sub-model and a recognition sub-model, and the recognition module is specifically configured to:
extracting first image features of a first training image in a first feature extraction sub-model;
extracting second image features of the second training image in the second feature extraction sub-model;
fusing the first image features and the second image features in the feature fusion sub-model according to the fusion weights to obtain fusion features;
identifying a second object from the fusion features in the recognition sub-model.
9. The intelligent telescope based on image processing and artificial intelligence recognition according to any one of claims 1-4, further comprising:
and the image display module is used for displaying the target observation image.
10. The intelligent telescope based on image processing and artificial intelligence recognition according to claim 9, further comprising:
and the target information display module is used for displaying target information on the target observation image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311421540.2A CN117152027B (en) | 2023-10-31 | 2023-10-31 | Intelligent telescope based on image processing and artificial intelligent recognition |
Publications (2)
Publication Number | Publication Date |
---|---|
CN117152027A true CN117152027A (en) | 2023-12-01 |
CN117152027B CN117152027B (en) | 2024-02-09 |
Family
ID=88906520
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111209907A (en) * | 2019-12-20 | 2020-05-29 | 广西柳州联耕科技有限公司 | Artificial intelligent identification method for product characteristic image in complex light pollution environment |
CN113099206A (en) * | 2021-04-01 | 2021-07-09 | 苏州科达科技股份有限公司 | Image processing method, device, equipment and storage medium |
WO2021169723A1 (en) * | 2020-02-27 | 2021-09-02 | Oppo广东移动通信有限公司 | Image recognition method and apparatus, electronic device, and storage medium |
CN113610695A (en) * | 2021-05-07 | 2021-11-05 | 浙江兆晟科技股份有限公司 | Infrared telescope full-frame imaging output method and system |
CN114862723A (en) * | 2022-05-31 | 2022-08-05 | 中国科学院上海天文台 | Astronomical telescope image field distortion calibration method based on measurement of dense star field |
WO2022222788A1 (en) * | 2021-04-19 | 2022-10-27 | 北京字跳网络技术有限公司 | Image processing method and apparatus |
CN115272952A (en) * | 2022-06-21 | 2022-11-01 | 重庆市科源能源技术发展有限公司 | Safety monitoring method, device and system for new energy capital construction and storage medium |
CN116958021A (en) * | 2022-11-16 | 2023-10-27 | 腾讯科技(深圳)有限公司 | Product defect identification method based on artificial intelligence, related device and medium |
Non-Patent Citations (1)
Title |
---|
MEN Tao et al.: "Analysis of Key Technologies of Photoelectric Detection Telescopes for Stratospheric Airships", Aero Weaponry (《航空兵器》), no. 03, pages 57 - 60 *
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||