CN116389901B - On-orbit intelligent exposure and focusing method and system for space camera and electronic equipment

Info

Publication number
CN116389901B
Authority
CN
China
Prior art keywords
space
image
focusing
camera
exposure
Prior art date
Legal status
Active
Application number
CN202310294801.2A
Other languages
Chinese (zh)
Other versions
CN116389901A (en)
Inventor
武奥迪
万雪
舒磊正
左健宏
张晟洋
Current Assignee
Technology and Engineering Center for Space Utilization of CAS
Original Assignee
Technology and Engineering Center for Space Utilization of CAS
Priority date
Filing date
Publication date
Application filed by Technology and Engineering Center for Space Utilization of CAS
Priority to CN202310294801.2A
Publication of CN116389901A
Application granted
Publication of CN116389901B


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules
    • H04N23/64: Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image
    • H04N23/67: Focus control based on electronic image sensor signals
    • H04N23/671: Focus control based on electronic image sensor signals in combination with active ranging signals, e.g. using light or sound signals emitted toward objects
    • H04N23/70: Circuitry for compensating brightness variation in the scene
    • H04N23/73: Circuitry for compensating brightness variation in the scene by influencing the exposure time
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A: TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00: Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10: Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00: Road transport of goods or passengers
    • Y02T10/10: Internal combustion engine [ICE] based vehicles
    • Y02T10/40: Engine management systems


Abstract

The application relates to the technical field of space cameras, and in particular to an on-orbit intelligent exposure and focusing method and system for a space camera, and to electronic equipment. The method comprises the following steps: extracting a target detection frame and its confidence from the image shot by the space camera using a deep-learning space target detection model, judging whether the confidence of the target detection frame is larger than a preset confidence threshold, and determining the space target region in different ways accordingly; calculating the exposure time for the space camera to shoot the next frame of image using an adaptive step adjustment algorithm; judging whether the laser radar distance corresponding to the current frame image is within a preset range, and obtaining the focusing gear for the space camera to shoot the next frame of image using different focusing modes accordingly; and controlling the space camera to shoot the next frame of image according to the exposure time and the focusing gear of the next frame of image. A normally exposed and clear space target image can thus be obtained, and the method is suitable for on-orbit scenes.

Description

On-orbit intelligent exposure and focusing method and system for space camera and electronic equipment
Technical Field
The application relates to the technical field of space cameras, and in particular to an on-orbit intelligent exposure and focusing method and system for a space camera, and to electronic equipment.
Background
Space cameras with low power consumption, small volume and high reliability are increasingly applied in the aerospace field, supporting key applications such as recording critical on-orbit processes, space rendezvous and docking, and vision-based relative navigation to non-cooperative space targets. Because of the particularity of aerospace missions, a telemetry-invisible arc segment exists while the spacecraft operates, space illumination changes rapidly, and manual intervention lags behind; the space camera is therefore required to analyze the acquired images automatically and adjust the camera parameters rapidly, so as to obtain clear images.
On-orbit intelligent exposure and focusing of a space camera means controlling the exposure time and gain of the camera to control the brightness of the acquired image, so as to obtain an image with little information loss and normal exposure, and controlling the camera focusing gear to obtain a sharply focused image. The space environment differs from the ground environment: the camera field of view contains large background areas of high contrast against the space target, and the image has a high dynamic range, so global exposure and focusing are unsuitable. Instead, the space target region must first be determined, and exposure and focusing then performed locally on the space target; meanwhile, because space missions have short action windows and few imaging opportunities, the camera must be adjusted to a reasonable state quickly.
The core problems of the on-orbit intelligent exposure and focusing technology of the space camera can be summarized as the following three:
1) Accurately finding the position of the space target in the image, so that exposure and focusing can be oriented to the space target;
2) A fast camera exposure time and gain adjustment algorithm, so that imaging converges to the ideal exposure interval as soon as possible;
3) A fast and stable camera auto-focusing technique, so that imaging is clear and stable.
the core technology of the automatic exposure method of the camera is a photometry method and a camera parameter adjusting method. Specifically:
1) The photometry method is mainly divided into 4 types, and specifically comprises a method based on average gray scale, a method based on information entropy, a method based on partition weight average value and a method based on brightness histogram:
(1) The average-gray-level method directly uses the average gray level of the whole image as the photometric result.
The disadvantage of the average-gray-level method is: it cannot achieve good results against varied backgrounds and is unsuitable for scenes with a large dynamic range.
(2) The information-entropy method judges whether the most suitable exposure has been reached by calculating the image information entropy.
The disadvantages of the information-entropy method are: noise immunity is poor, and because the information entropy curve is non-monotonic, the extremum search converges slowly or not at all, so image brightness oscillation easily occurs.
(3) The partition-weighted-average method divides the image into partitions according to position in the camera frame, setting the main object as the region of interest and the other regions as background.
The disadvantage of the partition-weighted-average method is: the region of interest must be selected manually, whereas the position of the space target in the scene is not fixed and cannot be selected manually.
(4) The luminance-histogram method sets several different luminance bins, with pixels in different bins taking different weights.
The disadvantage of the luminance-histogram method is: fixed bin thresholds are used while space illumination conditions change greatly, so the method cannot cover all illumination conditions.
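As a concrete illustration of schemes (1) and (4) above, here is a minimal sketch in Python; it is not taken from the patent, and the bin edges and per-bin weights are illustrative assumptions:

```python
import numpy as np

def average_gray_metering(img: np.ndarray) -> float:
    """Type (1): the mean gray level of the whole image is the photometric value."""
    return float(img.mean())

def histogram_bin_metering(img: np.ndarray) -> float:
    """Type (4): pixels are grouped into fixed luminance bins and each bin
    contributes with a fixed weight. Bin edges and weights are assumed."""
    bins = [0, 64, 128, 192, 256]      # assumed luminance bin edges (8-bit image)
    weights = [0.1, 0.3, 0.4, 0.2]     # assumed per-bin weights
    value, total_w = 0.0, 0.0
    for lo, hi, w in zip(bins[:-1], bins[1:], weights):
        mask = (img >= lo) & (img < hi)
        if mask.any():
            value += w * img[mask].mean()
            total_w += w
    return value / total_w if total_w > 0 else float(img.mean())
```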
2) Camera parameter adjustment methods are mainly divided into three types: the iterative method, the direct method and the fuzzy control method. Specifically:
(1) The iterative method adjusts parameters such as exposure time and camera gain by comparing the photometric value with the ideal exposure value. The adjustment step can be a fixed step or a dynamic step based on a lookup table; the dynamic-step method reads the step from a table indexed by the difference between the photometric value and the ideal exposure value, and the larger the difference, the larger the step, so that the ideal exposure interval need not be approached one small iteration at a time (a lookup-table sketch follows this list).
The disadvantages of the iterative method are: with a small fixed step, iteration is slow, and with a large one, exposure adjustment tends to oscillate in the later stage; the dynamic-step method converges faster, but it is still a piecewise-designed lookup table with a limited step range, so adjustment is slow when the image is severely underexposed or overexposed.
(2) The direct method determines the unique suitable exposure parameters from the photometric result of the current image, adjusting the exposure to the ideal exposure interval in a single camera parameter update.
The disadvantages of the direct method are: many ground experiments are required, and because the relationship between exposure time, camera gain and image brightness is complex and differs across imaging environments, it is difficult to apply to a space scene.
(3) The fuzzy control method designs a fuzzy controller that compares the designed photometric value with the ideal exposure value and adjusts the camera parameters to reach the ideal exposure interval.
The disadvantages of the fuzzy control method are: a complex fuzzy controller must be built to control the camera parameters, making the whole scheme complicated.
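The lookup-table dynamic step of item (1), whose limited step range the application later replaces with a continuous adaptive step, might look like the following sketch; the deviation bounds and step sizes are illustrative assumptions, not values from the patent:

```python
def lookup_table_step(photometric: float, ideal: float) -> float:
    """Dynamic-step iterative adjustment from a piecewise lookup table, as in
    the prior-art method described above. Note the step saturates at the last
    row, which is the 'limited step range' weakness."""
    diff = ideal - photometric
    # (deviation upper bound, step size); the first row is a dead zone
    table = [(5, 0.0), (15, 0.5), (40, 2.0), (80, 5.0), (float("inf"), 10.0)]
    for bound, step in table:
        if abs(diff) <= bound:
            # sign follows the deviation: raise exposure when underexposed
            return step if diff >= 0 else -step
    return 0.0  # unreachable: the last bound is infinite
```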
Camera auto-focusing methods mainly comprise active focusing and passive focusing. Specifically:
1) Active focusing includes two algorithms: ranging focusing based on triangulation, and ranging focusing based on signal reflection:
(1) Triangulation ranging focusing uses two reflectors, one of which is driven to rotate by a motor; when the two light paths coincide, the distance between the camera and the target is calculated from the mounting relation between the reflectors and the rotation angle, and focusing is completed by looking up a table relating distance to focusing gear.
The disadvantage of triangulation ranging focusing is: its mechanical structure is complex, which increases the volume and weight of the camera.
(2) Signal-reflection ranging focusing transmits ultrasonic, radar or laser signals toward the target, calculates the camera-to-target distance from the signal echo, and completes focusing by table lookup.
The disadvantages of signal-reflection ranging focusing are: ultrasonic waves are unsuitable for a space scene, and single-point laser must track and stay aimed at the target, which makes the design complex; the laser radar commonly used by spacecraft during on-orbit servicing has a wide field of view and high precision, but its working distance is limited and its power consumption is high, so a ranging result is available only within part of the distance range.
2) Passive focusing includes focus detection methods and image processing-based methods:
(1) Focus detection methods can be further divided into contrast detection and phase detection. Contrast detection places two photosensitive devices at equal distances in front of and behind the imaging plane and drives the focusing lens with a motor; when the contrasts computed from the two devices are equal, the imaging plane is at the in-focus position. Phase detection places an auto-focus module in the camera and judges the phase difference from the peak positions of two auto-focus sensors; focusing succeeds when the phase difference is zero.
The disadvantages of focus detection methods are: contrast detection needs additional devices, and raising light sensitivity in a low-light environment easily introduces noise; phase detection needs an additional module, its mechanism is complex and its precision low.
(2) Image-processing-based methods design an image sharpness evaluation function and judge focus directly from its value: sharpness is high in focus and the image is blurred out of focus. The core technologies are the sharpness evaluation algorithm and the algorithm for searching the extremum of the sharpness function. Sharpness evaluation algorithms include the Brenner function, the SMD function, the Tenengrad function, etc. (minimal sketches follow this list); extremum search algorithms include the traversal method, the hill-climbing method, etc.
The disadvantages of image-processing-based methods are: the sharpness evaluation index is scene-specific, so an index suitable for the space scene must be selected; the hill-climbing extremum search depends heavily on the results of preceding frames and easily falls into a local extremum, so it cannot meet the high stability demanded by aerospace missions; the traversal method requires a suitable number of focusing gears, searching slowly when there are too many gears and focusing imprecisely when there are too few.
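For reference, minimal sketches of the three sharpness evaluation functions named above, written from their standard definitions rather than from the patent text:

```python
import numpy as np

def brenner(img: np.ndarray) -> float:
    """Brenner: sum of squared gray differences at a 2-pixel horizontal offset."""
    d = img[:, 2:].astype(np.float64) - img[:, :-2].astype(np.float64)
    return float((d ** 2).sum())

def smd(img: np.ndarray) -> float:
    """SMD (sum of modulus of differences): absolute neighbor differences."""
    f = img.astype(np.float64)
    return float(np.abs(f[1:, :] - f[:-1, :]).sum()
                 + np.abs(f[:, 1:] - f[:, :-1]).sum())

def tenengrad(img: np.ndarray) -> float:
    """Tenengrad: squared Sobel gradient magnitude summed over the image."""
    f = img.astype(np.float64)
    # 3x3 Sobel responses computed with explicit shifts to avoid extra deps
    gx = (f[:-2, 2:] + 2 * f[1:-1, 2:] + f[2:, 2:]) \
       - (f[:-2, :-2] + 2 * f[1:-1, :-2] + f[2:, :-2])
    gy = (f[2:, :-2] + 2 * f[2:, 1:-1] + f[2:, 2:]) \
       - (f[:-2, :-2] + 2 * f[:-2, 1:-1] + f[:-2, 2:])
    return float((gx ** 2 + gy ** 2).sum())
```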
The above algorithms all have their respective shortcomings and cannot be fully applied to space scenes, which have high dynamic range, complex illumination changes and high stability requirements.
Disclosure of Invention
The application aims to overcome the above defects of the prior art and provides an on-orbit intelligent exposure and focusing method and system for a space camera, and electronic equipment.
The technical scheme of the on-orbit intelligent exposure and focusing method of the space camera is as follows:
acquiring a current frame image shot by a space camera and a laser radar distance corresponding to the current frame image;
extracting a target detection frame and its confidence from the image shot by the space camera using a deep-learning space target detection model, and judging whether the confidence of the target detection frame is larger than a preset confidence threshold, to obtain a first judgment result; when the first judgment result is yes, determining the region selected by the target detection frame in the current frame image as the space target region; when the first judgment result is no, extracting the space target region from the current frame image using a foreground-background segmentation algorithm based on the spatial-scene average threshold; performing photometry on the space target region to obtain a photometric result, inputting the photometric result into an adaptive step adjustment algorithm, and calculating the exposure time for the space camera to shoot the next frame of image;
judging whether the laser radar distance corresponding to the current frame image is within a preset range, to obtain a second judgment result; when the second judgment result is yes, using active focusing to obtain the focusing gear for the space camera to shoot the next frame of image, and when the second judgment result is no, using passive focusing to obtain the focusing gear for the space camera to shoot the next frame of image;
and controlling the space camera to shoot the next frame of image according to the exposure time and the focusing gear of the next frame of image.
The technical scheme of the on-orbit intelligent exposure and focusing system of the space camera is as follows:
the device comprises a first acquisition module, an exposure time calculation module, a focusing determination module and a shooting control module;
the first acquisition module is used for: acquiring a current frame image shot by a space camera and a laser radar distance corresponding to the current frame image;
the exposure time calculation module is used for: extracting a target detection frame and its confidence from the image shot by the space camera using a deep-learning space target detection model, and judging whether the confidence of the target detection frame is larger than a preset confidence threshold, to obtain a first judgment result; when the first judgment result is yes, determining the region selected by the target detection frame in the current frame image as the space target region; when the first judgment result is no, extracting the space target region from the current frame image using a foreground-background segmentation algorithm based on the spatial-scene average threshold;
performing photometry on the space target region to obtain a photometric result, inputting the photometric result into an adaptive step adjustment algorithm, and calculating the exposure time for the space camera to shoot the next frame of image;
the focusing determination module is used for: judging whether the laser radar distance corresponding to the current frame image is within a preset range, to obtain a second judgment result; when the second judgment result is yes, using active focusing to obtain the focusing gear for the space camera to shoot the next frame of image, and when the second judgment result is no, using passive focusing to obtain the focusing gear for the space camera to shoot the next frame of image;
the shooting control module is used for: controlling the space camera to shoot the next frame of image according to the exposure time and the focusing gear of the next frame of image.
The storage medium of the application stores instructions which, when read by a computer, cause the computer to execute any one of the above on-orbit intelligent exposure and focusing methods for a space camera.
An electronic device of the present application includes a processor and the storage medium described above, where the processor executes instructions in the storage medium.
The beneficial effects of the application are as follows:
1) Aiming at the problem that the partition-weight method requires manually selecting the possible positions of the target in the field of view and therefore does not meet the on-orbit autonomy requirement, the application acquires the space target region in the camera field of view in real time with a deep-learning space target detection model, realizing a photometry algorithm and an image sharpness measurement algorithm oriented to the space target.
2) Aiming at the problem that the luminance-histogram method needs a fixed threshold while space-scene illumination changes greatly, so that a fixed threshold may be inapplicable on orbit, the application provides a foreground-background segmentation algorithm based on the spatial-scene average threshold. Exploiting the characteristic that most of a space scene is a dark region, the algorithm extracts the space target foreground efficiently and rapidly under various exposure conditions. When the space target detection algorithm fails, the extracted foreground region is used as the photometry and image sharpness measurement region.
3) Aiming at the problems of the iterative camera exposure parameter adjustment method, namely that the dynamic-step lookup table is discontinuous and inefficient at severe underexposure and overexposure, an adaptive step adjustment algorithm is designed; its continuous dynamic step for any photometric value improves the convergence speed of the automatic exposure algorithm.
4) Aiming at the problems that the laser radar used for active focusing has a limited working range and that passive focusing is slow, a hybrid focusing algorithm combining active and passive focusing is designed, and a sharpness evaluation function suitable for the space scene is selected through experiments, realizing a stable and rapid camera auto-focusing technique.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading the following detailed description of non-limiting embodiments, made with reference to the drawings, in which:
FIG. 1 is a schematic flow chart of an on-orbit intelligent exposure and focusing method of an aerospace camera according to an embodiment of the application;
FIG. 2 is a second flow chart of an on-orbit intelligent exposure and focusing method for an aerospace camera according to an embodiment of the application;
FIG. 3 is an example of a preset spatial target data set according to an embodiment of the present application;
FIG. 4 is a graph showing the results of a spatial target region obtained by a foreground-background segmentation algorithm based on a spatial scene average threshold under different exposure conditions;
FIG. 5 is a graph showing the image and photometric values obtained at different exposure times;
fig. 6 is a schematic structural diagram of an on-orbit intelligent exposure and focusing system of an aerospace camera according to an embodiment of the application.
Detailed Description
As shown in fig. 1 and fig. 2, the on-orbit intelligent exposure and focusing method for the space camera according to the embodiment of the application comprises the following steps:
s1, acquiring data of a current frame image, and specifically:
acquiring a current frame image shot by a space camera and a laser radar distance corresponding to the current frame image;
s2, determining exposure time of the next frame of image, specifically:
extracting a target detection frame and its confidence from the image shot by the space camera using the deep-learning space target detection model, and judging whether the confidence of the target detection frame is larger than a preset confidence threshold, to obtain a first judgment result; when the first judgment result is yes, determining the region selected by the target detection frame in the current frame image as the space target region; when the first judgment result is no, extracting the space target region from the current frame image using the foreground-background segmentation algorithm based on the spatial-scene average threshold;
The preset confidence threshold may be, for example, 0.8 or 0.85, and can also be set according to the actual situation. Photometry is performed on the space target region to obtain a photometric result, the photometric result is input into the adaptive step adjustment algorithm, and the exposure time for the space camera to shoot the next frame of image is calculated.
The photometry of the space target region proceeds as follows: the average brightness of the pixels of the space target region is calculated as the photometric result.
S3, determining a focusing gear of the next frame of image, and specifically:
judging whether the laser radar distance corresponding to the current frame image is within a preset range, to obtain a second judgment result; when the second judgment result is yes, using active focusing to obtain the focusing gear for the space camera to shoot the next frame of image, and when the second judgment result is no, using passive focusing to obtain the focusing gear for the space camera to shoot the next frame of image;
the preset range can be determined according to the maximum detection range of the laser radar.
S4, controlling the space camera to shoot the next frame of image, and specifically:
controlling the space camera to shoot the next frame of image according to the exposure time and the focusing gear obtained for the next frame of image.
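Taken together, one possible shape of the S1-S4 loop is sketched below. Everything here is an illustrative assumption: the camera and lidar objects, and the helpers detect_space_target, segment_space_target, photometry, next_exposure, focus_from_distance, traverse_focus and bounding_box are hypothetical names (several of them are sketched in the paragraphs that follow):

```python
import numpy as np

def on_orbit_step(camera, lidar, conf_threshold: float = 0.8):
    """One S1-S4 iteration: locate the space target, meter it, choose the
    focusing mode from the lidar distance, then shoot the next frame."""
    img = camera.capture()                         # S1: current frame image
    distance = lidar.read()                        # S1: corresponding lidar distance

    box, conf = detect_space_target(img)           # S2: deep-learning detection
    if box is not None and conf > conf_threshold:  # first judgment result: yes
        x0, y0, x1, y1 = box
        mask = np.zeros(img.shape, dtype=bool)
        mask[y0:y1, x0:x1] = True                  # region selected by the box
    else:                                          # first judgment result: no
        mask = segment_space_target(img)           # mean-threshold segmentation

    v_t = photometry(img, mask)                    # photometric result
    camera.exposure = next_exposure(camera.exposure, v_t)  # adaptive step

    if lidar.min_range <= distance <= lidar.max_range:     # second judgment
        camera.focus_gear = focus_from_distance(distance)  # active focusing
    else:
        traverse_focus(camera, bounding_box(mask))         # passive focusing

    return camera.capture()                        # S4: shoot the next frame
```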
Optionally, in the above technical solution, the obtaining process of the deep learning spatial target detection model includes:
based on an ImageNet data set and a Satellite Dataset general space target data set proposed by Adelaide university, the deep learning model is pre-trained to obtain a pre-training model, and based on the preset space target data set, the pre-training model is trained to obtain a deep learning space target detection model for extracting a target detection frame. Specifically:
the deep learning model can be yolov5s, other neural networks can be selected as the deep learning model according to actual conditions, training of the model for extracting the target detection frame is performed, and the yolov5s has the advantages of being high in speed and accuracy and suitable for on-orbit scenes.
The training strategy is as follows: the method adopts the universal space target data set of Satellite Dataset proposed by ImageNet data set and Adelaide university to pretrain, and adopts the migration learning scheme of pretraining the preset space target data set to conduct secondary training, so that the trained model, namely the space target detection model for deep learning, has higher robustness on the identification of specific space targets and has certain identification capacity on other space targets.
The preset space target data set consists of two parts, wherein one part is an image acquired by using a structural member, and the two parts comprise a laboratory turntable rotating part and an outdoor propelling part; the part is an image rendered by sticking partial textures on the surface of the spacecraft by using a simulation engine, comprises a propelling part and a winding part, and simulates various illumination conditions. The final co-annotation data 1794 is shown in FIG. 3 for an example dataset.
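As an illustration of how such a detector could be invoked at inference time, here is a minimal sketch using the public ultralytics/yolov5 hub interface; the use of the pretrained checkpoint and the best-box selection are assumptions, since the patent does not specify deployment code:

```python
import torch

# Load a YOLOv5s model; in practice, custom weights trained on the space
# target data set would be loaded instead of the pretrained checkpoint.
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True)

def detect_space_target(image):
    """Return the highest-confidence detection box and its confidence,
    or (None, 0.0) when nothing is detected."""
    results = model(image)
    det = results.xyxy[0]          # tensor of [x1, y1, x2, y2, conf, class]
    if det.shape[0] == 0:
        return None, 0.0
    best = det[det[:, 4].argmax()]
    x1, y1, x2, y2, conf = best[:5].tolist()
    return (int(x1), int(y1), int(x2), int(y2)), conf
```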
The foreground-background segmentation algorithm based on the spatial-scene average threshold is explained as follows:
The space target region must still be extracted when no target detection frame is detected or the confidence of the detected frame is low, i.e. less than the preset confidence threshold. A simple idea is to apply a fixed threshold to pixel brightness: pixels below the threshold are space background, and pixels above it belong to the space target region. However, this approach has three problems:
1) The threshold is difficult to determine from ground experience;
2) When the whole image is overexposed, the pixel brightness of the space background may exceed the fixed threshold;
3) When the whole image is underexposed, the pixel brightness of the space target region may fall below the threshold.
Therefore, when no target detection frame is detected or the confidence of the target detection frame is below the preset confidence threshold, the foreground-background segmentation algorithm based on the spatial-scene average threshold proposed by the application is used.
The main idea of the foreground-background segmentation algorithm based on the spatial-scene average threshold is as follows:
Under any illumination and exposure condition, the space target region is brighter than the space background, so the threshold is chosen as the average of the overall image brightness, and pixels above the threshold are taken as the space target region. Results of this method are shown in FIG. 4; the space target region is extracted well under both overexposure and underexposure. A minimal sketch of this segmentation and of the region photometry used in S2 is given below.
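The sketch assumes an 8-bit grayscale image stored as a numpy array; the function names are illustrative:

```python
import numpy as np

def segment_space_target(img: np.ndarray) -> np.ndarray:
    """Foreground-background segmentation with the spatial-scene average
    threshold: pixels brighter than the image mean form the target region."""
    return img > img.mean()

def photometry(img: np.ndarray, mask: np.ndarray) -> float:
    """Photometric value: mean brightness of the space target region pixels."""
    if not mask.any():
        return float(img.mean())   # degenerate case: fall back to global mean
    return float(img[mask].mean())
```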
Optionally, in the above technical solution, the process of calculating the exposure time for the space camera to shoot the next frame of image using the adaptive step adjustment algorithm includes:
calculating the exposure time of the next frame of image using a first formula: s_{t+1} = s_t · m / v_t, wherein s_{t+1} represents the exposure time of the next frame of image, s_t represents the exposure time of the current frame image, m represents the median value of the correct exposure interval, and v_t represents the photometric result of the current frame image. The average brightness of the pixels of the space target region is calculated as the photometric result. From the photometric result of the current frame image, the exposure time of the next frame image is calculated so that the exposure of the next frame image is closer to normal exposure.
FIG. 5 shows the images obtained at different exposure times (left) and the corresponding photometric curves of the different photometry modes (right) under unchanged illumination conditions, where bbox denotes the photometric result of the space target region detected by the deep-learning space target detection model, pixel denotes the photometric result of the space target region from the foreground-background segmentation algorithm based on the spatial-scene average threshold, and the photometric value increases toward the right along the horizontal axis.
As can be seen from the figure, the photometric curve is substantially monotonically increasing. In actual use, the correct exposure interval of the space target region detected by the deep-learning space target detection model is set to [75, 95], where m is 85, and the correct exposure interval of the space target region from the foreground-background segmentation algorithm based on the spatial-scene average threshold is set to [90, 110], where m is 100. If the photometric value of the image is within the correct exposure interval, the exposure time of the next frame equals that of the current frame, and the automatic-exposure-success variable is set to true; if the photometric value of the current frame is below the correct exposure interval, the exposure time of the next frame is increased, and otherwise decreased; the further the photometric result deviates from the correct exposure interval, the larger the adjustment step. The specific calculation is the first formula.
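A sketch of the adaptive step update, assuming the multiplicative first formula reconstructed above and the [75, 95] correct exposure interval given for the detection-based metering mode:

```python
def next_exposure(s_t: float, v_t: float, lo: float = 75.0, hi: float = 95.0) -> float:
    """Adaptive step adjustment: keep the exposure time when the photometric
    value lies in the correct exposure interval [lo, hi]; otherwise scale by
    m / v_t, so the step grows continuously with the deviation from m."""
    m = (lo + hi) / 2.0                # median of the correct exposure interval
    if lo <= v_t <= hi:
        return s_t                     # correctly exposed: keep exposure time
    return s_t * m / max(v_t, 1e-6)    # guard against a zero photometric value
```

With m = 85, a frame metered at v_t = 20 roughly quadruples the exposure time in a single update, instead of stepping through a bounded lookup table.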
The specific process of active focusing is as follows: the correct focusing gear is obtained directly by looking up the table relating the laser radar distance corresponding to the current frame image to the focusing gear of the space camera.
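A sketch of the table lookup follows; the distance breakpoints and gear values are illustrative, since the actual distance-to-gear table would be calibrated for the camera and is not given in the patent:

```python
import bisect

# Illustrative calibration: distance upper bounds (meters) -> focusing gear
DISTANCE_BREAKS = [15.0, 30.0, 60.0, 120.0, 200.0]
FOCUS_GEARS     = [0,    1,    2,    3,     4]

def focus_from_distance(distance_m: float) -> int:
    """Active focusing: map a lidar distance to a focusing gear by lookup."""
    idx = bisect.bisect_left(DISTANCE_BREAKS, distance_m)
    return FOCUS_GEARS[min(idx, len(FOCUS_GEARS) - 1)]
```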
The specific process of passive focusing is as follows:
To ensure the reliability of the algorithm, passive focusing uses the traversal method to search all focusing gears of the space camera, evaluates the sharpness of the space target region image with a sharpness evaluation function, and selects the gear giving the sharpest image as the focusing gear for the space camera to shoot the next frame of image.
The Brenner index is selected as the image sharpness evaluation function because, in experimental tests in a ground-built simulated space environment, the Brenner index proved more robust than the Tenengrad, SMD and energy indices at every distance from 10 m to 200 m.
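A sketch of the traversal search, reusing the brenner function sketched in the background section; the camera interface and its num_focus_gears attribute are hypothetical:

```python
def traverse_focus(camera, box) -> int:
    """Passive focusing: try every focusing gear, score the space target
    region with the Brenner index, and keep the gear with the sharpest image."""
    x0, y0, x1, y1 = box
    best_gear, best_score = 0, float("-inf")
    for gear in range(camera.num_focus_gears):   # traversal over all gears
        camera.focus_gear = gear
        frame = camera.capture()
        score = brenner(frame[y0:y1, x0:x1])     # sharpness of target region
        if score > best_score:
            best_gear, best_score = gear, score
    camera.focus_gear = best_gear
    return best_gear
```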
In the above embodiments, although the steps are numbered S1, S2, etc., this numbering only reflects a specific embodiment of the application; those skilled in the art may adjust the execution order of S1, S2, etc. according to the actual situation, which also falls within the scope of the application, and it is understood that some embodiments may include some or all of the above embodiments.
As shown in fig. 6, an on-orbit intelligent exposure and focusing system 200 for a space camera according to an embodiment of the application includes a first acquisition module 210, an exposure time calculation module 220, a focusing determination module 230 and a shooting control module 240;
the first acquisition module 210 is configured to: acquiring a current frame image shot by a space camera and a laser radar distance corresponding to the current frame image;
the exposure time calculation module 220 is configured to: extract a target detection frame and its confidence from the image shot by the space camera using the deep-learning space target detection model, and judge whether the confidence of the target detection frame is larger than a preset confidence threshold, to obtain a first judgment result; when the first judgment result is yes, determine the region selected by the target detection frame in the current frame image as the space target region; when the first judgment result is no, extract the space target region from the current frame image using the foreground-background segmentation algorithm based on the spatial-scene average threshold;
perform photometry on the space target region to obtain a photometric result, input the photometric result into the adaptive step adjustment algorithm, and calculate the exposure time for the space camera to shoot the next frame of image;
the focusing determination module 230 is configured to: judge whether the laser radar distance corresponding to the current frame image is within a preset range, to obtain a second judgment result; when the second judgment result is yes, use active focusing to obtain the focusing gear for the space camera to shoot the next frame of image, and when the second judgment result is no, use passive focusing to obtain the focusing gear for the space camera to shoot the next frame of image;
the shooting control module 240 is configured to: control the space camera to shoot the next frame of image according to the exposure time and the focusing gear obtained for the next frame of image.
Optionally, in the above technical solution, the device further includes a second acquisition module, where the second acquisition module is configured to:
based on an ImageNet data set and a Satellite Dataset general space target data set proposed by Adelaide university, the deep learning model is pre-trained to obtain a pre-training model, and based on the preset space target data set, the pre-training model is trained to obtain a deep learning space target detection model for extracting a target detection frame.
Optionally, in the above technical solution, the process of calculating, by the exposure time calculating module 220, the exposure time of the space camera to capture the next frame of image by using the adaptive step adjustment algorithm includes:
calculating the exposure time of the next frame of image using a first formula: s_{t+1} = s_t · m / v_t, wherein s_{t+1} represents the exposure time of the next frame of image, s_t represents the exposure time of the current frame image, m represents the median value of the correct exposure interval, and v_t represents the photometric result of the current frame image.
Optionally, in the above technical solution, the deep learning model is yolov5s.
The steps by which each parameter and each unit module of the on-orbit intelligent exposure and focusing system 200 for a space camera implement their corresponding functions may refer to the parameters and steps of the embodiments of the on-orbit intelligent exposure and focusing method for a space camera described above, and are not repeated here.
The storage medium of the embodiment of the application stores instructions which, when read by a computer, cause the computer to execute any one of the above on-orbit intelligent exposure and focusing methods for a space camera.
An electronic device according to an embodiment of the present application includes a processor and the above-described storage medium, where the processor executes the instructions in the storage medium. The electronic device may be a TX2, an FPGA board, or another edge computing device.
Those skilled in the art will appreciate that the present application may be implemented as a system, method, or computer program product.
Accordingly, the present disclosure may be embodied in the following forms: entirely hardware, entirely software (including firmware, resident software, micro-code, etc.), or a combination of hardware and software, referred to herein generally as a "circuit", "module" or "system". Furthermore, in some embodiments, the application may also be embodied in the form of a computer program product in one or more computer-readable media containing computer-readable program code.
Any combination of one or more computer readable media may be employed. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. The computer readable storage medium can be, for example, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination thereof. More specific examples (a non-exhaustive list) of the computer-readable storage medium include an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination thereof. In this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
While embodiments of the present application have been shown and described above, it will be understood that the above embodiments are illustrative and not to be construed as limiting the application, and that variations, modifications, alternatives and variations may be made to the above embodiments by one of ordinary skill in the art within the scope of the application.

Claims (10)

1. An on-orbit intelligent exposure and focusing method for a space camera is characterized by comprising the following steps:
acquiring a current frame image shot by a space camera and a laser radar distance corresponding to the current frame image;
extracting a target detection frame and its confidence from the image shot by the space camera using a deep-learning space target detection model, and judging whether the confidence of the target detection frame is larger than a preset confidence threshold, to obtain a first judgment result; when the first judgment result is yes, determining the region selected by the target detection frame in the current frame image as the space target region; when the first judgment result is no, extracting the space target region from the current frame image using a foreground-background segmentation algorithm based on the spatial-scene average threshold;
performing photometry on the space target region to obtain a photometric result, inputting the photometric result into an adaptive step adjustment algorithm, and calculating the exposure time for the space camera to shoot the next frame of image;
judging whether the laser radar distance corresponding to the current frame image is within a preset range, to obtain a second judgment result; when the second judgment result is yes, using active focusing to obtain the focusing gear for the space camera to shoot the next frame of image, and when the second judgment result is no, using passive focusing to obtain the focusing gear for the space camera to shoot the next frame of image;
and controlling the space camera to shoot the next frame of image according to the exposure time and the focusing gear of the next frame of image.
2. The on-orbit intelligent exposure and focusing method for a space camera according to claim 1, wherein the process of obtaining the deep-learning space target detection model comprises:
based on the ImageNet data set and the Satellite Dataset general space target data set, pre-training the deep learning model to obtain a pre-trained model, and, based on a preset space target data set, training the pre-trained model to obtain the deep-learning space target detection model for extracting the target detection frame.
3. The on-orbit intelligent exposure and focusing method for a space camera according to claim 1, wherein the process of calculating the exposure time for the space camera to shoot the next frame of image using the adaptive step adjustment algorithm comprises:
calculating the exposure time of the next frame of image using a first formula: s_{t+1} = s_t · m / v_t, wherein s_{t+1} represents the exposure time of the next frame of image, s_t represents the exposure time of the current frame image, m represents the median value of the correct exposure interval, and v_t represents the photometric result of the current frame image.
4. The on-orbit intelligent exposure and focusing method for a space camera according to any one of claims 1 to 3, wherein the deep learning model is yolov5s.
5. An on-orbit intelligent exposure and focusing system of a space camera is characterized by comprising a first acquisition module, an exposure time calculation module, a focusing determination module and a shooting control module;
the first acquisition module is used for: acquiring a current frame image shot by a space camera and a laser radar distance corresponding to the current frame image;
the exposure time calculation module is used for: extracting a target detection frame and its confidence from the image shot by the space camera using a deep-learning space target detection model, and judging whether the confidence of the target detection frame is larger than a preset confidence threshold, to obtain a first judgment result; when the first judgment result is yes, determining the region selected by the target detection frame in the current frame image as the space target region; when the first judgment result is no, extracting the space target region from the current frame image using a foreground-background segmentation algorithm based on the spatial-scene average threshold;
performing photometry on the space target region to obtain a photometric result, inputting the photometric result into an adaptive step adjustment algorithm, and calculating the exposure time for the space camera to shoot the next frame of image;
the focusing determination module is used for: judging whether the laser radar distance corresponding to the current frame image is within a preset range, to obtain a second judgment result; when the second judgment result is yes, using active focusing to obtain the focusing gear for the space camera to shoot the next frame of image, and when the second judgment result is no, using passive focusing to obtain the focusing gear for the space camera to shoot the next frame of image;
the shooting control module is used for: controlling the space camera to shoot the next frame of image according to the exposure time and the focusing gear of the next frame of image.
6. The on-orbit intelligent exposure and focusing system for a space camera according to claim 5, further comprising a second acquisition module which is used for:
based on the ImageNet data set and the Satellite Dataset general space target data set, pre-training the deep learning model to obtain a pre-trained model, and, based on a preset space target data set, training the pre-trained model to obtain the deep-learning space target detection model for extracting the target detection frame.
7. The on-orbit intelligent exposure and focusing system for a space camera according to claim 5, wherein the process by which the exposure time calculation module calculates the exposure time for the space camera to shoot the next frame of image using the adaptive step adjustment algorithm comprises:
calculating the exposure time of the next frame of image using a first formula: s_{t+1} = s_t · m / v_t, wherein s_{t+1} represents the exposure time of the next frame of image, s_t represents the exposure time of the current frame image, m represents the median value of the correct exposure interval, and v_t represents the photometric result of the current frame image.
8. The on-orbit intelligent exposure and focusing system for a space camera according to any one of claims 5 to 7, wherein the deep learning model is yolov5s.
9. A storage medium having instructions stored therein which, when read by a computer, cause the computer to perform the on-orbit intelligent exposure and focusing method for a space camera according to any one of claims 1 to 4.
10. An electronic device comprising a processor and the storage medium of claim 9, the processor executing instructions in the storage medium.
CN202310294801.2A 2023-03-23 2023-03-23 On-orbit intelligent exposure and focusing method and system for space camera and electronic equipment Active CN116389901B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310294801.2A CN116389901B (en) 2023-03-23 2023-03-23 On-orbit intelligent exposure and focusing method and system for space camera and electronic equipment


Publications (2)

Publication Number Publication Date
CN116389901A CN116389901A (en) 2023-07-04
CN116389901B true CN116389901B (en) 2023-11-21

Family

ID=86964989

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310294801.2A Active CN116389901B (en) 2023-03-23 2023-03-23 On-orbit intelligent exposure and focusing method and system for space camera and electronic equipment

Country Status (1)

Country Link
CN (1) CN116389901B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116709035B (en) * 2023-08-07 2023-11-21 深圳市镭神智能系统有限公司 Exposure adjustment method and device for image frames and computer storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7298412B2 (en) * 2001-09-18 2007-11-20 Ricoh Company, Limited Image pickup device, automatic focusing method, automatic exposure method, electronic flash control method and computer program

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104301619A (en) * 2014-02-24 2015-01-21 凯迈(洛阳)测控有限公司 Fast camera exposure time automatic adjusting method and device
CN105635565A (en) * 2015-12-21 2016-06-01 华为技术有限公司 Shooting method and equipment
CN105872398A (en) * 2016-04-19 2016-08-17 大连海事大学 Space camera self-adaption exposure method
CN111225160A (en) * 2020-01-17 2020-06-02 中国科学院西安光学精密机械研究所 Automatic exposure control method based on image multi-threshold control
CN113347369A (en) * 2021-06-01 2021-09-03 中国科学院光电技术研究所 Deep space exploration camera exposure adjusting method, adjusting system and adjusting device thereof
CN115526790A (en) * 2022-08-26 2022-12-27 中国人民解放军军事科学院国防科技创新研究院 Spacecraft wreckage search and rescue identification tracking method and system based on neural network
CN115550558A (en) * 2022-09-29 2022-12-30 影石创新科技股份有限公司 Automatic exposure method and device for shooting equipment, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN116389901A (en) 2023-07-04


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant