CN110207671B - Space-based intelligent imaging system - Google Patents

Space-based intelligent imaging system

Info

Publication number
CN110207671B
Authority
CN
China
Prior art keywords
target
image
module
detection
camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910566120.0A
Other languages
Chinese (zh)
Other versions
CN110207671A (en)
Inventor
赵岩
赵军锁
张衡
夏玉立
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Institute of Software of CAS
Original Assignee
Institute of Software of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Software of CAS
Publication of CN110207671A
Application granted
Publication of CN110207671B


Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C11/00 Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/40 Image enhancement or restoration by the use of histogram techniques
    • G06T5/70
    • G06T5/73

Abstract

The invention provides a space-based intelligent imaging system, relating to the technical field of satellite imaging, comprising an image acquisition unit, a scene perception control unit and an algorithm unit that are connected to one another. The image acquisition unit computes camera configuration information from multiple factors according to the mission plan and acquires multi-frame image data captured by the camera; the scene perception control unit determines the background type matching the mission-planning type and retrieves target images with that background type; and the algorithm unit preprocesses the target images and performs target detection and identification on the preprocessed images to obtain detection and identification results. Embodiments of the invention improve the efficiency of downlinking detection and identification results and reduce both on-board data storage pressure and the data transmission pressure on the satellite-to-ground link.

Description

Space-based intelligent imaging system
Technical Field
The invention relates to the technical field of satellite imaging, in particular to a space-based intelligent imaging system.
Background
With the development of space payload technology, the volume of detection data acquired by high-resolution detection instruments has grown enormously, while on-board data processing capability and satellite-to-ground communication capability have lagged behind. In addition, payload data contains a great deal of invalid content, wasting space-based storage, management and transmission resources. Limited by on-board computing capacity, existing space-based imaging methods cannot perform effective computation on the satellite; they perform only simple computation, or none at all, and transmit the data to the ground for processing.
Because effective computation cannot be performed on the satellite, the large amount of worthless data acquired by the detector payload must be stored on board, placing high demands on payload storage capacity and data management capacity. Tasks requiring rapid response cannot be satisfied: for fire detection, or battlefield situation awareness in a military context, an immediate response must be made on the basis of the image processing result, yet satellite-to-ground data transmission consumes a great deal of time and undermines the effectiveness of the task. Even for tasks with low real-time requirements, such as environmental change monitoring and crop monitoring, the massive on-board data places enormous pressure on the satellite-to-ground transmission link and raises the requirements for ground data storage.
Disclosure of Invention
In view of the above, the present invention provides a space-based intelligent imaging system to alleviate the following technical problems in the prior art: on-board data processing and satellite-to-ground communication capabilities lag behind payload capabilities; payload data contains much invalid content, wasting space-based storage, management and transmission resources; limited by on-board computing capacity, existing space-based imaging methods cannot perform effective computation on the satellite, performing only simple computation or none at all and transmitting the data to the ground for processing; and the large amount of worthless detector data stored on board places high demands on payload storage capacity and data management capacity.
In a first aspect, an embodiment of the present invention provides a space-based intelligent imaging system comprising an image acquisition unit, a scene perception control unit and an algorithm unit that are connected to one another;
the image acquisition unit is used for calculating the information of various factors according to the task plan to obtain the configuration information of the camera and acquiring multi-frame image data acquired by the camera;
the scene perception control unit is used for determining a background type matched with a task planning type and searching a target image with the background type;
and the algorithm unit is used for preprocessing the target image, and carrying out target detection and identification on the preprocessed target image to obtain a detection and identification result.
With reference to the first aspect, an embodiment of the present invention provides a first possible implementation manner of the first aspect, where the image acquisition unit includes a calculation module, a configuration module, a judgment module and a receiving module;
the computing module is used for computing the information of various factors according to the task planning to obtain the configuration information of the camera;
the configuration module is used for configuring the camera according to the camera configuration information;
the judging module is used for judging whether the camera configuration is successful or not;
the receiving module is used for acquiring multi-frame image data acquired by the camera according to a preset time sequence under the condition that the camera is successfully configured.
With reference to the first aspect, an embodiment of the present invention provides a second possible implementation manner of the first aspect, where the scene perception control unit includes an analysis module, a search module and an extraction module;
the analysis module is used for analyzing the background type of the target image and classifying the target image according to the background type;
the searching module is used for searching a background type corresponding to the task planning type;
the extraction module is used for extracting the same type of target images with the background type characteristics corresponding to the mission planning type.
With reference to the first aspect, an embodiment of the present invention provides a third possible implementation manner of the first aspect, where the algorithm unit includes an image preprocessing module, a target detection and identification module and a result downloading module;
the image preprocessing module is used for removing dryness, removing thin clouds and correcting and enhancing the image of the target image to obtain an enhanced target image;
the target detection and identification module is used for carrying out target detection on the enhanced target image, determining target characteristic information, and tracking an interested target in the enhanced target image based on the target characteristic information to obtain a detection and identification result;
and the result downloading module is used for downloading the detection recognition result and the image of the interested target.
With reference to the first aspect, an embodiment of the present invention provides a fourth possible implementation manner of the first aspect, where the image preprocessing module includes a denoising sub-module, a thin cloud removal sub-module and a correction and enhancement sub-module;
the denoising sub-module is used for removing the stripe noise and random noise in the target image to obtain a denoised target image;
the thin cloud removal sub-module is used for enhancing the contrast of the denoised target image, which yields a distorted target image;
and the correction and enhancement sub-module is used for restoring the distorted target image to obtain the enhanced target image.
With reference to the first aspect, an embodiment of the present invention provides a fifth possible implementation manner of the first aspect, where the target detection and identification module includes a target extraction submodule, an information fusion submodule, a target detection submodule and a target identification submodule;
the target extraction submodule is used for carrying out target detection on the enhanced target image, determining target characteristic information, and extracting an image of the target of interest from the enhanced target image according to the target characteristic information;
the information fusion submodule is used for acquiring two source sequence images and fusing them to obtain a fused sequence image;
the target detection submodule is used for detecting the target of interest in the fused sequence image;
and the target identification submodule is used for identifying and tracking the target of interest.
With reference to the first aspect, an embodiment of the present invention provides a sixth possible implementation manner of the first aspect, where the fusion of the two source sequence images by the information fusion sub-module comprises: preprocessing of the sequence images, moving object detection, image multi-scale transformation, region-based image fusion and the corresponding multi-scale inverse transformation.
With reference to the first aspect, an embodiment of the present invention provides a seventh possible implementation manner of the first aspect, where the factor information includes: the current orbit, attitude, camera installation matrix, spatial position of the observed point, solar altitude angle, observation phase angle and task requirements.
With reference to the first aspect, an embodiment of the present invention provides an eighth possible implementation manner of the first aspect, where the camera configuration information includes: camera turn-on time, exposure time, gain and frame rate.
The embodiment of the invention has the following beneficial effects:
the embodiment of the invention provides a space-based intelligent imaging system, which comprises an image acquisition unit, a scene perception control unit and an algorithm unit which are connected with each other; the image acquisition unit is used for calculating the information of various factors according to the task plan to obtain the configuration information of the camera and acquiring multi-frame image data acquired by the camera; the scene perception control unit is used for determining a background type matched with a task planning type and searching a target image with the background type; and the algorithm unit is used for preprocessing the target image, and carrying out target detection and identification on the preprocessed target image to obtain a detection and identification result. According to the embodiment of the invention, the image information can be acquired by the image acquisition unit, and the effective rate of downloading the detection identification result data is improved by combining the acquired multi-frame image data, the pressure of on-board data storage and the pressure of data transmission between the satellite-ground links are reduced, so that the timeliness of the satellite for executing tasks is improved, and the quick response of space-based observation is possible.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
Fig. 1 is a schematic structural diagram of a space-based intelligent imaging system according to an embodiment of the present invention;
FIG. 2 is a schematic structural diagram of an image preprocessing module according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of an image detection and identification module according to an embodiment of the present invention.
Reference numerals:
10-image acquisition unit; 11-calculation module; 12-configuration module; 13-judgment module; 14-receiving module; 20-scene perception control unit; 21-analysis module; 22-search module; 23-extraction module; 30-algorithm unit; 31-image preprocessing module; 32-target detection and identification module; 33-result downloading module; 311-denoising submodule; 312-thin cloud removal submodule; 313-correction and enhancement submodule; 321-target extraction submodule; 322-information fusion submodule; 323-target detection submodule; 324-target identification submodule.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
At present, the development of on-board data processing capability and satellite-to-ground communication capability lags behind payload capabilities. In addition, payload data contains a great deal of invalid content, wasting space-based storage, management and transmission resources. Limited by on-board computing capacity, existing space-based imaging methods cannot perform effective computation on the satellite; they perform only simple computation, or none at all, and transmit the data to the ground for processing. The large amount of worthless data acquired by the detector payload must therefore be stored on board, placing high demands on payload storage capacity and data management capacity.
For the convenience of understanding the embodiment, a space-based intelligent imaging system disclosed by the embodiment of the invention is first described in detail.
The first embodiment is as follows:
referring to fig. 1, an embodiment of the present invention provides a space-based intelligent imaging system, including an image acquisition unit 10, a scene sensing control unit 20, and an algorithm unit 30, which are connected to each other;
the image acquisition unit 10 is configured to calculate information of multiple factors according to a mission plan to obtain camera configuration information, and acquire multi-frame image data acquired by a camera;
the scene perception control unit 20 is used for determining a background type matched with the mission planning type and searching a target image with the background type;
and the algorithm unit 30 is configured to pre-process the target image, and perform target detection and identification on the pre-processed target image to obtain a detection and identification result.
In the embodiment of the present invention, the factor information includes, but is not limited to: the current orbit, attitude, camera installation matrix, spatial position of the observed point, solar altitude angle, observation phase angle and task requirements. The camera configuration information may include, but is not limited to, the power-on time, exposure time, gain and frame rate of the camera. After the camera is powered on, it is configured according to the computed configuration information and begins acquiring image data on a preset schedule. The camera's frame rate and image size are limited by the network transmission speed between the camera and the space-based supercomputing platform and by the platform's hard disk write speed: with these speeds fixed, a higher frame rate requires a correspondingly smaller image, while a lower frame rate allows the image size to be increased. The space-based supercomputing platform is a structurally flexible, easily extended reconfigurable computing unit running on the space-based satellite platform; its computing capacity can be raised by adding computing nodes, and it provides a sound implementation environment and operating platform for the image processing algorithm modules in the algorithm unit. By processing the detector payload's massive image data in a targeted way, it reduces satellite-to-ground link data transmission pressure and makes rapid response in space-based observation possible. The space-based intelligent imaging system running on the platform implements invalid data removal, denoising, thin cloud removal, deblurring, de-jittering, contrast enhancement, low-illumination enhancement, super-resolution reconstruction, three-dimensional reconstruction, geometric correction, color cast correction and image stitching, and is applied to target detection and identification, environmental change monitoring, fire detection, earthquake monitoring, Arctic monitoring, crop monitoring, bridge monitoring, water resource monitoring, road monitoring and the like.
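As a rough illustration of the frame-rate versus image-size trade-off just described, the sketch below computes the largest frame that the slower of the two channels can sustain; the function, the 2-byte pixel depth and the example figures are assumptions for illustration, not values from the patent:

    def max_pixels_per_frame(link_bytes_per_s: float,
                             disk_bytes_per_s: float,
                             frame_rate_hz: float,
                             bytes_per_pixel: float = 2.0) -> int:
        """Largest frame, in pixels, that the slower of the camera-to-platform
        link and the platform's disk can sustain at the given frame rate."""
        budget_per_frame = min(link_bytes_per_s, disk_bytes_per_s) / frame_rate_hz
        return int(budget_per_frame / bytes_per_pixel)

    # Example: a 1 Gbit/s link (125 MB/s), a 200 MB/s disk and 10 frames/s.
    # The link is the bottleneck: 6,250,000 px, roughly a 2500 x 2500 frame.
    print(max_pixels_per_frame(125e6, 200e6, 10.0))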
The background types include, but are not limited to, a ground background, a deep space background and an adjacent background (an Earth-limb view containing both deep space and ground), and the mission-planning types correspond to the background types one to one. Illustratively, if the mission-planning type is Earth observation, the background type is a ground background; if it is sky observation, the background type is a deep space background; if it is limb observation, the image is chiefly adjacent background. The scene perception control unit must determine the background type according to the mission-planning type. A ground background is generally complex, containing roads, cities, forests, deserts, water surfaces and the like, so the global variance of a ground-background target image is large and the local variances differ in size; for example, the local variance is small over highways, deserts and water surfaces and large over cities and forests. A deep space background is generally the deep space of the universe and relatively uniform, so the global variance of its target image is small and the local variance stable. In an adjacent background, part of the image is deep space and part is ground, so the global variance is large but the local variances are small.
Exploiting these differences in global variance across background types, a background modeling method establishes a model for each background type; the background model of the input image is matched against the established models to complete scene classification, and the corresponding image processing algorithm modules in the algorithm unit are loaded. Target images of each background type are then preprocessed with the method appropriate to that type. Illustratively, for a deep-space-background image, background suppression is performed with a stray-light suppression algorithm based on local signal-to-noise ratio, the moving target's track is extracted, and the motion, shape, optical and other characteristics of the target point in the visible-light image are combined to judge whether it is a target of interest and to track it autonomously. For a ground-background image, thin cloud removal, denoising and geometric correction are performed first; adjacent frames are then registered using the turntable rotation angle between them, the fixed background is removed, the moving target's track is extracted, and the motion, shape, optical and other characteristics of the target point are again combined to judge whether it is a target of interest and to track it autonomously.
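A minimal sketch of these variance heuristics, assuming a single-channel image and illustrative, uncalibrated thresholds (the patent describes the heuristics but not a concrete classifier):

    import numpy as np

    GLOBAL_VAR_THRESH = 50.0  # illustrative, uncalibrated thresholds
    LOCAL_VAR_THRESH = 20.0

    def classify_background(img: np.ndarray, block: int = 32) -> str:
        """Coarse scene classification: deep space -> small global variance;
        adjacent (limb) -> large global variance but mostly small local
        variances; ground -> large global variance with local variances of
        widely differing size."""
        local_vars = np.array([
            img[i:i + block, j:j + block].var()
            for i in range(0, img.shape[0] - block + 1, block)
            for j in range(0, img.shape[1] - block + 1, block)
        ])
        if img.var() < GLOBAL_VAR_THRESH:
            return "deep-space"
        if np.median(local_vars) < LOCAL_VAR_THRESH:
            return "adjacent"
        return "ground"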
The space-based supercomputing platform can simultaneously receive massive heterogeneous detection data from visible-light, infrared, multispectral, hyperspectral and other detector payloads. After receiving camera images, the scene perception control unit 20 performs scene perception with an adaptive scene discrimination algorithm and, according to the result, controls which algorithms are loaded for subsequent processing, which mainly comprises image denoising, image correction and enhancement, target detection and identification, invalid data removal, and downlink of detection and identification results.
Further, the image acquisition unit 10 includes a calculation module 11, a configuration module 12, a judgment module 13 and a receiving module 14;
the calculation module 11 is configured to calculate information of multiple factors according to the mission plan to obtain camera configuration information;
a configuration module 12, configured to configure the camera according to the camera configuration information;
a judging module 13, configured to judge whether the camera configuration is successful;
and the receiving module 14 is configured to, under the condition that the camera configuration is successful, obtain multi-frame image data acquired by the camera according to a preset time sequence.
Further, the scene perception control unit 20 includes an analysis module 21, a search module 22 and an extraction module 23;
the analysis module 21 is configured to analyze a background type of the target image and classify the target image according to the background type;
the searching module 22 is used for searching a background type corresponding to the task planning type;
and the extraction module 23 is used for extracting target images of the same type, namely those whose background-type characteristics correspond to the mission-planning type.
In the embodiment of the present invention, the scene perception control unit 20 performs scene perception automatically through an adaptive algorithm: it detects the background type of each image, classifies images by background type, determines the target images whose background-type characteristics correspond to the mission-planning type, and controls the loading of the algorithms corresponding to that background type.
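This control flow can be sketched as a simple dispatch from background type to a processing pipeline; the stage functions below are stubs and the pipeline contents are illustrative assumptions, not the patent's actual module inventory:

    import numpy as np

    # Stub stages standing in for the algorithm modules described elsewhere
    # in this document (denoising, thin cloud removal, stray-light
    # suppression, ...); each would transform the image in a real system.
    def denoise(img: np.ndarray) -> np.ndarray: return img
    def remove_thin_cloud(img: np.ndarray) -> np.ndarray: return img
    def suppress_stray_light(img: np.ndarray) -> np.ndarray: return img

    # One pipeline per background type, selected by the scene perception result.
    PIPELINES = {
        "ground": (remove_thin_cloud, denoise),
        "deep-space": (suppress_stray_light,),
        "adjacent": (denoise,),
    }

    def process(img: np.ndarray, background_type: str) -> np.ndarray:
        for stage in PIPELINES[background_type]:
            img = stage(img)
        return img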
Further, the algorithm unit 30 includes an image preprocessing module 31, a target detection and identification module 32 and a result downloading module 33;
the image preprocessing module 31 is configured to perform denoising, thin cloud removal, and image correction and enhancement on the target image to obtain an enhanced target image;
in an embodiment of the invention, images are taken aloft by an imaging device, which includes a detector load, and a large amount of interference noise is present during imaging and transmission. Noise generated by the imaging device is due to stripe noise, which is an inherent defect of the device, and the noise is generally removed by a frequency domain filtering method. Secondly, a large amount of random noise generated by transmission is generally removed by a spatial filtering method, such as a median filtering method. The partial images carry a thin cloud due to the influence of the weather. The thin cloud has the following characteristics: high brightness, low contrast and low frequency, so the main method of thin cloud removal is to reduce the brightness of the cloud layer and enhance the contrast. Thin cloud removal can be classified into two main categories: polynomial method in the spatial domain and homomorphic filtering in the frequency domain. And after denoising and thin cloud removing, correcting and enhancing the image. The image correction includes image geometry correction and radiation correction. The image enhancement mainly comprises histogram equalization, spatial filtering enhancement and frequency domain filtering enhancement, and the enhanced image is convenient for subsequent operation.
The space-based supercomputing platform makes it possible to load image processing algorithm modules for different task types according to the background types corresponding to the various task requirements, and its computing power and flexibility benefit the configuration and loading of the algorithm library. The image preprocessing module 31 matching the task type is loaded for each task type (the algorithms include image denoising, image enhancement and the like), and the target detection and identification module 32 corresponding to the background type of the target images is loaded, which guarantees the effectiveness of the space-based intelligent imaging method.
The target detection and identification module 32 is configured to perform target detection on the enhanced target image, determine target characteristic information, and track the target of interest in the enhanced target image based on that information to obtain a detection and identification result.
In the embodiment of the present invention, the target detection and identification module 32 determines the target feature information of the image to be detected, such as gray-scale, morphological, motion and spectral features, according to the scene perception result and the background type, and extracts the target of interest from the enhanced target image accordingly. The detection and identification process involves background suppression, image registration, image segmentation, target extraction, information fusion, target detection and target identification, and each step must be matched with an appropriate image processing algorithm module according to the scene and task. The three links of background suppression, image registration and image segmentation mainly adopt different algorithm modules for the ground, deep space and adjacent backgrounds according to the scene perception result; the four links of target extraction, information fusion, target detection and target identification mainly load different algorithm modules by background type for targeted processing.
The result downloading module 33 is configured to downlink the detection and identification result and the image of the target of interest.
In an embodiment of the present invention, referring to fig. 2, the image preprocessing module 31 may include: a denoising sub-module 311, a thin cloud removal sub-module 312 and a correction and enhancement sub-module 313;
the denoising sub-module 311 is configured to remove the stripe noise and random noise in the target image to obtain a denoised target image;
the thin cloud removal sub-module 312 is configured to enhance the contrast of the denoised target image, which yields a distorted target image;
and the correction and enhancement sub-module 313 is configured to restore the distorted target image to obtain the enhanced target image.
In the embodiment of the present invention, referring to fig. 3, the target detection and identification module 32 includes: a target extraction submodule 321, an information fusion submodule 322, a target detection submodule 323 and a target identification submodule 324;
the target extraction sub-module 321 is configured to perform target detection on the enhanced target image, determine target feature information, and extract an image of the target of interest from the enhanced target image according to the target feature information;
the information fusion sub-module 322 is configured to obtain two source sequence images and fuse them to obtain a fused sequence image;
the target detection sub-module 323 is configured to detect the target of interest in the fused sequence image;
and the target identification sub-module 324 is configured to identify and track the target of interest.
In the embodiment of the invention, taking space target detection and identification under multi-source payloads as an example: the target extraction sub-module 321 extracts the target from the image and computes its position, combines the gray-scale and radiation information of the target of interest with its texture, shape, velocity and other information, and passes the combined result to the information fusion sub-module 322, which processes it and outputs whether it is a target of interest. If it is a false (interfering) target, processing returns to the target detection sub-module 323 for further detection; if it is confirmed as a target of interest, the target identification sub-module 324 outputs the target's position center coordinates, an image of the nearby area, its velocity, gray level and other characteristics, and simultaneously outputs the target track and a tracking instruction to the control system to achieve closed-loop tracking.
After processing by the target detection and identification module 32, the system can output the center coordinates of the target point, images of the nearby area, the target velocity, gray level and other features, together with the target track. Only the target position information, the nearby-area image, the target motion velocity, gray scale, radiation characteristics and other detection and identification results are downlinked, so invalid image data are removed and the pressure on the satellite-to-ground data transmission link is relieved. Illustratively, for tasks requiring rapid response such as fire detection, the system can directly provide the fire location, warning level and other information, so that decisions can be made on the ground immediately from the satellite's output rather than waiting for the raw data to be transmitted to the ground and processed there, which significantly improves timeliness.
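As a sketch of how compact such a downlinked record can be relative to raw imagery (the record layout and field names below are assumptions, not a format defined in the patent):

    # A sketch of the kind of compact record the result downloading module
    # might downlink in place of raw frames; field names are illustrative.
    from dataclasses import dataclass
    import numpy as np

    @dataclass
    class DetectionResult:
        target_id: int
        center_xy: tuple     # target centre in image coordinates (x, y)
        velocity_xy: tuple   # apparent motion in pixels per frame
        mean_gray: float     # gray-level feature of the target
        chip: np.ndarray     # small image patch around the target

    # Downlinking a 64 x 64 chip plus a few scalars instead of a full
    # 2500 x 2500 frame cuts the per-target data volume by roughly three
    # orders of magnitude, which is the point of on-board screening.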
In the embodiment of the invention, single-frame airplane detection is taken as an example:
and detecting the single-frame airplane by adopting a method based on a saliency map and invariant moment. Firstly, preprocessing a remote sensing image to be recognized, including graying and denoising, then extracting a saliency map of an original image by adopting a saliency map algorithm, and positioning a saliency target to be used as a candidate target. And after the candidate target is determined, extracting the pseudo Zernike moment and the affine invariant moment of the candidate target, and then performing feature selection and feature fusion. And extracting the pseudo Zernike moment and the affine invariant moment of the sample image by using the same method, then completing feature selection and feature fusion, finally using the Euclidean distance as similarity measurement, and selecting the sample image with the maximum similarity as a discrimination standard of the candidate target. If the sample image belongs to the target image, marking the candidate target as an identification target; otherwise, the candidate target is discarded.
Further, the fusion of the two source sequence images by the information fusion sub-module 322 comprises: preprocessing of the sequence images, moving object detection, image multi-scale transformation, region-based image fusion and the corresponding multi-scale inverse transformation.
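A minimal sketch of the transform-fuse-invert portion of this chain (preprocessing and moving object detection omitted), using a Laplacian pyramid as the multi-scale transform and a per-coefficient magnitude rule as a simple stand-in for region-based fusion; the patent does not fix a particular transform, so these are assumptions:

    import cv2
    import numpy as np

    def fuse_pair(a: np.ndarray, b: np.ndarray, levels: int = 4) -> np.ndarray:
        """Fuse two co-registered, equal-size source images: pyramid
        decomposition, coefficient selection, inverse transform."""
        def lap_pyr(img):
            g = [img.astype(np.float32)]
            for _ in range(levels):
                g.append(cv2.pyrDown(g[-1]))
            return [g[i] - cv2.pyrUp(g[i + 1], dstsize=g[i].shape[::-1])
                    for i in range(levels)] + [g[levels]]
        pa, pb = lap_pyr(a), lap_pyr(b)
        fused = [np.where(np.abs(la) >= np.abs(lb), la, lb)  # keep stronger detail
                 for la, lb in zip(pa[:-1], pb[:-1])]
        fused.append(0.5 * (pa[-1] + pb[-1]))                # average the base level
        out = fused[-1]
        for lap in reversed(fused[:-1]):                     # inverse transform
            out = cv2.pyrUp(out, dstsize=lap.shape[::-1]) + lap
        return np.clip(out, 0, 255).astype(np.uint8)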
Further, the factor information includes: the current orbit, attitude, camera installation matrix, spatial position of the observed point, solar altitude angle, observation phase angle and task requirements.
Further, the camera configuration information includes: camera turn-on time, exposure time, gain and frame rate.
In the embodiment of the invention, according to the mission plan, the corresponding image processing algorithm modules are loaded for each task type, the corresponding targets of interest are extracted, target feature information is extracted and classified according to the target characteristics, and finally valid data are selected according to the detection and identification result for downlink. This avoids the bandwidth waste caused by transmitting large amounts of invalid data and meets the application requirements of rapid-response tasks.
The invention reduces on-board data storage pressure and satellite-to-ground link data transmission pressure and improves the timeliness with which satellite payloads execute tasks. By combining image information acquired by the satellite with satellite orbit and attitude information, it makes intelligent satellites possible and facilitates space surveillance and remote sensing observation, thereby providing strong support for national defense and homeland security.
The embodiment of the invention provides a space-based intelligent imaging system comprising an image acquisition unit, a scene perception control unit and an algorithm unit that are connected to one another. The image acquisition unit computes camera configuration information from multiple factors according to the mission plan and acquires multi-frame image data captured by the camera; the scene perception control unit determines the background type matching the mission-planning type and retrieves target images with that background type; and the algorithm unit preprocesses the target images and performs target detection and identification on them to obtain detection and identification results. By acquiring image information through the image acquisition unit and combining the acquired multi-frame image data, the embodiment improves the proportion of useful data in the downlinked detection and identification results, reduces on-board data storage pressure and satellite-to-ground link transmission pressure, improves the timeliness with which the satellite executes tasks, and makes rapid response in space-based observation possible.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the system and the method described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
The computer program product provided in the embodiment of the present invention includes a computer-readable storage medium storing a program code, where instructions included in the program code may be used to execute the method described in the foregoing method embodiment, and specific implementation may refer to the method embodiment, which is not described herein again.
In addition, in the description of the embodiments of the present invention, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "coupled" are to be construed broadly, e.g., as a fixed connection, a removable connection, or an integral connection; as a mechanical or an electrical connection; as a direct connection or an indirect connection through an intermediate medium; or as internal communication between two elements. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to the specific circumstances.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
In the description of the present invention, it should be noted that the terms "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc., indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings, and are only for convenience of description and simplicity of description, but do not indicate or imply that the device or element being referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus, should not be construed as limiting the present invention. Furthermore, the terms "first," "second," and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
Finally, it should be noted that: the above-mentioned embodiments are only specific embodiments of the present invention, which are used for illustrating the technical solutions of the present invention and not for limiting the same, and the protection scope of the present invention is not limited thereto, although the present invention is described in detail with reference to the foregoing embodiments, those skilled in the art should understand that: any person skilled in the art can modify or easily conceive the technical solutions described in the foregoing embodiments or equivalent substitutes for some technical features within the technical scope of the present disclosure; such modifications, changes or substitutions do not depart from the spirit and scope of the embodiments of the present invention, and they should be construed as being included therein. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (6)

1. A space-based intelligent imaging system, characterized by comprising an image acquisition unit, a scene perception control unit and an algorithm unit that are connected to one another;
the image acquisition unit is used for calculating the information of various factors according to the task plan to obtain the configuration information of the camera and acquiring multi-frame image data acquired by the camera; the factor information includes: the current track, attitude, camera installation matrix, spatial position of the observed point, solar altitude angle, observation phase angle and task requirements; the camera configuration information includes: the startup time, exposure time, gain and frame frequency of the camera;
the scene perception control unit is used for determining a background type matched with a task planning type and searching a target image with the background type; the task planning types correspond to the background types one by one; the background types comprise a ground background, a deep space background and an adjacent background;
the algorithm unit is used for preprocessing the target image, and performing target detection and identification on the preprocessed target image to obtain a detection and identification result;
the scene perception control unit comprises: the device comprises an analysis module, a search module and an extraction module;
the analysis module is used for analyzing the background type of the target image and classifying the target image according to the background type;
the searching module is used for searching a background type corresponding to the task planning type;
the extraction module is used for extracting the same type of target images with the background type characteristics corresponding to the mission planning type.
2. The space-based smart imaging system of claim 1, wherein the image acquisition unit comprises a calculation module, a configuration module, a judgment module and a receiving module;
the computing module is used for computing the information of various factors according to the task planning to obtain the configuration information of the camera;
the configuration module is used for configuring the camera according to the camera configuration information;
the judging module is used for judging whether the camera configuration is successful or not;
the receiving module is used for acquiring multi-frame image data acquired by the camera according to a preset time sequence under the condition that the camera is successfully configured.
3. The space-based smart imaging system of claim 1, wherein the algorithm unit comprises an image preprocessing module, a target detection and identification module and a result downloading module;
the image preprocessing module is used for denoising, thin cloud removal, and image correction and enhancement of the target image to obtain an enhanced target image;
the target detection and identification module is used for carrying out target detection on the enhanced target image, determining target characteristic information, and tracking a target of interest in the enhanced target image based on the target characteristic information to obtain a detection and identification result;
and the result downloading module is used for downlinking the detection and identification result and the image of the target of interest.
4. The space-based smart imaging system of claim 3, wherein the image preprocessing module comprises a denoising sub-module, a thin cloud removal sub-module and a correction and enhancement sub-module;
the denoising sub-module is used for removing the stripe noise and random noise in the target image to obtain a denoised target image;
the thin cloud removal sub-module is used for enhancing the contrast of the denoised target image, which yields a distorted target image;
and the correction and enhancement sub-module is used for restoring the distorted target image to obtain the enhanced target image.
5. The space-based intelligent imaging system of claim 3, wherein the target detection and identification module comprises a target extraction submodule, an information fusion submodule, a target detection submodule and a target identification submodule;
the target extraction submodule is used for carrying out target detection on the enhanced target image, determining target characteristic information, and extracting an image of the target of interest from the enhanced target image according to the target characteristic information;
the information fusion submodule is used for acquiring two source sequence images and fusing them to obtain a fused sequence image;
the target detection submodule is used for detecting the target of interest in the fused sequence image;
and the target identification submodule is used for identifying and tracking the target of interest.
6. The space-based smart imaging system of claim 5, wherein the fusion of the two source sequence images by the information fusion sub-module comprises: preprocessing of the sequence images, moving object detection, image multi-scale transformation, region-based image fusion and the corresponding multi-scale inverse transformation.
CN201910566120.0A 2018-12-29 2019-06-26 Space-based intelligent imaging system Active CN110207671B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2018116542200 2018-12-29
CN201811654220 2018-12-29

Publications (2)

Publication Number Publication Date
CN110207671A (en) 2019-09-06
CN110207671B (en) 2021-08-24

Family

ID=67794964

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910566120.0A Active CN110207671B (en) 2018-12-29 2019-06-26 Space-based intelligent imaging system

Country Status (1)

Country Link
CN (1) CN110207671B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110519485B (en) * 2019-09-09 2021-08-31 Oppo广东移动通信有限公司 Image processing method, image processing device, storage medium and electronic equipment
CN110913226B (en) * 2019-09-25 2022-01-04 西安空间无线电技术研究所 Image data processing system and method based on cloud detection
CN110806198A (en) * 2019-10-25 2020-02-18 北京前沿探索深空科技有限公司 Target positioning method and device based on remote sensing image, controller and medium
CN110991313B (en) * 2019-11-28 2022-02-15 华中科技大学 Moving small target detection method and system based on background classification
CN111931833B (en) * 2020-07-30 2022-08-12 上海卫星工程研究所 Multi-source data driving-based space-based multi-dimensional information fusion method and system
CN112616027A (en) * 2020-12-11 2021-04-06 中国科学院软件研究所 Automatic planning imaging method and device
CN113478485A (en) * 2021-07-06 2021-10-08 上海商汤智能科技有限公司 Robot, control method and device thereof, electronic device and storage medium


Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2001076120A2 (en) * 2000-04-04 2001-10-11 Stick Networks, Inc. Personal communication device for scheduling presentation of digital content
AUPQ974100A0 (en) * 2000-08-28 2000-09-21 Burns, Alan Robert Real or near real time earth imaging system
US20050249281A1 (en) * 2004-05-05 2005-11-10 Hui Cheng Multi-description coding for video delivery over networks
EP2089677B1 (en) * 2006-12-06 2016-06-08 Honeywell International Inc. Methods, apparatus and systems for enhanced synthetic vision and multi-sensor data fusion to improve operational capabilities of unmanned aerial vehicles
US10290203B2 (en) * 2008-09-15 2019-05-14 Lasso Technologies, LLC Interface for communicating sensor data to a satellite network
CN102034103B (en) * 2010-12-03 2012-11-21 中国科学院软件研究所 Lineament extraction method of remote sensing image
CN108564587A (en) * 2018-03-07 2018-09-21 浙江大学 A kind of a wide range of remote sensing image semantic segmentation method based on full convolutional neural networks

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106134476B (en) * 2008-11-24 2012-09-05 西安电子科技大学 Spaceborne real-time parallel data treatment system
CN102904834A (en) * 2012-09-10 2013-01-30 中国航天科技集团公司第五研究院第五一三研究所 Satellite-bone data processing system based on advanced orbiting system (AOS)
CN103455708A (en) * 2013-07-24 2013-12-18 安徽省电力科学研究院 Power transmission line disaster monitoring and risk assessment platform based on satellite and weather information
CN103761524A (en) * 2014-01-17 2014-04-30 电子科技大学 Image-based linear target recognition and extraction method
US10139279B2 (en) * 2015-05-12 2018-11-27 BioSensing Systems, LLC Apparatuses and methods for bio-sensing using unmanned aerial vehicles
CN106441237A (en) * 2015-08-10 2017-02-22 北京空间飞行器总体设计部 In-orbit autonomous adjusting method of optical remote sensing satellite camera imaging parameter
CN106851084A (en) * 2016-11-21 2017-06-13 北京空间机电研究所 Noted on real-time processing algorithm on a kind of remote sensing camera star and update platform

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Space-Based Information Port and Its Multi-Source Information Fusion Applications; Li Bin; Journal of China Academy of Electronics and Information Technology; 2017-06-20; Vol. 12, No. 3; pp. 254-255, Figs. 1-4 *

Also Published As

Publication number Publication date
CN110207671A (en) 2019-09-06

Similar Documents

Publication Publication Date Title
CN110207671B (en) Space-based intelligent imaging system
CN110988912B (en) Road target and distance detection method, system and device for automatic driving vehicle
Bryson et al. Airborne vision‐based mapping and classification of large farmland environments
US7630797B2 (en) Accuracy enhancing system for geospatial collection value of an image sensor aboard an airborne platform and associated methods
US7603208B2 (en) Geospatial image change detecting system with environmental enhancement and associated methods
KR102200299B1 (en) A system implementing management solution of road facility based on 3D-VR multi-sensor system and a method thereof
US20070162195A1 (en) Environmental condition detecting system using geospatial images and associated methods
CN110770791A (en) Image boundary acquisition method and device based on point cloud map and aircraft
CN112801158A (en) Deep learning small target detection method and device based on cascade fusion and attention mechanism
KR101941878B1 (en) System for unmanned aircraft image auto geometric correction
CN112308913B (en) Vehicle positioning method and device based on vision and vehicle-mounted terminal
Torres et al. Combined weightless neural network FPGA architecture for deforestation surveillance and visual navigation of UAVs
CN113359782A (en) Unmanned aerial vehicle autonomous addressing landing method integrating LIDAR point cloud and image data
CN114973028B (en) Aerial video image real-time change detection method and system
CN110287939B (en) Space-based intelligent image processing method
US11651288B2 (en) Learning data generation apparatus, change area detection method and computer program
CN116310889A (en) Unmanned aerial vehicle environment perception data processing method, control terminal and storage medium
Yin et al. A self-supervised learning method for shadow detection in remote sensing imagery
Li et al. Algorithm for automatic image dodging of unmanned aerial vehicle images using two-dimensional radiometric spatial attributes
Differt Holistic methods for visual navigation of mobile robots in outdoor environments
Majidi et al. Aerial tracking of elongated objects in rural environments
Hung et al. Vision-based shadow-aided tree crown detection and classification algorithm using imagery from an unmanned airborne vehicle
Dimmeler et al. Combined airborne sensors in urban environment
Patel et al. Road Network Extraction Methods from Remote Sensing Images: A Review Paper.
Dhall et al. Ortho Image Mosaicing and Object Identification of UAV Data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant