CN114757978B - Remote sensing satellite multi-camera multi-load image pairing method - Google Patents
- Publication number
- CN114757978B CN114757978B CN202210541580.XA CN202210541580A CN114757978B CN 114757978 B CN114757978 B CN 114757978B CN 202210541580 A CN202210541580 A CN 202210541580A CN 114757978 B CN114757978 B CN 114757978B
- Authority
- CN
- China
- Prior art keywords
- load
- scene
- aligned
- line number
- loads
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10032—Satellite or aerial image; Remote sensing
Landscapes
- Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Studio Devices (AREA)
Abstract
The invention provides a remote sensing satellite multi-camera multi-load image pairing method. One load is selected from the plurality of loads as a reference load and divided into scenes; the other loads to be aligned on the same camera as the reference load are scene-matched using imaging line-time information, while the loads to be aligned on different cameras are scene-matched by longitude and latitude, and the alignment accuracy is verified through the overlap degree of the imaging geographic ranges of the paired scenes. The invention realizes image alignment among the multiple cameras and multiple loads of the satellite, solves the image misalignment problem in multi-load pairing, and provides multi-camera multi-load alignment data for users.
Description
Technical Field
The invention relates to the technical field of remote sensing satellites, in particular to a multi-camera multi-load image pairing method for a remote sensing satellite.
Background
Current remote sensing satellites generally have a multi-camera multi-load structure. For example, the GaoFen-7 (high-resolution No. 7) satellite performs dual-linear-array imaging with a forward-view camera and a backward-view camera, where the backward-view camera carries a backward-view panchromatic load and a backward-view multispectral load. Remote sensing satellite multi-camera multi-load image pairing serves two purposes: first, images of different resolutions from the same camera are paired so that image fusion can produce products with higher information content, such as the fusion of a high-resolution panchromatic image with a low-resolution multispectral image; second, because the cameras image at different angles, stereo pairs can be extracted from the different viewing angles to provide stereo mapping products.
Current remote sensing satellite multi-camera multi-load pairing mainly aligns images directly by a fixed line offset. This method is simple and efficient, but because the mounting relation between the camera loads may change during satellite flight, relying on a fixed line offset often yields unsatisfactory alignment, and better paired images cannot be obtained.
Therefore, how to provide an automatic, high-precision pairing method that delivers high-quality paired image products and improves pairing accuracy is a technical problem to be solved at present.
Disclosure of Invention
The invention provides a remote sensing satellite multi-camera multi-load image pairing method, which remedies the defect in the prior art that better paired images cannot be obtained, and solves the problem of image misalignment during multi-load image pairing.
The invention provides a remote sensing satellite multi-camera multi-load image pairing method, which comprises the following steps:
acquiring original code stream data, and performing format analysis on the original code stream data to obtain strip data of each load of the original code stream data, corresponding attitude and orbit auxiliary data and a line time file;
selecting a load from each load as a reference load for aligning the rest loads, determining the head and tail line number of each scene of the reference load, and performing scene division on the reference load to obtain a scene division product of the reference load;
acquiring the line time and the longitude and latitude of a load to be aligned of a camera, and carrying out scene matching on the load to be aligned based on the line time and the longitude and latitude to obtain a scene matching product of the load to be aligned;
and calculating the four-corner longitude and latitude of the matched pair scene of the reference load scene product and the load scene product to be aligned based on the established strict imaging model, calculating the overlapping degree of the imaging area based on the four-corner longitude and latitude, and verifying the alignment precision.
According to the remote sensing satellite multi-camera multi-load image pairing method provided by the invention, a load is selected from each load to serve as a reference load for aligning the rest loads, the head and tail line numbers of each scene of the reference load are determined, and the reference load is divided into scenes to obtain a reference load scene division product, and the method comprises the following steps:
selecting a reference load from the strip data of each load, and acquiring imaging line information and scene height of each scene of the reference load;
directly determining a first scene starting line number of the reference load based on the imaging line information, and obtaining a first scene ending line number based on the scene height;
determining the head line number of the other scenes according to the overlapping degree requirement between the scenes and the end line number of the previous scene, and obtaining the end line number according to the scene height;
and carrying out scene division based on the first-scene start and end line numbers and the start and end line numbers of the remaining scenes to obtain a reference load scene division product.
According to the remote sensing satellite multi-camera multi-load image pairing method provided by the invention, performing pairing and scene division on the load to be aligned based on the line time and the longitude and latitude to obtain a scene division product of the load to be aligned comprises the following steps:
carrying out scene matching on different loads to be aligned of the same camera based on the line time to obtain a first load to be aligned scene-dividing product;
and matching the scenes of different loads to be aligned of different cameras based on the longitude and latitude to obtain a second load scene division product to be aligned.
According to the remote sensing satellite multi-camera multi-load image matching method provided by the invention, the scene matching is carried out on different loads to be aligned of the same camera based on the line time to obtain a first load to be aligned scene division product, and the method comprises the following steps:
determining the head and tail line numbers of the load to be aligned corresponding to the imaging time based on the imaging time of the head and tail lines of each scene of the reference load;
adding fixed imaging row offset caused by load installation difference on the basis of the row number of the load to be aligned to obtain the head and tail row number of each scene of the load to be aligned;
and allocating scenes for different loads to be aligned of the same camera based on the head and tail line numbers of each scene of the loads to be aligned to obtain a first load to be aligned scene division product.
According to the remote sensing satellite multi-camera multi-load image pairing method provided by the invention, performing pairing and scene division on different loads to be aligned of different cameras based on the longitude and latitude to obtain a second load-to-be-aligned scene division product comprises the following steps:
calculating the longitude and latitude of the central point of the head and tail lines of each scene of the reference load based on the strict imaging model;
determining the head and tail line number of each scene of the load to be aligned based on the longitude and latitude inverse calculation;
and allocating the scenes of different loads to be aligned of different cameras based on the head and tail line numbers of each scene of the loads to be aligned to obtain a second load to be aligned scene division product.
The remote sensing satellite multi-camera multi-load image pairing method provided by the invention selects one load from the plurality of loads as the reference load and divides it into scenes, pairs and divides the loads to be aligned into scenes based on line time and longitude and latitude to obtain the scene division products of the loads to be aligned, and verifies the alignment accuracy through the overlap degree of the imaging geographic ranges of the paired scenes. It thereby realizes image alignment among the satellite's multiple cameras and loads, solves the image misalignment problem in multi-load image pairing, and provides multi-camera multi-load alignment data for users.
Drawings
In order to more clearly illustrate the technical solutions of the present invention or the prior art, the drawings needed for the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and those skilled in the art can also obtain other drawings according to the drawings without creative efforts.
FIG. 1 is a schematic flow chart of a remote sensing satellite multi-camera multi-load image pairing method provided by the invention;
FIG. 2 is a schematic view of the boresight pointing of the GaoFen-7 dual-linear-array cameras according to the present invention;
fig. 3 is a second schematic flow chart of the remote sensing satellite multi-camera multi-load image pairing method provided by the invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is obvious that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The remote sensing satellite multi-camera multi-load image pairing method of the invention is described below with reference to fig. 1-3.
Referring to fig. 1, the method for pairing the multiple camera and multiple load images of the remote sensing satellite provided by the invention comprises the following steps:
and 110, acquiring original code stream data, and performing format analysis on the original code stream data to obtain long strip data of each load of the original code stream data, corresponding attitude and orbit auxiliary data and a line time file.
Specifically, the present embodiment takes one orbit of GaoFen-7 satellite data as an example to detail the multi-camera multi-load image pairing method.
The dual-linear-array cameras in the present embodiment comprise a forward-view camera and a backward-view camera, where the forward-view camera carries a forward-view panchromatic load FPA (Forward PAnchromatic), and the backward-view camera carries a backward-view panchromatic load BPA (Backward PAnchromatic) and a backward-view multispectral load BMS (Backward MultiSpectral).
In this embodiment, the original code stream data of one orbit of the GaoFen-7 satellite is unpacked and parsed per load to obtain strip data of the three loads BMS, BPA, and FPA, together with the corresponding attitude and orbit auxiliary data and line time files.
It should be noted that the terms used in the present embodiment are explained as follows:
Strip data: multiple rows (a strip) of image data accumulated by continuous-time imaging of a linear-array push-broom imaging scanner.
Line time: the scanning time of each imaging line, as recorded by the satellite's time system.
Attitude and orbit auxiliary data: satellite attitude, position, and velocity information measured and recorded by the satellite's attitude and orbit determination system during flight.
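The format-parsing step yields, per load, a strip image plus these auxiliary records. A minimal sketch of the resulting data layout (the class and field names are illustrative assumptions, not from the patent):

```python
from dataclasses import dataclass

@dataclass
class LoadStrip:
    """Strip data for one load, as produced by format parsing."""
    name: str         # load identifier, e.g. "BMS", "BPA", "FPA"
    n_lines: int      # total imaging lines in the strip
    line_times: list  # line_times[i] = scan time of line i + 1 (line-time file)

@dataclass
class AttOrbRecord:
    """One attitude-and-orbit auxiliary data sample."""
    t: float          # sample time
    position: tuple   # satellite position (x, y, z)
    velocity: tuple   # satellite velocity (vx, vy, vz)
    attitude: tuple   # attitude quaternion (q0, q1, q2, q3)
```

Each scene-division and pairing step below consumes only these parsed structures, never the raw code stream.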
And 120, selecting a load from each load as a reference load for aligning the rest loads, determining the head and tail line numbers of each scene of the reference load, and performing scene segmentation on the reference load to obtain a reference load scene segmentation product.
It should be added that scene division in this embodiment means: cutting the strip data, which has a large data volume and a long coverage range and cannot be used or processed directly, into several independent standard scene products with a certain degree of overlap, for convenient subsequent use.
Specifically, in the present embodiment, the backward-view multispectral load BMS is first taken as the reference load and divided into scenes. The steps are as follows:
The BMS strip is 8967 columns wide, so the scene height is set to 8980 lines to keep each scene image approximately square. The first imaging line of the orbit is line 237, i.e., the first scene starts at line 237 and ends at line 237 + 8980 - 1 = 9216.
The overlap degree between scenes is set to 15%. After the first scene is determined, the start and end line numbers of each subsequent scene are:

s_i = e_{i-1} - n_overlap + 1,  e_i = s_i + H - 1

where s_i is the start line number of the i-th scene, e_i is its end line number, H = 8980 is the scene height, and n_overlap = 0.15 × 8980 = 1347 is the number of overlapping lines. Once the start line number of a scene is determined, its end line number follows from the scene height. In this way, the head and tail line numbers of each scene of the orbit's BMS load are determined in turn, and the BMS strip is divided into scenes.
And 130, acquiring the line time and the longitude and latitude of the load to be aligned of each camera, and performing pairing and scene division on the load to be aligned based on the line time and the longitude and latitude to obtain a scene division product of the load to be aligned.

Specifically, in the embodiment, by obtaining the line time and longitude and latitude of the loads to be aligned, pairing and scene division are performed for the different loads to be aligned of the same camera and of different cameras, to obtain the respective corresponding scene division products of the loads to be aligned.
And 140, calculating the longitude and latitude of four corners of a paired scene of the reference load scene division product and the load scene division product to be aligned based on the established strict imaging model, calculating the overlapping degree of respective imaging areas based on the longitude and latitude of the four corners, and verifying the alignment accuracy.
Specifically, after completing the reference load scenery separation and the load scenery separation to be aligned, the embodiment calculates the longitude and latitude of four corners of the reference load scenery separation product and the load scenery separation product to be aligned based on the strict imaging model, thereby determining the imaging areas, calculating the overlapping degree between the imaging areas, and verifying the alignment accuracy. The longitude and latitude of the four corners refer to longitude and latitude coordinates of four corner vertexes of the imaging area.
The remote sensing satellite multi-camera multi-load image pairing method provided by the invention selects one load from the plurality of loads as the reference load and divides it into scenes, pairs and divides the loads to be aligned into scenes based on line time and longitude and latitude to obtain the scene division products of the loads to be aligned, and verifies the alignment accuracy through the overlap degree of the imaging geographic ranges of the paired scenes. The method thereby realizes image alignment among the satellite's multiple cameras and loads, solves the image misalignment problem in multi-load image pairing, and provides multi-camera multi-load alignment data for users.
Based on the above embodiments, the selecting a load from each load as a reference load for aligning the other loads, determining the head and tail line numbers of each scene of the reference load, and performing scene segmentation on the reference load to obtain a reference load scene segmentation product includes:
selecting a reference load from the strip data of each load, and acquiring imaging line information and scene height of each scene of the reference load;
directly determining a first scene starting line number of the reference load based on the imaging line information, and obtaining a first scene ending line number based on the scene height;
determining the head line number of the other scenes according to the overlapping degree requirement between the scenes and the end line number of the previous scene, and obtaining the end line number according to the scene height;
and carrying out scene division based on the first-scene start and end line numbers and the start and end line numbers of the remaining scenes to obtain a reference load scene division product.
Specifically, the embodiment provides the steps of determining the reference load and dividing it into scenes to obtain the reference load scene division product. First, a reference load is selected from the strip data of the loads, and the imaging line information and scene height of each scene of the reference load are acquired. The start line number of the first scene of the reference load is then determined directly from the imaging line information, and its end line number is calculated from the scene height. Finally, the start line number of each remaining scene is determined from the inter-scene overlap requirement and the end line number of the previous scene, its end line number is obtained from the scene height, and the scenes are divided.
Based on the above embodiment, the pairing and scene division of the load to be aligned based on the line time and the longitude and latitude to obtain the scene division product of the load to be aligned includes:
carrying out scene matching on different loads to be aligned of the same camera based on the line time to obtain a first load to be aligned scene-dividing product;
and matching the scenes of different loads to be aligned of different cameras based on the longitude and latitude to obtain a second load scene division product to be aligned.
In particular, the present embodiment provides the method for pairing scenes for the different loads to be aligned of the same camera and of different cameras. For different loads to be aligned of the same camera, pairing and scene division are performed according to line time, i.e., the imaging time of the head and tail lines of each scene, to obtain the first load-to-be-aligned scene division product. For loads to be aligned of different cameras, pairing and scene division are performed according to longitude and latitude, i.e., the imaging geographic position, to obtain the second load-to-be-aligned scene division product.
Based on the above embodiment, the pairing and scene division of different loads to be aligned of the same camera based on the line time to obtain the first load-to-be-aligned scene division product includes:
determining the head and tail line number of the load to be aligned corresponding to the imaging time based on the imaging time of the head and tail line of each scene of the reference load;
adding fixed imaging row offset caused by load installation difference on the basis of the row number of the load to be aligned to obtain the head and tail row number of each scene of the load to be aligned;
and allocating scenes for different loads to be aligned of the same camera based on the head and tail line numbers of each scene of the loads to be aligned to obtain a first load to be aligned scene division product.
Specifically, in this embodiment, the ground objects imaged by different loads to be aligned of the same camera at the same moment are essentially the same, with only a slight deviation caused by the load mounting relation. After the reference load BMS is divided into scenes, the BPA load of the same backward-view camera is divided into scenes using line time, as follows:
A. The start and end line numbers of the load to be aligned are determined from the scene start and end times corresponding to the head and tail line numbers of each scene of the camera's reference load. For example, for scene 1 of the BMS, the start line number 237 corresponds to start time t_s and the end line number 9216 corresponds to end time t_e. In the load to be aligned, BPA, time t_s corresponds to line number 899 and time t_e corresponds to line number 36818.
B. The fixed row offset between the BPA and BMS loads, caused by the mounting difference, is observed to be 486 lines, so the first scene of the BPA load is finally determined to start at line 899 + 486 = 1385 and end at line 36818 + 486 = 37304.
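Steps A and B reduce to a time lookup in the target load's line-time file followed by the fixed mounting offset. A minimal sketch, assuming monotonically increasing line times and a nearest-following-line lookup (the function name and lookup strategy are assumptions; in the embodiment, BMS scene 1 maps to BPA lines 899 and 36818, and the 486-line offset gives 1385 and 37304):

```python
import bisect

def match_by_line_time(ref_times, ref_scene, tgt_times, fixed_offset):
    """Map a reference-load scene to target-load line numbers via line time.

    ref_times / tgt_times: line_times[i] = scan time of line number i + 1
    ref_scene: (start_line, end_line) of the reference scene (1-based)
    fixed_offset: imaging-row offset caused by the load mounting difference
    """
    t_start = ref_times[ref_scene[0] - 1]
    t_end = ref_times[ref_scene[1] - 1]
    # Find the first target line whose scan time is >= the reference time
    raw_start = bisect.bisect_left(tgt_times, t_start) + 1
    raw_end = bisect.bisect_left(tgt_times, t_end) + 1
    return raw_start + fixed_offset, raw_end + fixed_offset
```

The panchromatic load has a higher line rate than the multispectral reference, so one reference scene maps to roughly four times as many target lines, which the time lookup handles automatically.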
Based on the above embodiment, the pairing and scene division of loads to be aligned of different cameras based on the longitude and latitude to obtain the second load-to-be-aligned scene division product includes:
calculating the longitude and latitude of the central point of the head and tail lines of each scene of the reference load based on the strict imaging model;
determining the head and tail line number of each scene of the load to be aligned based on the longitude and latitude inverse calculation;
and allocating the scenes of different loads to be aligned of different cameras based on the head and tail line numbers of each scene of the loads to be aligned to obtain a second load to be aligned scene division product.
Specifically, in this embodiment, the different cameras have different viewing angles and therefore image different positions at the same moment, so the paired scenes need to be allocated according to imaging geographic position information, as follows:
A. Referring to fig. 2, the forward-view and backward-view cameras of the GaoFen-7 satellite start imaging simultaneously. Since a time interval exists between the two cameras passing over the same sub-satellite point, a forward-view scene corresponds to some scene in the middle of the backward-view strip. The longitude and latitude of each scene of the BMS are calculated through the rigorous imaging model.
B. Back calculation is performed with the FPA rigorous imaging model. The FPA line numbers back-calculated from the longitude and latitude of BMS scene 1 fall outside the FPA imaging range, i.e., BMS scene 1 has no corresponding FPA product. Traversing to BMS scene 3, the center-point longitudes and latitudes of its start and end lines back-calculate to an FPA start line of 31 and an end line of 23893; this is the first FPA scene, and it is numbered 3 to match its paired BMS scene. The longitude and latitude of each BMS scene are traversed in turn and back-calculated to FPA line numbers to complete the pairing and scene division of the FPA.
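Step B can be sketched as follows. The rigorous imaging model and its inverse are passed in as stand-in callables (`ref_latlon_of_line` and `tgt_line_of_latlon` are illustrative names, not from the patent); scenes whose back-calculated lines fall outside the target imaging range are skipped, as happens for BMS scene 1:

```python
def match_by_geolocation(ref_scenes, ref_latlon_of_line, tgt_line_of_latlon, tgt_n_lines):
    """Pair reference scenes to target-load line numbers via geolocation.

    ref_latlon_of_line(line) -> (lon, lat) of that line's center point,
    from the reference load's rigorous imaging model (stand-in here).
    tgt_line_of_latlon(lon, lat) -> target line number from the target
    load's inverse rigorous model, or None if outside its imaging range.
    """
    pairs = {}
    for idx, (start, end) in enumerate(ref_scenes, start=1):
        s = tgt_line_of_latlon(*ref_latlon_of_line(start))
        e = tgt_line_of_latlon(*ref_latlon_of_line(end))
        if s is None or e is None or e > tgt_n_lines:
            continue  # no counterpart in the target strip (e.g. BMS scene 1)
        pairs[idx] = (s, e)  # target scene keeps the reference scene number
    return pairs
```

Keying the result by the reference scene index mirrors the embodiment, where the first FPA scene is numbered 3 to match its paired BMS scene.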
After the three loads BMS, BPA, and FPA have been divided into scenes to obtain the corresponding image products, the paired BMS, BPA, and FPA scenes are taken and their four-corner longitudes and latitudes are calculated with the rigorous imaging model to determine each scene's imaging area. The calculated overlap degree is then 91.89% between BMS and BPA and 91.86% between BMS and FPA. The slight difference arises because the paired scenes are aligned only in the along-track direction and some misalignment may remain left and right. Both paired scenes exceed the 90% overlap requirement, i.e., the pairing meets the standard.
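The overlap verification can be sketched with a Sutherland-Hodgman polygon clip plus the shoelace area formula. This treats the four-corner footprints as convex planar quadrilaterals in longitude-latitude space, an approximation that ignores earth curvature; the corner order is assumed counter-clockwise:

```python
def shoelace_area(poly):
    """Area magnitude of a polygon given as [(x, y), ...]."""
    a = 0.0
    for i in range(len(poly)):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % len(poly)]
        a += x1 * y2 - x2 * y1
    return abs(a) / 2.0

def clip_polygon(subject, clip):
    """Sutherland-Hodgman: clip `subject` by convex CCW polygon `clip`."""
    def inside(p, a, b):  # p is on or left of directed edge a->b
        return (b[0]-a[0])*(p[1]-a[1]) - (b[1]-a[1])*(p[0]-a[0]) >= 0
    def intersect(p1, p2, a, b):  # segment p1-p2 with edge line a-b
        dx1, dy1 = p2[0]-p1[0], p2[1]-p1[1]
        dx2, dy2 = b[0]-a[0], b[1]-a[1]
        t = ((a[0]-p1[0])*dy2 - (a[1]-p1[1])*dx2) / (dx1*dy2 - dy1*dx2)
        return (p1[0] + t*dx1, p1[1] + t*dy1)
    out = subject
    for i in range(len(clip)):
        a, b = clip[i], clip[(i + 1) % len(clip)]
        inp, out = out, []
        if not inp:
            break
        prev = inp[-1]
        for cur in inp:
            if inside(cur, a, b):
                if not inside(prev, a, b):
                    out.append(intersect(prev, cur, a, b))
                out.append(cur)
            elif inside(prev, a, b):
                out.append(intersect(prev, cur, a, b))
            prev = cur
    return out

def overlap_degree(corners_ref, corners_tgt):
    """Intersection area over reference-scene area, both as 4-corner lon/lat."""
    inter = clip_polygon(corners_ref, corners_tgt)
    return shoelace_area(inter) / shoelace_area(corners_ref)
```

A paired scene would pass the embodiment's check when `overlap_degree(...) >= 0.90`.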
Referring to fig. 3, fig. 3 is a schematic diagram of a complete method of the remote sensing satellite multi-camera multi-load image pairing method provided by the invention.
Firstly, format analysis is carried out on the original data 310 to obtain each load strip data 320, then a reference load 330 is obtained from each load strip data 320, and scene division is carried out to obtain a reference load scene division product 340.
Secondly, the loads to be aligned 350 of the same camera and the loads to be aligned 360 of different cameras are paired and divided into scenes based on line time and longitude and latitude respectively, to obtain a first load-to-be-aligned scene division product 370 and a second load-to-be-aligned scene division product 380.
Finally, the geographic range overlap degree 390 is obtained from the reference load scene division product 340, the first load-to-be-aligned scene division product 370, and the second load-to-be-aligned scene division product 380.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.
Claims (2)
1. A remote sensing satellite multi-camera multi-load image pairing method is characterized by comprising the following steps:
acquiring original code stream data, and performing format analysis on the original code stream data to obtain strip data of each load of the original code stream data, corresponding attitude and orbit auxiliary data and a line time file;
selecting a load from each load as a reference load for aligning the rest loads, determining the head and tail line number of each scene of the reference load, and performing scene division on the reference load to obtain a scene division product of the reference load;
acquiring the line time and the longitude and latitude of a load to be aligned of a camera, and performing pairing and scene division on the load to be aligned based on the line time and the longitude and latitude to obtain a scene division product of the load to be aligned;
calculating four-corner longitude and latitude of a matched pair scene of the reference load scene division product and the load scene division product to be aligned based on the established strict imaging model, calculating the overlapping degree of an imaging area based on the four-corner longitude and latitude, and verifying the alignment precision;
the step of pairing and dividing the load to be aligned into scenes based on the line time and the longitude and latitude to obtain a scene division product of the load to be aligned comprises the following steps:
carrying out scene matching on different loads to be aligned of the same camera based on the line time to obtain a first load to be aligned scene-dividing product;
performing pairing and scene division on different loads to be aligned of different cameras based on the longitude and latitude to obtain a second load-to-be-aligned scene division product;
the obtaining of a first load-to-be-aligned scene division product by pairing and dividing different loads to be aligned of the same camera into scenes based on the line time comprises:
determining the head and tail line number of the load to be aligned corresponding to the imaging time based on the imaging time of the head and tail line of each scene of the reference load;
adding imaging line fixed offset caused by load installation difference on the basis of the line number of the load to be aligned to obtain the head and tail line number of each scene of the load to be aligned;
allocating scenes of different loads to be aligned of the same camera based on the head and tail line numbers of each scene of the loads to be aligned to obtain a first load to be aligned scene division product;
the obtaining of a second load-to-be-aligned scene division product by pairing and dividing loads to be aligned of different cameras into scenes based on the longitude and latitude comprises the following steps:
calculating the longitude and latitude of the central point of the head and tail lines of each scene of the reference load based on the strict imaging model;
determining the head and tail line number of each scene of the load to be aligned based on the longitude and latitude inverse calculation;
and allocating the scenes of different loads to be aligned of different cameras based on the head and tail line numbers of each scene of the loads to be aligned to obtain a second load to be aligned scene division product.
2. The remote sensing satellite multi-camera multi-load image pairing method as claimed in claim 1, wherein the selecting a load from each load as a reference load for aligning the rest loads, determining the head and tail line numbers of each scene of the reference load, and performing scene segmentation on the reference load to obtain a reference load scene segmentation product comprises:
selecting a reference load from the strip data of each load, and acquiring imaging line information and scene height of each scene of the reference load;
directly determining a first scene starting line number of the reference load based on the imaging line information, and obtaining a first scene ending line number based on the scene height;
determining the first line number of the other scenes according to the overlapping degree requirement between the scenes and the end line number of the previous scene, and obtaining the end line number according to the scene height;
and carrying out scene division based on the first-scene start and end line numbers and the start and end line numbers of the remaining scenes to obtain a reference load scene division product.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210541580.XA CN114757978B (en) | 2022-05-19 | 2022-05-19 | Remote sensing satellite multi-camera multi-load image pairing method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114757978A (en) | 2022-07-15 |
CN114757978B (en) | 2022-08-30 |
Family
ID=82335891
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210541580.XA Active CN114757978B (en) | 2022-05-19 | 2022-05-19 | Remote sensing satellite multi-camera multi-load image pairing method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114757978B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116761084B (en) * | 2023-08-22 | 2023-11-10 | 中国科学院空天信息创新研究院 | Full-load view dividing method for remote sensing satellite double-linear array and three-linear array cameras |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111693025A (en) * | 2020-06-12 | 2020-09-22 | 深圳大学 | Remote sensing image data generation method, system and equipment |
CN112213750A (en) * | 2020-09-30 | 2021-01-12 | 珠海欧比特宇航科技股份有限公司 | Hyperspectral satellite film full-spectrum pixel-by-pixel imaging angle parameter processing method and medium |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102506827B (en) * | 2011-11-08 | 2013-07-03 | 中国科学院长春光学精密机械与物理研究所 | Registration and fusion method for high-frame-frequency images of multi-load photoelectric tracking measuring equipment |
US11125623B2 (en) * | 2017-06-26 | 2021-09-21 | L3 Cincinnati Electronics Corporation | Satellite onboard imaging systems and methods for space applications |
CN107610164B (en) * | 2017-09-11 | 2020-07-14 | 北京空间飞行器总体设计部 | High-resolution four-number image registration method based on multi-feature mixing |
CN113454677A (en) * | 2018-12-29 | 2021-09-28 | 长沙天仪空间科技研究院有限公司 | Remote sensing satellite system |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP2247094B1 (en) | Orthophotographic image creating method and imaging device | |
US7339614B2 (en) | Large format camera system with multiple coplanar focusing systems | |
KR101679456B1 (en) | Systems and methods of capturing large area images in detail including cascaded cameras andor calibration features | |
US5259037A (en) | Automated video imagery database generation using photogrammetry | |
US10122949B2 (en) | High-resolution camera unit for a drone, with correction of the wobble-type distortions | |
US8428344B2 (en) | System and method for providing mobile range sensing | |
JP3776787B2 (en) | 3D database generation system | |
CN104969238A (en) | Stereo assist with rolling shutters | |
JPH08159762A (en) | Method and apparatus for extracting three-dimensional data and stereo image forming apparatus | |
EP2719163A2 (en) | System and method for forming a video stream containing gis data in real-time | |
CN113220027B (en) | Concave polygon area unmanned aerial vehicle flight path planning based on remote sensing task | |
JP6616967B2 (en) | Map creation apparatus and map creation method | |
CN102640052A (en) | Multi-resolution digital large format camera with multiple detector arrays | |
CN114757978B (en) | Remote sensing satellite multi-camera multi-load image pairing method | |
WO2013191583A2 (en) | Method for producing an image of the surface of the earth from a moving carrier and a device for implementing same | |
Rottensteiner et al. | A strip adjustment approach for precise georeferencing of ALOS optical imagery | |
CN112288637A (en) | Unmanned aerial vehicle aerial image rapid splicing device and rapid splicing method | |
CN115731100A (en) | Image splicing method and system based on multiple unmanned aerial vehicles | |
JP3808833B2 (en) | Aerial photogrammetry | |
JP2000276045A (en) | Method and device for making map using photographed picture, and method of correcting distortion of photographed picture | |
CN117576343B (en) | Three-dimensional MESH model manufacturing method based on high-resolution satellite stereoscopic image | |
Blaser et al. | System design, calibration and performance analysis of a novel 360 stereo panoramic mobile mapping system | |
Cramer | The ADS40 Vaihingen/Enz geometric performance test | |
CN108665410B (en) | Image super-resolution reconstruction method, device and system | |
US10859377B2 (en) | Method for improving position information associated with a collection of images |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||