CN115775351A - Aviation image orthorectification multitask parallel processing method

Aviation image orthorectification multitask parallel processing method

Info

Publication number
CN115775351A
Authority
CN
China
Prior art keywords
image
ortho
sub
orthographic
coordinates
Prior art date
Legal status
Pending
Application number
CN202211489977.5A
Other languages
Chinese (zh)
Inventor
段延松
覃宇庭
Current Assignee
Wuhan University WHU
Original Assignee
Wuhan University WHU
Priority date
Filing date
Publication date
Application filed by Wuhan University WHU
Priority to CN202211489977.5A
Publication of CN115775351A

Landscapes

  • Image Processing (AREA)

Abstract

The invention discloses a multi-task parallel processing method for aerial image orthorectification. The method first determines the overall task target and plans a set of subtasks according to the conditions of the aerial survey area, the principle that parallel computing tasks must be mutually independent, and the principles of photogrammetric orthorectification; it then processes all subtasks in parallel on a multi-core, multi-CPU computing cluster; finally, it verifies the production results and sets the associated geographic information for the output. Because the orthoimage is processed in blocks, pixel-by-pixel processing of the whole orthoimage and the mosaicking of individual orthoimages are avoided, and parallel processing on multi-core computers yields high computation speed and high rectification accuracy.

Description

Multi-task parallel processing method for aerial image orthorectification
Technical Field
The invention belongs to the technical field of remote sensing, surveying and mapping, and in particular relates to a multi-task parallel processing method for aerial image orthorectification.
Background
Because an aerial image is formed by central projection, its scale varies from place to place, and under the influence of terrain relief, sensor construction and performance, and other factors, the geographic position of an aerial image pixel deviates from its true geographic position. To eliminate these image-point displacements and obtain an orthoimage with uniform scale, consistent accuracy and correct positioning, the aerial image must be orthorectified. Orthorectification converts a centrally projected image into an orthogonally projected image by digital differential rectification. Using a rigorous physical model or a generic empirical model together with the Digital Elevation Model (DEM) covering the image extent, tilt displacement and relief displacement can be corrected simultaneously to obtain an orthoimage. The resulting orthoimages are then mosaicked, dodged and color balanced, and finally clipped to a given extent to produce the final orthophoto map. An orthoimage produced by orthorectification has an accurate spatial position, can be registered with a topographic map, and offers rich information and a clear, intuitive image that is easy to interpret.
With the rapid development of aerospace and sensor technology, and in particular the introduction of unmanned aerial vehicles (UAVs), the efficiency of aerial image acquisition has improved greatly and the number and quality of images have grown explosively. The conventional workflow of producing an orthophoto map, in which each image is first rectified individually and the results are then mosaicked and clipped, can no longer keep up. At the same time, computer processing technology has also advanced greatly: multi-core and multi-CPU machines, GPUs, cluster computing and cloud computing have been adopted across many industries, and applying these technologies to aerial orthoimage production is an urgent task. However, because orthoimage production is highly specialized, using these computer technologies directly improves processing efficiency only to a limited extent and still falls short of seamless integration and a substantial efficiency gain.
Disclosure of Invention
To address the shortcomings of the prior art, the invention provides an aerial image orthorectification multi-task parallel processing method that jointly considers the requirements of professional photogrammetric processing algorithms and of computer parallel processing, combines the two organically, and makes full use of the high-performance parallel processing capability of modern computers. It greatly improves the overall production efficiency for aerial images and is well suited to the rapid generation of orthoimages from current UAV aerial photography.
To achieve the above object, the technical solution provided by the invention is an aerial image orthorectification multi-task parallel processing method comprising the following steps:
Step 1, estimating the average elevation of the aerial survey area and projecting the four corner points of each image onto the average elevation plane according to the interior and exterior orientation elements of the image, to obtain the ground coverage of each image;
Step 1.1, acquiring DEM data of the aerial photography area, estimating the average elevation of the survey area from the DEM, and using this elevation plane as the reference plane for subsequent processing;
Step 1.2, projecting the four corner coordinates of each image onto the reference plane according to the collinearity equation and the interior and exterior orientation elements recorded when the camera acquired the image, to obtain the coverage of each original image in the object-space coordinate system;
Step 2, aggregating the ground coverage of all images to obtain the overall extent of the aerial survey area, selecting within it the target area for producing the overall orthoimage, and computing the number of row and column pixels of the target overall orthoimage from the ground size of an orthoimage pixel;
Step 2.1, aggregating all the image ground coverages obtained in step 1.2 to obtain the overall extent of the survey area, and selecting a rectangular area as the target area for overall orthoimage production;
Step 2.2, determining the ground size of an orthoimage pixel according to the imaging geometry;
Step 2.3, dividing the side lengths of the target area determined in step 2.1 by the pixel ground size obtained in step 2.2 to compute the number of row and column pixels of the target overall orthoimage.
Step 3, dividing the target overall orthoimage into sub-image blocks according to the number of row and column pixels obtained in step 2, the computer memory and hard disk parameters, and the number of memory bytes occupied by each image pixel;
Step 3.1, determining the size of a sub-image block from the number of row and column pixels of the target overall orthoimage obtained in step 2 and from the computer memory and hard disk parameters, taking into account the memory bytes occupied by each image pixel;
Step 3.2, partitioning the target orthoimage into blocks of the size obtained in step 3.1 and counting the number of sub-orthoimage blocks;
Step 4, for each sub-orthoimage block obtained in step 3, searching all original images for the optimal original image according to the imaging geometry, thereby forming an orthorectification subtask;
Step 4.1, for each sub-orthoimage block obtained in step 3, calculating the ground coordinates corresponding to its center and its four corner points to obtain the ground coverage of the sub-orthoimage block;
Step 4.2, from the projected footprints of all original images, screening those that can fully contain the ground coverage of the sub-orthoimage block obtained in step 4.1;
Step 4.3, taking the average of the four corner coordinates of each projected footprint as its center, computing the distance from the centers of the footprints screened in step 4.2 to the center of the sub-image block, and selecting the original image whose projected center is closest as the optimal original image;
Step 4.4, for the original image selected in step 4.3, calculating the coordinates of the four corner points of the image block in the original image by inverse-solution digital differential rectification;
Step 4.5, calculating the original-image coordinates of the pixels inside the sub-orthoimage block by bilinear interpolation;
Step 4.6, performing bilinear gray-level interpolation at the obtained original-image coordinates of the sub-orthoimage block pixels and assigning the interpolated gray value to each corresponding pixel, thereby obtaining the gray values of the whole sub-orthoimage block;
Step 5, executing all subtasks in parallel; after all tasks have finished, checking the overall orthoimage data and setting geographic information such as the scale, sheet name, sheet number, geographic coordinate information and publication notes to form the final result.
Moreover, the collinearity equation used to calculate the object-space coordinates in step 1.2 is in the forward (positive-solution) form:
X = X_S + (Z - Z_S)·(a_1·x + a_2·y - a_3·f) / (c_1·x + c_2·y - c_3·f)
Y = Y_S + (Z - Z_S)·(b_1·x + b_2·y - b_3·f) / (c_1·x + c_2·y - c_3·f)        (1)
where (X, Y) are the object-space coordinates of a corner point and (x, y) its image-plane coordinates; X_S, Y_S, Z_S are the object-space coordinates of the exposure station; Z is the ground elevation corresponding to the corner point and is an unknown quantity, so during the calculation the average elevation of the whole aerial photography area is used as an approximation of Z and substituted into formula (1) to obtain the projected coverage; f is the principal distance, i.e. the distance from the projection center to the image plane; a_i, b_i, c_i (i = 1, 2, 3) are the nine direction cosines formed from the three angular exterior orientation elements of the image, computed as follows:
a_1 = cosφ·cosκ - sinφ·sinω·sinκ,   a_2 = -cosφ·sinκ - sinφ·sinω·cosκ,   a_3 = -sinφ·cosω
b_1 = cosω·sinκ,   b_2 = cosω·cosκ,   b_3 = -sinω
c_1 = sinφ·cosκ + cosφ·sinω·sinκ,   c_2 = -sinφ·sinκ + cosφ·sinω·cosκ,   c_3 = cosφ·cosω        (2)
where φ, ω and κ are the three angular exterior orientation elements of the image.
Moreover, in step 2.2 the ground size of an orthoimage pixel is determined from the imaging geometry by:
1/m = f/H = a/A,   i.e.   A = a·H/f        (3)
where m is the scale denominator of the orthoimage, a is the pixel (detector) size, A is the pixel ground size, f is the camera principal distance, and H is the flying height above the average elevation plane.
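As an illustrative check of formula (3) (the numbers here are hypothetical and not taken from the patent): a camera with pixel size a = 5 µm, principal distance f = 35 mm and flying height H = 700 m above the average elevation plane gives A = a·H/f = 0.000005 × 700 / 0.035 = 0.10 m, i.e. a ground sampling distance of about 10 cm, with scale denominator m = H/f = 20000.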
Furthermore, the ground coordinates corresponding to the center and the four corner points of each sub-orthoimage block in step 4.1 are calculated as:
X = X_0 + m·X'
Y = Y_0 + m·Y'        (4)
where (X', Y') are the coordinates of a point on the orthoimage, (X, Y) are the corresponding ground coordinates, (X_0, Y_0) are the ground coordinates of the center of the lower-left pixel of the target orthoimage, and m is the scale denominator of the orthoimage.
In step 4.4, the coordinates of the four corner points in the original image are calculated as:
x = x_0 - f·[a_1(X - X_S) + b_1(Y - Y_S) + c_1(Z - Z_S)] / [a_3(X - X_S) + b_3(Y - Y_S) + c_3(Z - Z_S)]
y = y_0 - f·[a_2(X - X_S) + b_2(Y - Y_S) + c_2(Z - Z_S)] / [a_3(X - X_S) + b_3(Y - Y_S) + c_3(Z - Z_S)]        (5)
where (x, y) are the coordinates of a corner point on the original image; (x_0, y_0) are the coordinates of the principal point of the original image; f is the principal distance, i.e. the distance from the projection center to the image plane; X_S, Y_S, Z_S are the object-space coordinates of the exposure station; a_i, b_i, c_i (i = 1, 2, 3) are the nine direction cosines formed from the three angular exterior orientation elements of the image; and Z is the elevation of the corner point, obtained by DEM interpolation.
In step 4.5, the original-image coordinates of the pixels inside a sub-orthoimage block are calculated by bilinear interpolation as:
x_ij = (1 - i/M)(1 - j/N)·x_1 + (1 - i/M)(j/N)·x_2 + (i/M)(1 - j/N)·x_3 + (i/M)(j/N)·x_4
y_ij = (1 - i/M)(1 - j/N)·y_1 + (1 - i/M)(j/N)·y_2 + (i/M)(1 - j/N)·y_3 + (i/M)(j/N)·y_4        (6)
where (x_1, y_1), (x_2, y_2), (x_3, y_3), (x_4, y_4) are the coordinates of the four corner points of the sub-orthoimage block on the original image (numbered so that corner 1 corresponds to row 0, column 0, corner 2 to row 0, column N, corner 3 to row M, column 0, and corner 4 to row M, column N), (x_ij, y_ij) are the coordinates of the pixel in row i and column j of the sub-orthoimage block, and M and N are the numbers of rows and columns of the sub-orthoimage block, respectively.
Compared with the prior art, the invention has the following advantages:
1) High efficiency: the orthoimage is processed in blocks, avoiding pixel-by-pixel digital differential rectification of the whole orthoimage and eliminating the orthoimage mosaicking step. 2) High speed: the gray-level computation of each image block is independent, so the blocks can be processed in parallel, which increases computation speed. 3) High accuracy: the forward overlap of UAV aerial images can reach 85% and the side overlap 65%; because the optimal original image is selected for digital differential rectification, the area covered by each image block can be regarded as being observed almost vertically, so the relief displacement is small and the solution accuracy is high.
Drawings
FIG. 1 is a technical flow chart of an embodiment of the present invention.
FIG. 2 is a schematic diagram of projecting an original image onto a plane according to the present invention.
FIG. 3 is a diagram illustrating selection of an optimal original image.
FIG. 4 is a schematic diagram of determining the position of a sub-image block in the original image by the inverse-solution method and performing gray-level interpolation according to the present invention.
Detailed Description
The invention provides an aerial image orthorectification multi-task parallel processing method. The method first estimates the average elevation of the aerial survey area and projects the four corner points of each image onto the average elevation plane according to the interior and exterior orientation elements of the image, obtaining the ground coverage of each image. By aggregating the ground coverage of all images, the overall extent of the survey area is obtained and used as the target area for overall orthoimage production, and the number of row and column pixels of the overall orthoimage is computed from the ground size of an orthoimage pixel determined by the imaging geometry. The resulting orthoimage is then divided into sub-image blocks according to common computer memory and hard disk parameters, taking into account the memory bytes occupied by each image pixel. For each block, the best original image is searched among all original images according to the imaging geometry, forming an orthorectification subtask. Finally, all subtasks are executed in parallel; after all tasks have finished, the overall orthoimage data is checked and the geographic information is set to form the final result.
The technical scheme of the invention is further explained by combining the attached drawings.
As shown in fig. 1, the process of the embodiment of the present invention includes the following steps:
Step 1, estimating the average elevation of the aerial survey area and projecting the four corner points of each image onto the average elevation plane according to the interior and exterior orientation elements of the image, to obtain the ground coverage of each image.
Step 1.1, acquiring DEM data of the aerial photography area, estimating the average elevation of the survey area from the DEM, and using this elevation plane as the reference plane for subsequent processing.
Step 1.2, projecting the four corner coordinates of each image onto the reference plane according to the collinearity equation and the interior and exterior orientation elements recorded when the camera acquired the image, to obtain the coverage of each original image in the object-space coordinate system. The collinearity equation is used in its forward (positive-solution) form:
X = X_S + (Z - Z_S)·(a_1·x + a_2·y - a_3·f) / (c_1·x + c_2·y - c_3·f)
Y = Y_S + (Z - Z_S)·(b_1·x + b_2·y - b_3·f) / (c_1·x + c_2·y - c_3·f)        (1)
where (X, Y) are the object-space coordinates of a corner point and (x, y) its image-plane coordinates; X_S, Y_S, Z_S are the object-space coordinates of the exposure station; Z is the ground elevation corresponding to the corner point and is an unknown quantity, so during the calculation the average elevation of the whole aerial photography area is used as an approximation of Z and substituted into formula (1) to obtain the projected coverage; f is the principal distance, i.e. the distance from the projection center to the image plane; a_i, b_i, c_i (i = 1, 2, 3) are the nine direction cosines formed from the three angular exterior orientation elements of the image, computed as follows:
a_1 = cosφ·cosκ - sinφ·sinω·sinκ,   a_2 = -cosφ·sinκ - sinφ·sinω·cosκ,   a_3 = -sinφ·cosω
b_1 = cosω·sinκ,   b_2 = cosω·cosκ,   b_3 = -sinω
c_1 = sinφ·cosκ + cosφ·sinω·sinκ,   c_2 = -sinφ·sinκ + cosφ·sinω·cosκ,   c_3 = cosφ·cosω        (2)
where φ, ω and κ are the three angular exterior orientation elements of the image.
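As an illustration of step 1, the following Python sketch (not part of the patent; all function and variable names are assumptions) builds the rotation matrix of formula (2) and projects the four image corners onto the average-elevation plane with the forward collinearity equation (1).

import numpy as np

def rotation_matrix(phi, omega, kappa):
    # Rows of R are (a1, a2, a3), (b1, b2, b3), (c1, c2, c3) of formula (2),
    # built as R = R_phi @ R_omega @ R_kappa (angles in radians).
    r_phi = np.array([[np.cos(phi), 0.0, -np.sin(phi)],
                      [0.0, 1.0, 0.0],
                      [np.sin(phi), 0.0, np.cos(phi)]])
    r_omega = np.array([[1.0, 0.0, 0.0],
                        [0.0, np.cos(omega), -np.sin(omega)],
                        [0.0, np.sin(omega), np.cos(omega)]])
    r_kappa = np.array([[np.cos(kappa), -np.sin(kappa), 0.0],
                        [np.sin(kappa), np.cos(kappa), 0.0],
                        [0.0, 0.0, 1.0]])
    return r_phi @ r_omega @ r_kappa

def image_footprint(corners_xy, f, Xs, Ys, Zs, R, Z_mean):
    # Forward collinearity, formula (1): image-plane corner (x, y) -> ground (X, Y)
    # on the average-elevation plane Z = Z_mean.
    footprint = []
    for x, y in corners_xy:
        den = R[2, 0] * x + R[2, 1] * y - R[2, 2] * f
        X = Xs + (Z_mean - Zs) * (R[0, 0] * x + R[0, 1] * y - R[0, 2] * f) / den
        Y = Ys + (Z_mean - Zs) * (R[1, 0] * x + R[1, 1] * y - R[1, 2] * f) / den
        footprint.append((X, Y))
    return footprint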
Step 2, aggregating the ground coverage of all images to obtain the overall extent of the aerial survey area, selecting within it the target area for producing the overall orthoimage, and computing the number of row and column pixels of the target overall orthoimage from the ground size of an orthoimage pixel.
Step 2.1, aggregating all the image ground coverages obtained in step 1.2 to obtain the overall extent of the survey area, and selecting a rectangular area as the target area for overall orthoimage production.
Step 2.2, determining the ground size of an orthoimage pixel according to the imaging geometry, with the formula:
1/m = f/H = a/A,   i.e.   A = a·H/f        (3)
where m is the scale denominator of the orthoimage, a is the pixel (detector) size, A is the pixel ground size, f is the camera principal distance, and H is the flying height above the average elevation plane.
Step 2.3, dividing the side lengths of the target area determined in step 2.1 by the pixel ground size obtained in step 2.2 to compute the number of row and column pixels of the target overall orthoimage.
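To illustrate step 2, a minimal sketch (assumed names, not the patent's code) that aggregates the per-image footprints from step 1.2 into the overall survey extent and converts the chosen rectangular target area into row and column pixel counts using the pixel ground size A from formula (3):

import math

def survey_extent(footprints):
    # footprints: list of per-image corner lists [(X, Y), ...] produced in step 1.2.
    xs = [X for fp in footprints for X, _ in fp]
    ys = [Y for fp in footprints for _, Y in fp]
    return min(xs), min(ys), max(xs), max(ys)     # Xmin, Ymin, Xmax, Ymax

def ortho_pixel_counts(target_extent, A):
    # target_extent: the rectangle chosen in step 2.1, here taken as (Xmin, Ymin, Xmax, Ymax).
    Xmin, Ymin, Xmax, Ymax = target_extent
    cols = math.ceil((Xmax - Xmin) / A)
    rows = math.ceil((Ymax - Ymin) / A)
    return rows, cols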
Step 3, dividing the target overall orthoimage into sub-image blocks according to the number of row and column pixels obtained in step 2, the computer memory and hard disk parameters, and the number of memory bytes occupied by each image pixel.
Step 3.1, determining the size N × N of a sub-image block from the number of row and column pixels of the target overall orthoimage obtained in step 2 and from the computer memory and hard disk parameters, taking into account the memory bytes occupied by each image pixel; in this embodiment N is 512.
Step 3.2, partitioning the target orthoimage into blocks of the size obtained in step 3.1 and counting the number of sub-orthoimage blocks.
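A minimal sketch of the tiling in steps 3.1 and 3.2 (the helper below is hypothetical): the target orthoimage is split into N × N sub-blocks, with smaller blocks at the right and bottom edges.

def make_subblocks(rows, cols, n=512):
    # Grid of N x N tiles over the target orthoimage; edge tiles may be smaller.
    blocks = []
    for r0 in range(0, rows, n):
        for c0 in range(0, cols, n):
            blocks.append((r0, c0, min(n, rows - r0), min(n, cols - c0)))
    return blocks     # each entry: (first row, first column, block rows, block columns)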
Step 4, for each sub-orthoimage block obtained in step 3, searching all original images for the optimal original image according to the imaging geometry, thereby forming an orthorectification subtask.
Step 4.1, for each sub-orthoimage block obtained in step 3, calculating the ground coordinates corresponding to its four corner points and to its center, with the formula:
X = X_0 + m·X'
Y = Y_0 + m·Y'        (4)
where (X', Y') are the coordinates of a point on the orthoimage, (X, Y) are the corresponding ground coordinates, (X_0, Y_0) are the ground coordinates of the center of the lower-left pixel of the target orthoimage, and m is the scale denominator of the orthoimage.
Step 4.2, from the projected footprints of all original images, screening those that can fully contain the ground coverage of the sub-orthoimage block obtained in step 4.1.
Step 4.3, taking the average of the four corner coordinates of each projected footprint as its center, computing the distance from the centers of the footprints screened in step 4.2 to the center of the sub-image block, and selecting the original image whose projected center is closest as the optimal original image.
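The following sketch illustrates steps 4.1 to 4.3 under stated simplifications (all names are assumptions, and the containment test uses the footprint's bounding box rather than the exact quadrilateral): compute the ground corners and center of a sub-block, keep the images whose footprints contain it, and choose the one whose footprint center is nearest.

def block_ground_geometry(r0, c0, m_rows, n_cols, X0, Y0, A):
    # Ground corners and center of a sub-block; (X0, Y0) is the ground position of the
    # lower-left pixel center of the target orthoimage, A the pixel ground size, and the
    # row index is assumed to increase with Y from that lower-left origin.
    corners = [(X0 + c * A, Y0 + r * A)
               for r in (r0, r0 + m_rows) for c in (c0, c0 + n_cols)]
    center = (X0 + (c0 + n_cols / 2.0) * A, Y0 + (r0 + m_rows / 2.0) * A)
    return corners, center

def select_best_image(corners, center, footprints):
    # footprints: {image_id: four (X, Y) corners} projected footprints from step 1.2.
    def contains(fp, pts):
        xs, ys = zip(*fp)                       # bounding-box test (a simplification)
        return all(min(xs) <= X <= max(xs) and min(ys) <= Y <= max(ys) for X, Y in pts)
    best_id, best_d2 = None, float("inf")
    for image_id, fp in footprints.items():
        if not contains(fp, corners):
            continue
        cx = sum(X for X, _ in fp) / 4.0        # footprint center = mean of its corners
        cy = sum(Y for _, Y in fp) / 4.0
        d2 = (cx - center[0]) ** 2 + (cy - center[1]) ** 2
        if d2 < best_d2:
            best_id, best_d2 = image_id, d2
    return best_id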
Step 4.4, for the original image selected in step 4.3, calculating the coordinates (x_1, y_1), (x_2, y_2), (x_3, y_3), (x_4, y_4) of the four corner points of the image block in the original image by inverse-solution digital differential rectification, using the collinearity equation in its inverse form:
x = x_0 - f·[a_1(X - X_S) + b_1(Y - Y_S) + c_1(Z - Z_S)] / [a_3(X - X_S) + b_3(Y - Y_S) + c_3(Z - Z_S)]
y = y_0 - f·[a_2(X - X_S) + b_2(Y - Y_S) + c_2(Z - Z_S)] / [a_3(X - X_S) + b_3(Y - Y_S) + c_3(Z - Z_S)]        (5)
where (x_0, y_0) are the coordinates of the principal point of the original image; f is the principal distance, i.e. the distance from the projection center to the image plane; X_S, Y_S, Z_S are the object-space coordinates of the exposure station; a_i, b_i, c_i (i = 1, 2, 3) are the nine direction cosines formed from the three angular exterior orientation elements of the image; and Z is the elevation of the corner point, obtained by DEM interpolation.
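A short sketch of the back-projection in step 4.4 (assumed names): the inverse collinearity equation (5), written with the rotation matrix R whose rows are (a_1, a_2, a_3), (b_1, b_2, b_3), (c_1, c_2, c_3).

def ground_to_image(X, Y, Z, f, x0, y0, Xs, Ys, Zs, R):
    # Inverse collinearity, formula (5).
    dX, dY, dZ = X - Xs, Y - Ys, Z - Zs
    den = R[0, 2] * dX + R[1, 2] * dY + R[2, 2] * dZ
    x = x0 - f * (R[0, 0] * dX + R[1, 0] * dY + R[2, 0] * dZ) / den
    y = y0 - f * (R[0, 1] * dX + R[1, 1] * dY + R[2, 1] * dZ) / den
    return x, y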
Step 4.5, calculating the original-image coordinates (x_ij, y_ij) of the pixels inside the sub-orthoimage block by bilinear interpolation, with the formula:
x_ij = (1 - i/M)(1 - j/N)·x_1 + (1 - i/M)(j/N)·x_2 + (i/M)(1 - j/N)·x_3 + (i/M)(j/N)·x_4
y_ij = (1 - i/M)(1 - j/N)·y_1 + (1 - i/M)(j/N)·y_2 + (i/M)(1 - j/N)·y_3 + (i/M)(j/N)·y_4        (6)
where (x_1, y_1), (x_2, y_2), (x_3, y_3), (x_4, y_4) are the coordinates of the four corner points of the sub-orthoimage block on the original image (numbered so that corner 1 corresponds to row 0, column 0, corner 2 to row 0, column N, corner 3 to row M, column 0, and corner 4 to row M, column N), (x_ij, y_ij) are the coordinates of the pixel in row i and column j of the sub-orthoimage block, and M and N are the numbers of rows and columns of the sub-orthoimage block, respectively.
Step 4.6, performing bilinear gray-level interpolation at the obtained original-image coordinates of the sub-orthoimage block pixels and assigning the interpolated gray value to each corresponding pixel, thereby obtaining the gray values of the whole sub-orthoimage block.
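Steps 4.5 and 4.6 can be sketched as follows (illustrative only; the corner ordering matches the assumption stated under formula (6), and the mapping of x/y to column/row is likewise an assumption): the interior coordinates are interpolated bilinearly from the four back-projected corners, then the gray value is resampled bilinearly from the original image.

import numpy as np

def interior_coords(corner_xy, M, N):
    # corner_xy: the four back-projected corners from step 4.4, in the assumed order
    # (row 0, col 0), (row 0, col N), (row M, col 0), (row M, col N).
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = corner_xy
    u = (np.arange(M) / M)[:, None]             # row weight i / M
    v = (np.arange(N) / N)[None, :]             # column weight j / N
    x = (1 - u) * (1 - v) * x1 + (1 - u) * v * x2 + u * (1 - v) * x3 + u * v * x4
    y = (1 - u) * (1 - v) * y1 + (1 - u) * v * y2 + u * (1 - v) * y3 + u * v * y4
    return x, y                                  # original-image coordinates, shape (M, N)

def resample_gray(img, x, y):
    # Bilinear gray-level interpolation; x is taken as the column coordinate and y as
    # the row coordinate of the original image (an assumed convention).
    c0 = np.clip(np.floor(x).astype(int), 0, img.shape[1] - 2)
    r0 = np.clip(np.floor(y).astype(int), 0, img.shape[0] - 2)
    dc, dr = x - c0, y - r0
    return ((1 - dr) * (1 - dc) * img[r0, c0] + (1 - dr) * dc * img[r0, c0 + 1]
            + dr * (1 - dc) * img[r0 + 1, c0] + dr * dc * img[r0 + 1, c0 + 1])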
Step 5, executing all subtasks in parallel; after all tasks have finished, checking the overall orthoimage data and setting geographic information such as the scale, sheet name, sheet number, geographic coordinate information and publication notes to form the final result.
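Since the sub-blocks are mutually independent, step 5 can be realized with a standard process pool; the sketch below (hypothetical task structure, not the patent's implementation) dispatches one subtask per block and leaves assembling the overall orthoimage and setting the geographic information to the caller.

from multiprocessing import Pool

def rectify_block(task):
    # One orthorectification subtask: 'task' is assumed to bundle the block position
    # and size, the selected original image and its orientation elements, so that the
    # per-block work of steps 4.4 to 4.6 needs no shared state.
    ...

def run_all_subtasks(tasks, workers=8):
    # The sub-blocks never overlap, so no synchronization between workers is needed;
    # the caller writes the returned blocks into the overall orthoimage and then sets
    # the scale, sheet name/number and other geographic information (step 5).
    with Pool(processes=workers) as pool:
        return pool.map(rectify_block, tasks)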
The specific embodiments described herein are merely illustrative of the spirit of the invention. Various modifications, additions and substitutions for the embodiments described may be made by those skilled in the art without departing from the spirit of the invention or exceeding the scope of the invention as defined in the accompanying claims.

Claims (10)

1. An aerial image ortho-rectification multitask parallel processing method is characterized by comprising the following steps:
step 1, estimating the average elevation of the aerial survey area and projecting the four corner points of each image onto the average elevation plane according to the interior and exterior orientation elements of the image, to obtain the ground coverage of each image;
step 2, aggregating the ground coverage of all images to obtain the overall extent of the aerial survey area, selecting within it the target area for overall orthoimage production, and computing the number of row and column pixels of the target overall orthoimage from the ground size of an orthoimage pixel;
step 3, dividing the target overall orthoimage into sub-image blocks according to the number of row and column pixels obtained in step 2, the computer memory and hard disk parameters, and the number of memory bytes occupied by each image pixel;
step 4, for each sub-orthoimage block obtained in step 3, searching all original images for the optimal original image according to the imaging geometry, thereby forming an orthorectification subtask;
step 5, executing all subtasks in parallel; after all tasks have finished, checking the overall orthoimage data and setting the geographic information to form the final result.
2. The aerial image ortho-rectification multitask parallel processing method as claimed in claim 1, characterized in that: step 1 comprises the following steps:
step 1.1, acquiring DEM data of the aerial photography area, estimating the average elevation of the survey area from the DEM, and using this elevation plane as the reference plane for subsequent processing;
step 1.2, projecting the four corner coordinates of each image onto the reference plane according to the collinearity equation and the interior and exterior orientation elements recorded when the camera acquired the image, to obtain the coverage of each original image in the object-space coordinate system.
3. The aerial image ortho-rectification multitask parallel processing method as claimed in claim 2, characterized in that: the collinearity equation used for calculating the object-space coordinates in step 1.2 is in the forward (positive-solution) form:
X = X_S + (Z - Z_S)·(a_1·x + a_2·y - a_3·f) / (c_1·x + c_2·y - c_3·f)
Y = Y_S + (Z - Z_S)·(b_1·x + b_2·y - b_3·f) / (c_1·x + c_2·y - c_3·f)        (1)
where (X, Y) are the object-space coordinates of a corner point and (x, y) its image-plane coordinates; X_S, Y_S, Z_S are the object-space coordinates of the exposure station; Z is the ground elevation corresponding to the corner point and is an unknown quantity, so during the calculation the average elevation of the whole aerial photography area is used as an approximation of Z and substituted into formula (1) to obtain the projected coverage; f is the principal distance, i.e. the distance from the projection center to the image plane; a_i, b_i, c_i (i = 1, 2, 3) are the nine direction cosines formed from the three angular exterior orientation elements of the image, computed as follows:
a_1 = cosφ·cosκ - sinφ·sinω·sinκ,   a_2 = -cosφ·sinκ - sinφ·sinω·cosκ,   a_3 = -sinφ·cosω
b_1 = cosω·sinκ,   b_2 = cosω·cosκ,   b_3 = -sinω
c_1 = sinφ·cosκ + cosφ·sinω·sinκ,   c_2 = -sinφ·sinκ + cosφ·sinω·cosκ,   c_3 = cosφ·cosω        (2)
where φ, ω and κ are the three angular exterior orientation elements of the image.
4. The aerial image ortho-rectification multitask parallel processing method as claimed in claim 1, characterized in that: step 2 comprises the following steps:
step 2.1, aggregating all the image ground coverages obtained in step 1.2 to obtain the overall extent of the survey area, and selecting a rectangular area as the target area for overall orthoimage production;
step 2.2, determining the ground size of an orthoimage pixel according to the imaging geometry, with the formula:
1/m = f/H = a/A,   i.e.   A = a·H/f        (3)
where m is the scale denominator of the orthoimage, a is the pixel (detector) size, A is the pixel ground size, f is the camera principal distance, and H is the flying height above the average elevation plane;
step 2.3, dividing the side lengths of the target area determined in step 2.1 by the pixel ground size obtained in step 2.2 to compute the number of row and column pixels of the target overall orthoimage.
5. The aerial image ortho-rectification multitask parallel processing method as claimed in claim 1, characterized in that: step 3 comprises the following steps:
step 3.1, determining the size N × N of a sub-image block from the number of row and column pixels of the target overall orthoimage obtained in step 2 and from the computer memory and hard disk parameters, taking into account the memory bytes occupied by each image pixel;
step 3.2, partitioning the target orthoimage into blocks of the size obtained in step 3.1 and counting the number of sub-orthoimage blocks.
6. The aerial image ortho-rectification multitask parallel processing method as claimed in claim 1, characterized in that: step 4 comprises the following steps:
step 4.1, for each sub-orthoimage block obtained in step 3, calculating the ground coordinates corresponding to its center and its four corner points to obtain the ground coverage of the sub-orthoimage block;
step 4.2, from the projected footprints of all original images, screening those that can fully contain the ground coverage of the sub-orthoimage block obtained in step 4.1;
step 4.3, taking the average of the four corner coordinates of each projected footprint as its center, computing the distance from the centers of the footprints screened in step 4.2 to the center of the sub-image block, and selecting the original image whose projected center is closest as the optimal original image;
step 4.4, for the original image selected in step 4.3, calculating the coordinates of the four corner points of the image block in the original image by inverse-solution digital differential rectification;
step 4.5, calculating the original-image coordinates of the pixels inside the sub-orthoimage block by bilinear interpolation;
step 4.6, performing bilinear gray-level interpolation at the obtained original-image coordinates of the sub-orthoimage block pixels and assigning the interpolated gray value to each corresponding pixel, thereby obtaining the gray values of the whole sub-orthoimage block.
7. The aerial image ortho-rectification multitask parallel processing method as claimed in claim 6, characterized in that: the ground coordinates corresponding to the center and the four corner points of each sub-orthoimage block in step 4.1 are calculated as:
X = X_0 + m·X'
Y = Y_0 + m·Y'        (4)
where (X', Y') are the coordinates of a point on the orthoimage, (X, Y) are the corresponding ground coordinates, (X_0, Y_0) are the ground coordinates of the center of the lower-left pixel of the target orthoimage, and m is the scale denominator of the orthoimage.
8. The aerial image ortho-rectification multitask parallel processing method as claimed in claim 7, characterized in that: the coordinates of the four corner points in the original image in step 4.4 are calculated as:
x = x_0 - f·[a_1(X - X_S) + b_1(Y - Y_S) + c_1(Z - Z_S)] / [a_3(X - X_S) + b_3(Y - Y_S) + c_3(Z - Z_S)]
y = y_0 - f·[a_2(X - X_S) + b_2(Y - Y_S) + c_2(Z - Z_S)] / [a_3(X - X_S) + b_3(Y - Y_S) + c_3(Z - Z_S)]        (5)
where (x, y) are the coordinates of a corner point on the original image; (x_0, y_0) are the coordinates of the principal point of the original image; f is the principal distance, i.e. the distance from the projection center to the image plane; X_S, Y_S, Z_S are the object-space coordinates of the exposure station; a_i, b_i, c_i (i = 1, 2, 3) are the nine direction cosines formed from the three angular exterior orientation elements of the image; and Z is the elevation of the corner point, obtained by DEM interpolation.
9. The aerial image ortho-rectification multitask parallel processing method as claimed in claim 8, characterized in that: the original-image coordinates of the pixels inside the sub-orthoimage block in step 4.5 are calculated as:
x_ij = (1 - i/M)(1 - j/N)·x_1 + (1 - i/M)(j/N)·x_2 + (i/M)(1 - j/N)·x_3 + (i/M)(j/N)·x_4
y_ij = (1 - i/M)(1 - j/N)·y_1 + (1 - i/M)(j/N)·y_2 + (i/M)(1 - j/N)·y_3 + (i/M)(j/N)·y_4        (6)
where (x_1, y_1), (x_2, y_2), (x_3, y_3), (x_4, y_4) are the coordinates of the four corner points of the sub-orthoimage block on the original image (numbered so that corner 1 corresponds to row 0, column 0, corner 2 to row 0, column N, corner 3 to row M, column 0, and corner 4 to row M, column N), (x_ij, y_ij) are the coordinates of the pixel in row i and column j of the sub-orthoimage block, and M and N are the numbers of rows and columns of the sub-orthoimage block, respectively.
10. The aerial image ortho-rectification multitask parallel processing method as claimed in claim 1, characterized in that: the geographic information set in step 5 comprises the scale, sheet name, sheet number, geographic coordinate information and publication notes.
CN202211489977.5A 2022-11-25 2022-11-25 Aviation image orthorectification multitask parallel processing method Pending CN115775351A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211489977.5A CN115775351A (en) 2022-11-25 2022-11-25 Aviation image orthorectification multitask parallel processing method

Publications (1)

Publication Number Publication Date
CN115775351A 2023-03-10

Family

ID=85390256


Similar Documents

Publication Publication Date Title
CA2395257C (en) Any aspect passive volumetric image processing method
US8315477B2 (en) Method and apparatus of taking aerial surveys
JP5389964B2 (en) Map information generator
US10789673B2 (en) Post capture imagery processing and deployment systems
CN110555813B (en) Rapid geometric correction method and system for remote sensing image of unmanned aerial vehicle
CN113192193A (en) High-voltage transmission line corridor three-dimensional reconstruction method based on Cesium three-dimensional earth frame
CN111003214B (en) Attitude and orbit refinement method for domestic land observation satellite based on cloud control
CN115187798A (en) Multi-unmanned aerial vehicle high-precision matching positioning method
CN112862966B (en) Method, device, equipment and storage medium for constructing surface three-dimensional model
CN116086411B (en) Digital topography generation method, device, equipment and readable storage medium
CN112330537A (en) Method for quickly splicing aerial images of unmanned aerial vehicle in emergency rescue activities
KR102159134B1 (en) Method and system for generating real-time high resolution orthogonal map for non-survey using unmanned aerial vehicle
CN113739767B (en) Method for producing orthoscopic image aiming at image acquired by domestic area array swinging imaging system
CN112750075A (en) Low-altitude remote sensing image splicing method and device
CN113415433A (en) Pod attitude correction method and device based on three-dimensional scene model and unmanned aerial vehicle
CN113034347A (en) Oblique photographic image processing method, device, processing equipment and storage medium
CN107705272A (en) A kind of high-precision geometric correction method of aerial image
CN111598930A (en) Color point cloud generation method and device and terminal equipment
CN110779517A (en) Data processing method and device of laser radar, storage medium and computer terminal
CN115775351A (en) Aviation image orthorectification multitask parallel processing method
Zhang et al. Tests and performance evaluation of DMC images and new methods for their processing
CN117934735A (en) Modeling method and device for relief surface elevation model
CN111561949A (en) Coordinate matching method for airborne laser radar and hyperspectral imager all-in-one machine
CN117745818A (en) Airport scene target positioning method, airport scene target positioning device, computer equipment and storage medium
JP6133057B2 (en) Geographic map landscape texture generation based on handheld camera images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination