CN114359045A - Image data processing method and processing device thereof, and unmanned equipment

Publication number: CN114359045A
Application number: CN202111488631.9A
Authority: CN (China)
Document language: Chinese (zh)
Inventor: 唐明 (Tang Ming)
Applicant / Current Assignee: Guangzhou Xaircraft Technology Co Ltd
Legal status: Pending

Abstract

The application provides an image data processing method, a processing device thereof, and an unmanned device. The image data processing method includes: establishing a global digital ortho image; acquiring, in real time from the moment the unmanned device starts shooting a scene, a first digital ortho image and a first weight map corresponding to N images, where N is a positive integer; and updating the global digital ortho image according to the first digital ortho image and the first weight map, so as to determine a target digital ortho image in the updated global digital ortho image. The method and device can effectively reduce the color difference of digital ortho images and improve the visual effect.

Description

Image data processing method and processing device thereof, and unmanned equipment
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a method and an apparatus for processing image data, and an unmanned device.
Background
At present, with the rapid development of unmanned technology, unmanned devices are widely used for shooting scenes such as farmland, tourist attractions, or cities, owing to advantages such as high mobility and a low safety risk.
When an unmanned device photographs a scene, an open-source scheme such as OpenRealm or Map2DFusion is usually adopted to generate a digital ortho image (DOM). However, the color differences in the digital ortho images generated by these open-source schemes are very obvious, and the visual effect cannot meet requirements.
Disclosure of Invention
In view of this, embodiments of the present application provide an image data processing method, an image data processing apparatus, and an unmanned device, which can effectively reduce color difference of a digital ortho-image and improve visual effect.
In a first aspect, an embodiment of the present application provides a method for processing image data. The image data processing method comprises the following steps: establishing a global digital orthoimage; the method comprises the steps that a first digital ortho-image and a first weight map corresponding to N images are obtained in real time from the start of scene shooting of unmanned equipment, wherein N is a positive integer; and updating the global digital ortho image according to the first digital ortho image and the first weight map so as to determine a target digital ortho image in the updated global digital ortho image.
In an embodiment of the present invention, the updating the global digital ortho image according to the first digital ortho image and the first weight map includes: determining a first image pyramid according to the first digital ortho-image; determining a first weight pyramid according to the first weight map; determining a second digital ortho-image and a second weight map corresponding to the second digital ortho-image from the global digital ortho-image according to the coordinate range of the first digital ortho-image; acquiring a second image pyramid according to the second digital orthographic image, and acquiring a second weight pyramid according to the second weight map; determining a target image pyramid corresponding to the target digital ortho-image based on a comparison result between a first weight value of the first weight pyramid and a second weight value of the second weight pyramid and a color value of the second image pyramid at any grid in the first digital ortho-image; determining a target digital ortho-image according to the target image pyramid; and updating the second digital ortho-image in the global digital ortho-image into the target digital ortho-image.
In an embodiment of the application, the determining a target image pyramid corresponding to the target digital ortho image based on a comparison result between a first weight value of the first weight pyramid and a second weight value of the second weight pyramid and a color value of the second image pyramid at any grid point of the first digital ortho image includes: at any grid point of the first digital ortho image, if the first weight value of the first weight pyramid is not greater than the second weight value of the second weight pyramid and the color value of the second image pyramid is not a zero value, determining that the second image pyramid is the target image pyramid corresponding to the target digital ortho image; and if the first weight value is greater than the second weight value or the color value of the second image pyramid is a zero value, determining that the first image pyramid is the target image pyramid corresponding to the target digital ortho image.
In an embodiment of the application, the determining the second digital ortho image and the second weight map corresponding to the second digital ortho image from the global digital ortho image according to the coordinate range of the first digital ortho image includes: determining the overlapping range of the first digital ortho-image and the global digital ortho-image according to the coordinate range of the first digital ortho-image; expanding the size of the global digital ortho-image according to the overlapping range to determine the expanded global digital ortho-image; and determining a second digital ortho image and a second weight map corresponding to the second digital ortho image from the expanded global digital ortho image according to the coordinate range of the first digital ortho image.
In an embodiment of the present application, the obtaining of the first digital ortho image and the first weight map corresponding to the N images in real time includes: when N is greater than 1, acquiring N third digital ortho images and N third weight maps corresponding to the N images in real time; splicing and fusing the N third digital ortho images into the first digital ortho image; and splicing and fusing the N third weight maps into the first weight map.
In an embodiment of the present application, the updating the global digital ortho image according to the first digital ortho image and the first weight map includes: and when N is greater than 1, updating the global digital ortho-images according to the first digital ortho-image corresponding to each image in the N images and the first weight map corresponding to the first digital ortho-image corresponding to each image in sequence according to the time sequence corresponding to the N images.
In an embodiment of the present application, the obtaining of the first digital ortho-image and the first weight map corresponding to the N images in real time includes: acquiring a first digital surface model corresponding to the N images in real time; and performing orthorectification on the first digital surface model to obtain a first digital orthoimage and a first weight map corresponding to the N images.
In an embodiment of the present application, the obtaining of the first digital surface model corresponding to the N images in real time includes: establishing a global digital surface model; calculating in real time according to the sparse point cloud or the dense point cloud corresponding to the N images to obtain a second digital surface model corresponding to the N images; updating a third digital surface model in the global digital surface model according to the second digital surface model corresponding to the N images to obtain a target digital surface model; and determining the target digital surface model as a first digital surface model corresponding to the N images.
In an embodiment of the application, the updating the third digital surface model in the global digital surface model according to the second digital surface model corresponding to the N images to obtain the target digital surface model includes: obtaining, from the second digital surface model corresponding to the N images, a first elevation value E1 of each grid point and the number of times f that a valid elevation value exists; calculating an updated second elevation value E2' of each grid point using the formula E2' = (f × E1 + E2) / (f + 1), wherein E2 is the second elevation value at the position corresponding to each grid point in the third digital surface model; and determining the target digital surface model according to the updated second elevation value E2' of each grid point.
In an embodiment of the present application, the method for processing image data further includes: establishing global tile data; slicing the target digital ortho image to obtain current tile data corresponding to the target digital ortho image; and updating the global tile data according to the current tile data to obtain updated global tile data.
In an embodiment of the present application, the updating global tile data according to current tile data includes: if the position of the first tile data in the current tile data is not overlapped with the positions of all the tile data in the global tile data, adding the first tile data in the global tile data; or if the first tile data is not a zero value and overlaps with the position of the second tile data in the global tile data, replacing the second tile data with the first tile data; or if the first tile data is equal to a zero value and overlaps with the location of the second tile data, ignoring the first tile data.
In a second aspect, an embodiment of the present application provides an apparatus for processing image data. The processing device comprises an establishing module, an obtaining module and an updating module. The establishing module is used for establishing a global digital orthoimage; the acquisition module is used for acquiring a first digital ortho-image and a first weight map corresponding to N images in real time from the start of scene shooting of the unmanned equipment, wherein N is a positive integer; the updating module is used for updating the global digital ortho image according to the first digital ortho image and the first weight map so as to determine a target digital ortho image.
In a third aspect, an embodiment of the present application provides a computer-readable storage medium, where a computer program is stored, and the computer program is configured to execute any one of the image data processing methods described in the first aspect.
In a fourth aspect, embodiments of the present application provide an unmanned device. The drone includes a processor and a memory for storing processor-executable instructions. Wherein the processor is configured to perform any one of the image data processing methods described in the first aspect above.
The embodiments of the application provide an image data processing method, an image data processing device, and an unmanned device. A global digital ortho image is established and is updated in real time according to a first digital ortho image and a first weight map that are acquired in real time, so that the color difference produced by directly stitching the first digital ortho image onto the global digital ortho image can be eliminated by using the first weight map. In addition, compared with computing the digital ortho image offline from all images of the whole scene, the present application can acquire the first digital ortho image corresponding to the N images in real time while the unmanned device is flying, acquiring the N images, and calculating the poses corresponding to the N images, which improves the speed of generating the digital ortho image.
Drawings
Fig. 1 is a schematic flowchart illustrating a method for processing image data according to a first embodiment of the present application.
Fig. 2 is a schematic flowchart illustrating a method for processing image data according to a second embodiment of the present application.
Fig. 3A is a schematic flowchart illustrating a method for processing image data according to a third embodiment of the present application.
Fig. 3B is a schematic diagram illustrating an overlapping range of the first digital ortho image and the global digital ortho image according to an embodiment of the present application.
Fig. 4 is a schematic flowchart illustrating a method for processing image data according to a fourth embodiment of the present application.
Fig. 5 is a schematic flowchart illustrating a method for processing image data according to a fifth embodiment of the present application.
Fig. 6 is a schematic flowchart illustrating a method for updating a third digital surface model in a global digital surface model according to an embodiment of the present application.
Fig. 7 is a schematic structural diagram of an apparatus for processing image data according to an embodiment of the present application.
Fig. 8 is a schematic structural diagram of an unmanned aerial vehicle according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Fig. 1 is a schematic flowchart illustrating a method for processing image data according to a first embodiment of the present application. The image data processing method can be executed by a controller or a processor on the unmanned device, and can also be executed by a user terminal such as a mobile phone, a tablet or a computer which is remotely connected with the unmanned device. Take the controller as an example. As shown in fig. 1, the processing method of the image data may include the following steps.
S110: and establishing a global digital ortho image.
In some embodiments, the controller may create the global digital orthophotos based on user instructions. In other embodiments, the controller may monitor whether the unmanned device starts to capture a scene, and create a global digital orthophoto image when it is monitored that the unmanned device starts to capture the scene.
When the current moment of acquiring the image is the first moment, the global digital ortho image may be a background image, and the background image may be selected by the user according to the habit, or may be set to any color by the user, for example, the background color in the background image may be set to black, the color value of the pixel corresponding to black may be a zero value, and the zero value may be represented by 0 or 000. When the current time of the captured image is a time other than the first time, such as an intermediate time or a final time, the global digital ortho image may include color information different from the background image, for example, a collection of all historical digital ortho images before the current time of the captured image.
S120: the method comprises the steps of obtaining a first digital ortho-image and a first weight map corresponding to N images in real time from the shooting of a scene by unmanned equipment, wherein N is a positive integer.
In some embodiments, every time the unmanned device acquires N images, the controller performs simultaneous or separate calculation on the N images, and then acquires the first digital orthophoto image and the first weight map corresponding to the N images. In other embodiments, a preset time interval may be set, the unmanned device acquires N images in each preset time interval, and the controller performs simultaneous or separate calculation on the N images to acquire the first digital orthophoto image and the first weight map corresponding to the image.
The N images may be obtained through photographic imaging, scanning imaging, radar imaging, or the like. N may be any positive integer greater than or equal to 1; the value of N is not specifically limited in this application, as long as the first digital ortho image and the first weight map corresponding to the N images can be obtained in real time. For example, assuming that the human eye can recognize 25 frames of images per second in real time, N may be set to 25, to any smaller positive integer such as 1, 5, 7, 10, or 20, or even to a larger value, so as to achieve real-time acquisition of the first digital ortho image and the first weight map corresponding to the N images. It should be understood that when N is less than 25, real time may also be referred to as quasi real time, so that quasi-real-time acquisition of the first digital ortho image and the first weight map corresponding to the N images can be achieved.
The number of the first digital ortho-image and the first weight map may be one or more, for example, when N is 1, the first digital ortho-image may be a single digital ortho-image corresponding to the current image, and when N is greater than 1, the first digital ortho-image may be multiple digital ortho-images corresponding to the multiple images, respectively, or may be a single digital ortho-image formed by splicing multiple digital ortho-images corresponding to the multiple images, which is not limited in this application.
S130: and updating the global digital ortho image according to the first digital ortho image and the first weight map so as to determine a target digital ortho image in the updated global digital ortho image.
Specifically, the target digital ortho image may be determined by stitching and fusing the first digital ortho image with the existing digital ortho images in the global digital ortho image, according to the overlapping range of the first digital ortho image and the global digital ortho image and according to the first weight map, thereby obtaining an updated global digital ortho image, and then extracting, from the updated global digital ortho image, the digital ortho image within the same coordinate range as the first digital ortho image.
Each pixel value in the first weight map reflects the distance from that pixel to the center of the image; it should be understood that the closer a pixel is to the image center, the larger its pixel value, that is, the larger its weight value and the higher its quality. The first weight map is used to eliminate the color difference produced by directly stitching the first digital ortho image onto the global digital ortho image. It should be understood that the global digital ortho image may be updated by fusing the first digital ortho image into the global digital ortho image with the first weight map, using a maximum-minimum fusion method, a weighted-average fusion method, a color-space-based fusion method, or a multi-scale image fusion method; how the global digital ortho image is updated according to the first digital ortho image and the first weight map is not specifically limited in this application.
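As an illustration of the weighted-average fusion mentioned above, the following is a minimal sketch (Python/NumPy, not code from the patent; the array layout, the accumulated-weight buffer global_w, the offset handling, and the function name fuse_weighted_average are assumptions introduced for the example) that blends a local ortho image into the overlapping region of a global ortho image using a per-pixel weight map.

```python
import numpy as np

def fuse_weighted_average(global_dom, global_w, local_dom, local_w, x0, y0):
    """Blend a local ortho image into global_dom at offset (x0, y0) by weighted average.

    global_dom: HxWx3 float array (global digital ortho image)
    global_w:   HxW   float array (accumulated weights of the global image)
    local_dom:  hxwx3 float array (first digital ortho image)
    local_w:    hxw   float array (first weight map)
    Assumes the local image fits inside the global canvas at the given offset.
    """
    h, w = local_w.shape
    roi = global_dom[y0:y0 + h, x0:x0 + w]
    roi_w = global_w[y0:y0 + h, x0:x0 + w]

    total_w = roi_w + local_w
    total_w_safe = np.where(total_w > 0, total_w, 1.0)  # avoid division by zero

    # Weighted average of existing and new colors at every grid point.
    blended = (roi * roi_w[..., None] + local_dom * local_w[..., None]) / total_w_safe[..., None]

    global_dom[y0:y0 + h, x0:x0 + w] = blended
    global_w[y0:y0 + h, x0:x0 + w] = total_w
    return global_dom, global_w
```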
The coordinate range of the first digital ortho image may be determined by coordinates of four vertices of the first digital ortho image, and the coordinates of the four vertices of the first digital ortho image may be determined according to an overlapping area of the first digital ortho image and the global digital ortho image. The target digital ortho image is the digital ortho image after the first digital ortho image is spliced and fused into the global digital ortho image by using the first weight map, so that the target digital ortho image can be determined according to the coordinate range of the first digital ortho image, and the coordinate ranges of the first digital ortho image and the target digital ortho image are the same.
The size of the global digital ortho image may be a default when being established, the global digital ortho image may be directly updated if the size of the global digital ortho image is sufficient to splice the first digital ortho image acquired in real time, and the size of the global digital ortho image may be adaptively extended according to the size of the first digital ortho image acquired in real time if the size of the global digital ortho image is insufficient to splice the first digital ortho image acquired in real time.
According to the technical solution provided by the embodiment of the application, a global digital ortho image is established and is updated in real time according to the first digital ortho image and the first weight map that are acquired in real time, so that the color difference produced by directly stitching the first digital ortho image onto the global digital ortho image can be eliminated by using the first weight map. In addition, compared with computing the digital ortho image offline from all images of the whole scene, the present application can acquire the first digital ortho image corresponding to the N images in real time while the unmanned device is flying, acquiring the N images, and calculating the poses corresponding to the N images, which improves the speed of generating the digital ortho image.
Fig. 2 is a schematic flowchart illustrating a method for processing image data according to a second embodiment of the present application. The embodiment shown in fig. 2 is a modification of the embodiment shown in fig. 1. As shown in fig. 2, the difference from the embodiment shown in fig. 1 is that steps S1301 to S1307 correspond to step S130 in the embodiment shown in fig. 1.
S1301: a first image pyramid is determined from the first digital ortho image.
Specifically, the first digital ortho image may be subjected to a multi-scale, multi-resolution decomposition to obtain a first image pyramid.
S1302: and determining a first weight pyramid according to the first weight map.
Specifically, the first weight map may be subjected to multi-scale, multi-resolution decomposition, thereby obtaining a first weight pyramid.
The first image pyramid may be a pyramid obtained by performing multi-scale, multi-resolution decomposition on the first digital ortho image; for example, the first image pyramid may be a Laplacian pyramid. The first weight pyramid may be a pyramid obtained by performing multi-scale, multi-resolution decomposition on the first weight map; for example, the first weight pyramid may be a Gaussian pyramid. The number of layers of the first image pyramid and the first weight pyramid may be set according to actual requirements, for example 3, 5, 6, or even more, which is not specifically limited in this application. The first image pyramid may be denoted L1^k(x, y) and the first weight pyramid w1^k(x, y), where k is the pyramid level and (x, y) denotes the grid point at row x and column y. For example, when the first image pyramid and the first weight pyramid have 6 layers, k = 1, 2, 3, 4, 5, 6. The types of the first image pyramid and the first weight pyramid may be adjusted according to the actual situation, which is not specifically limited in this application.
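For concreteness, the following is an illustrative sketch (Python with OpenCV, not taken from the patent; the function name build_pyramids, the level count, and the size assumptions are introduced for the example) of building a Laplacian pyramid for an ortho image and a Gaussian pyramid for its weight map, as described above.

```python
import cv2
import numpy as np

def build_pyramids(dom, weight, levels=6):
    """Return (laplacian_pyramid, gaussian_weight_pyramid), finest level first.

    dom: HxWx3 float32 ortho image; weight: HxW float32 weight map.
    Sizes are assumed to be divisible by 2**(levels - 1) to keep shapes simple.
    """
    # Gaussian pyramid of the image (used to derive the Laplacian pyramid).
    gauss = [dom.astype(np.float32)]
    for _ in range(levels - 1):
        gauss.append(cv2.pyrDown(gauss[-1]))

    # Laplacian pyramid: difference between each level and the upsampled coarser level.
    lap = []
    for k in range(levels - 1):
        up = cv2.pyrUp(gauss[k + 1], dstsize=(gauss[k].shape[1], gauss[k].shape[0]))
        lap.append(gauss[k] - up)
    lap.append(gauss[-1])  # coarsest level keeps the low-frequency residual

    # Gaussian pyramid of the weight map.
    w_pyr = [weight.astype(np.float32)]
    for _ in range(levels - 1):
        w_pyr.append(cv2.pyrDown(w_pyr[-1]))

    return lap, w_pyr
```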
S1303: and determining a second weight map corresponding to the second digital ortho image and the second digital ortho image from the global digital ortho image according to the coordinate range of the first digital ortho image.
S1304: and acquiring a second image pyramid according to the second digital orthographic image, and acquiring a second weight pyramid according to the second weight map.
In some embodiments, a corresponding coordinate range in the global digital ortho image may be determined according to the coordinate range of the first digital ortho image, thereby determining a second digital ortho image and a second weight map within the coordinate range, and further determining a second image pyramid and a second weight pyramid according to the second digital ortho image. In other embodiments, the image pyramid and the weight pyramid may be directly stored in the global digital ortho image, so that after the corresponding coordinate range in the global digital ortho image is determined, the second image pyramid and the second weight pyramid corresponding to the second digital ortho image in the coordinate range may be directly determined without determining the second digital ortho image and the second weight map in the coordinate range.
The second digital ortho image is the digital ortho image, before the first digital ortho image is stitched and fused in, within the coordinate range of the global digital ortho image that is the same as the coordinate range of the first digital ortho image. The second image pyramid is a pyramid obtained by performing multi-scale, multi-resolution decomposition on the second digital ortho image, and the second weight pyramid is a pyramid obtained by performing multi-scale, multi-resolution decomposition on the second weight map. The second image pyramid may be denoted L3^k(x, y) and the second weight pyramid w2^k(x, y).
S1305: and determining a target image pyramid corresponding to the target digital ortho image based on a comparison result between a first weight value of the first weight pyramid and a second weight value of the second weight pyramid and a color value of the second image pyramid at any grid in the first digital ortho image.
The comparison result between the first weight value of the first weight pyramid and the second weight value of the second weight pyramid may be that a difference between the first weight value of the first weight pyramid and the second weight value of the second weight pyramid is greater than a certain preset value, such as 0, 1, 5, or other values, or that a ratio between the first weight value of the first weight pyramid and the second weight value of the second weight pyramid is greater than a certain preset value, such as 0.5, 1, 5, or other values, which is not specifically limited in this application.
S1306: and determining a target digital ortho-image according to the target image pyramid.
In some embodiments, the target image pyramid may be Laplacian pyramid data. Starting from the top layer of the target image pyramid, the Gaussian pyramid image corresponding to each layer of the Laplacian pyramid may be restored layer by layer from top to bottom, and the finally restored lowest-layer image of the Gaussian pyramid is determined to be the target digital ortho image. Assuming that the target image pyramid has 6 layers, the reconstruction can be written as G^6(x, y) = L0^6(x, y), G^k(x, y) = L0^k(x, y) + Expand(G^(k+1))(x, y) for k = 5, 4, 3, 2, 1, where Expand denotes upsampling to the resolution of level k, and the target digital ortho image is G^1(x, y).
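The layer-by-layer restoration described above can be sketched as follows (an illustrative Python/OpenCV example, not the patent's code; the function name collapse_pyramid and its argument layout are assumptions). It collapses a Laplacian pyramid, finest level first, back into a full-resolution image.

```python
import cv2

def collapse_pyramid(lap):
    """Collapse a Laplacian pyramid (finest level first) into the full-resolution image."""
    img = lap[-1]  # start from the top (coarsest) layer
    for level in reversed(lap[:-1]):
        # Upsample the partial reconstruction and add the detail stored at this level.
        img = cv2.pyrUp(img, dstsize=(level.shape[1], level.shape[0])) + level
    return img  # lowest-layer image: the target digital ortho image
```

Applied to the target image pyramid produced by the per-grid-point selection described below, this yields the target digital ortho image.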
S1307: and updating the second digital ortho-image in the global digital ortho-image into the target digital ortho-image.
Specifically, the second digital ortho-image may be replaced with the target digital ortho-image, so that the updated global digital ortho-image can be obtained.
According to the technical scheme provided by the embodiment of the application, the target image pyramid corresponding to the target digital ortho-image is determined by combining the comparison result between the first weight value in the first weight pyramid and the second weight value in the second weight pyramid and combining the color value of the second image pyramid at any grid point, so that the highest quality of the target image pyramid can be ensured.
In addition, the second image pyramid may include color information (represented by color values) corresponding to previously acquired images, and a previously acquired image may fail to reflect the real color information of a certain grid point because of its resolution, shooting distance, or the like, that is, the color value at that grid point is zero. If only the weight values were compared, the real color information of that grid point contained in the first image pyramid might be discarded simply because the first weight value is smaller than the second weight value, and the real color information of the image would be lost.
Therefore, in the embodiment of the application, the size of the color value of the second image pyramid is combined while the size of the weight value is compared, so that the phenomenon that real color information contained in the first image pyramid and different from the real color information contained in the second image pyramid is lost due to the fact that the size of the weight value is only compared is avoided, the probability of mistakenly selecting pixels corresponding to grid points in the fusion process is reduced, and the fusion effect is improved.
In an embodiment of the present application, steps S1201 to S1203 correspond to step S120 in the embodiment shown in fig. 1.
S1201: and when N is greater than 1, acquiring N third digital orthoimages and N third weight maps corresponding to the N images in real time.
For example, assuming that N is 3, the N images include image 1, image 2, and image 3; the N third digital ortho images include the third digital ortho image 1 corresponding to image 1, the third digital ortho image 2 corresponding to image 2, and the third digital ortho image 3 corresponding to image 3; and the N third weight maps include the third weight map 1 corresponding to image 1, the third weight map 2 corresponding to image 2, and the third weight map 3 corresponding to image 3.
S1202: And splicing and fusing the N third digital ortho images into the first digital ortho image.
For example, the third digital ortho image 1, the third digital ortho image 2, and the third digital ortho image 3 are spliced and fused into one first digital ortho image.
S1203: And splicing and fusing the N third weight maps into the first weight map.
For example, the third weight map 1, the third weight map 2, and the third weight map 3 are spliced and fused into one first weight map.
It should be understood that the stitching in steps S1202 and S1203 may include, but is not limited to, stitching by using a rigid body transformation model, an affine transformation model, or a projective transformation model, and the fusing may include, but is not limited to, a weighted average fusing method, a maximum value and minimum value fusing method, a color space-based fusing method, or a multi-scale image fusing method, and this is not particularly limited in this application.
In the embodiment of the application, the N third digital ortho images corresponding to the N images are spliced and fused into the first digital ortho image, and the N third weight maps are spliced and fused into the first weight map, so that when a first digital ortho image and a first weight map are obtained in real time, only one updating process needs to be executed, that is, the global digital ortho image is updated according to the first digital ortho image and the first weight map, and thus the target digital ortho image in the updated global digital ortho image can be obtained.
Fig. 3A is a schematic flowchart illustrating a method for processing image data according to a third embodiment of the present application. Fig. 3B is a schematic diagram illustrating an overlapping range of the first digital orthogonal image and the global digital orthogonal image according to an embodiment of the present application. The embodiment shown in fig. 3A is a variation of the embodiment shown in fig. 2. As shown in fig. 3A, the difference from the embodiment shown in fig. 2 is that steps S13031 to S13033 correspond to step S1303 in the embodiment shown in fig. 2.
S13031: and determining the overlapping range of the first digital orthogonal image and the global digital orthogonal image according to the coordinate range of the first digital orthogonal image.
For example, referring to fig. 3B, if the length of the global digital ortho image is 3m and the width is n, the size of the global digital ortho image can be represented as 3m × n, and the coordinate range of the global digital ortho image can be represented by a1(0,0)、B1(3m,0)、C1(3m, n) and D1(0, n) are defined by these four vertices. The size of the first digital ortho image is 2m multiplied by n, and the coordinate range of the first digital ortho image is from A2(2m,0)、B2(4m,0)、C2(4m, n) and D2(2m, n) are defined by these four vertices. Thus, it is possible to provideAccording to the coordinate range of the first digital ortho image, the overlapping range of the first digital ortho image and the global digital ortho image can be determined to be A2(2m,0)、B1(3m,0)、C1(3m, n) and D2(2m, n) (i.e., the shaded portion shown in FIG. 3B). The overlapping range shown in fig. 3B is only exemplary, and may be according to any actually determined figure, and the present application is not limited to this.
S13032: and expanding the size of the global digital ortho image according to the overlapping range to determine the expanded global digital ortho image.
For example, referring to fig. 3B, the size of the global digital ortho image to be extended can be determined from the coordinate range defined by B1, B2, C2, and C1, so that its size is expanded from 3m × n to 4m × n, and the expanded global digital ortho image is thus the digital ortho image within the coordinate range defined by A1, B2, C2, and D1. The color values within the coordinate range defined by B1, B2, C2, and C1 may all be set to a zero value.
S13033: and determining a second digital ortho image and a second weight map corresponding to the second digital ortho image from the expanded global digital ortho image according to the coordinate range of the first digital ortho image.
According to the technical scheme provided by the embodiment of the application, the size of the global digital ortho image is not limited by the size of a single scene, so that the image data processing method provided by the embodiment of the application can be applied to different scenes, and the influence on the visual effect caused by the excessive background color displayed in the global digital ortho image can be avoided.
In an embodiment of the present application, steps S13051 and S13052 correspond to step S1305 in the embodiment shown in fig. 2.
S13051: at any grid point of the first digital ortho-image, if the first weight value of the first weight pyramid is not greater than the second weight value of the second weight pyramid, and the color value of the second image pyramid is not a zero value, determining that the second image pyramid is a target image pyramid corresponding to the target digital ortho-image.
S13052: and if the first weight value is greater than the second weight value or the color value of the second image pyramid is zero, determining that the first image pyramid is a target image pyramid corresponding to the target digital ortho-image.
For example, steps S13051 and S13052 can be simplified as determining the target image pyramid, denoted L0^k(x, y), by the following formula (1), where a zero value is represented by 0:
L0^k(x, y) = L3^k(x, y), if w1^k(x, y) ≤ w2^k(x, y) and L3^k(x, y) ≠ 0;
L0^k(x, y) = L1^k(x, y), if w1^k(x, y) > w2^k(x, y) or L3^k(x, y) = 0.    Formula (1)
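The per-grid-point selection in formula (1) can be sketched as follows (illustrative Python/NumPy; the function name is hypothetical, all pyramid levels are assumed to be stored as arrays of matching shape, and "zero color value" is interpreted here as all channels equal to 0 at that grid point, which is an assumption). It chooses, at each grid point and each level, between the first and the second image pyramid.

```python
import numpy as np

def select_target_pyramid(lap1, w_pyr1, lap2, w_pyr2):
    """Per-level, per-grid-point selection between two Laplacian pyramids (formula (1)).

    lap1/w_pyr1: first image pyramid and first weight pyramid.
    lap2/w_pyr2: second image pyramid and second weight pyramid (from the global image).
    """
    target = []
    for L1, w1, L2, w2 in zip(lap1, w_pyr1, lap2, w_pyr2):
        # Grid points where the second pyramid carries no color (all channels zero).
        second_is_zero = np.all(L2 == 0, axis=-1)
        keep_second = (w1 <= w2) & ~second_is_zero               # keep the existing (second) pyramid
        target.append(np.where(keep_second[..., None], L2, L1))  # otherwise take the first pyramid
    return target
```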
In the embodiment of the application, the image pyramid with a larger weight value is selected as the target image pyramid by combining the difference value between the first weight value in the first weight pyramid and the second weight value in the second weight pyramid at any grid point, so that the highest quality of the target image pyramid can be ensured. In addition, when the color value of the second image pyramid is zero, the first image pyramid is selected as the target image pyramid, so that the loss of real color information contained in the first image pyramid and different from that in the second image pyramid is avoided, the probability of wrong selection of pixels corresponding to grid points in the fusion process is reduced, and the fusion effect is improved.
Fig. 4 is a schematic flowchart illustrating a method for processing image data according to a fourth embodiment of the present application. The embodiment shown in fig. 4 is a modification of the embodiment shown in fig. 1. As shown in fig. 4, steps S1204 and S1205 may be an exemplary implementation of step S120 in the embodiment shown in fig. 1.
S1204: the method comprises the steps of obtaining a first digital surface model corresponding to N images in real time from the shooting of a scene by unmanned equipment.
In some embodiments, N = 1, and the first digital surface model corresponding to the current image may be obtained in real time. In other embodiments, N > 1 and the number of first digital surface models is one: the plurality of digital surface models corresponding to the plurality of images can be obtained in real time and then spliced and fused into the first digital surface model. In still other embodiments, N > 1 and the number of first digital surface models is N: the first digital surface model corresponding to each of the plurality of images can be acquired in real time.
S1205: and performing orthorectification on the first digital surface model to obtain a first digital orthoimage and a first weight map corresponding to the N images.
Specifically, the first digital ortho image and the first weight map corresponding to the N images can be obtained by performing orthorectification based on the first digital surface model and the poses corresponding to the N images, using methods such as general polynomial orthorectification or photo orthorectification. The color value in the first digital ortho image can be calculated with formula (2), where c is the color value in the first digital ortho image, I is the pixel value of the image corresponding to the first digital ortho image, P is the projection matrix of that image (the projection matrix reflects the pose of the image), and X is the spatial position of a grid point on the first digital ortho image. Each weight value in the first weight map can be calculated with formula (3), where w is the weight value of a grid point, d is the distance from the projected point to the center of the image, and γ is the distance from a boundary corner point of the image to its center.
c = I(P × X)    Formula (2)
w = 255 × (1 − d/γ)    Formula (3)
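The two formulas can be sketched as follows (illustrative Python/NumPy; homogeneous coordinates, nearest-neighbor sampling without bounds checking, and the function and array names are assumptions made for the example, not details taken from the patent).

```python
import numpy as np

def ortho_color_and_weight(image, P, X):
    """Formula (2): sample the image color at the projection of ground point X.
       Formula (3): weight from the distance of the projected point to the image center.

    image: HxWx3 array; P: 3x4 projection matrix (reflects the image pose); X: (x, y, z) ground position.
    """
    h, w = image.shape[:2]

    # Project the 3D grid-point position into the image (homogeneous coordinates).
    uvw = P @ np.append(np.asarray(X, dtype=float), 1.0)
    u, v = uvw[0] / uvw[2], uvw[1] / uvw[2]

    # c = I(P x X): nearest-neighbor sample of the pixel value at the projected location
    # (no bounds check, for brevity).
    c = image[int(round(v)), int(round(u))]

    # w = 255 x (1 - d / gamma): d is the distance to the image center,
    # gamma the distance from a corner of the image to its center.
    cx, cy = (w - 1) / 2.0, (h - 1) / 2.0
    d = np.hypot(u - cx, v - cy)
    gamma = np.hypot(cx, cy)
    weight = 255.0 * (1.0 - d / gamma)
    return c, weight
```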
In some embodiments, when there is a single first digital surface model, orthorectification may be performed on that first digital surface model to obtain the first digital ortho image and the first weight map corresponding to the N images. In other embodiments, when there are multiple first digital surface models, orthorectification may be performed on each of them respectively, so as to obtain the first digital ortho image and the first weight map corresponding to each of the N images.
In general, the open-source scheme Map2DFusion only performs a simple image transformation (warp) operation (e.g., a Euclidean, similarity, affine, or photogrammetric transformation) to obtain a digital ortho image. Such open-source schemes do not perform orthorectification, so the precision of the digital ortho images they produce is very low, and they cannot be applied in practice.
According to the technical scheme provided by the embodiment of the application, the first digital ortho-image and the first weight map corresponding to the N images are obtained by performing the ortho-correction on the first digital surface model, and the ortho-correction is adopted when the first digital ortho-image is generated in real time, so that the precision of the first digital ortho-image can be improved, and the realizability is improved.
In an embodiment of the present application, step S1308 corresponds to step S130 in the embodiment shown in fig. 1.
S1308: and when N is greater than 1, updating the global digital ortho image according to the first digital ortho image corresponding to each image and the first weight map corresponding to each image in the N images in sequence according to the time sequence corresponding to the N images.
For example, assuming that, in the order in which the unmanned device acquires them, the N images are image 1, image 2, and image 3: the global digital ortho image is first updated according to the first digital ortho image 1 and the first weight map 1 corresponding to image 1, obtaining the updated global digital ortho image 1. The updated global digital ortho image 1 is then updated according to the first digital ortho image 2 and the first weight map 2 corresponding to image 2, obtaining the updated global digital ortho image 2. The updated global digital ortho image 2 is then updated according to the first digital ortho image 3 and the first weight map 3 corresponding to image 3, obtaining the updated global digital ortho image 3. In some embodiments, the target digital ortho image may be determined from the updated global digital ortho image 3. In other embodiments, the target digital ortho image 1 may be determined from the updated global digital ortho image 1, the target digital ortho image 2 from the updated global digital ortho image 2, and the target digital ortho image 3 from the updated global digital ortho image 3.
In the embodiment of the application, the global digital ortho-image is updated according to the time sequence corresponding to the N images and the first weight map corresponding to each image in the N images, so that the color difference generated in the process of fusing the first digital ortho-image corresponding to each image into the global digital ortho-image can be eliminated in sequence, and the visual effect is further improved.
In an embodiment of the present application, the method for processing image data further includes steps S140 to S160.
S140: and establishing the global tile data.
It should be understood that step S140 may be performed simultaneously with step S110, or may be performed at any time before the slicing process is performed on the target digital ortho image, which is not specifically limited in this application.
When the current time of acquiring the image is the first time, the global tile data may be blank or may be any default tile data, and when the current time of acquiring the image is other than the first time, such as an intermediate time or a final time, the global tile data may include a set of all historical tile data before the current time of acquiring the image.
S150: and slicing the target digital ortho image to obtain current tile data corresponding to the target digital ortho image.
Specifically, the target digital ortho image may be divided into tiles of a fixed size (e.g., 256 × 256) in a tile format (e.g., jpg/png), so as to obtain the current tile data corresponding to the target digital ortho image. The current tile data may then be displayed at the front end of a web page.
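A minimal slicing sketch is given below (illustrative Python/NumPy; the 256 × 256 tile size comes from the example above, while the zero-padding of border tiles and the (row, column) keying are assumptions made for the example).

```python
import numpy as np

def slice_into_tiles(dom, tile_size=256):
    """Split an ortho image into fixed-size tiles keyed by (row, col)."""
    h, w = dom.shape[:2]
    tiles = {}
    for row in range(0, h, tile_size):
        for col in range(0, w, tile_size):
            tile = np.zeros((tile_size, tile_size, dom.shape[2]), dtype=dom.dtype)
            block = dom[row:row + tile_size, col:col + tile_size]
            tile[:block.shape[0], :block.shape[1]] = block  # pad border tiles with zero
            tiles[(row // tile_size, col // tile_size)] = tile
    return tiles
```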
S160: and updating the global tile data according to the current tile data to obtain updated global tile data.
It should be understood that, the manner of updating the global tile data according to the current tile data may be to add the current tile data to the global tile data, or to replace part of the tile data in the global tile data with the current tile data, or may omit the current tile data, and the specific updating manner may be determined according to actual conditions, as long as the updated global tile data can contain more image information, which is not specifically limited in this application.
According to the technical scheme provided by the embodiment of the application, the target digital ortho-image is sliced, so that current tile data corresponding to the target digital ortho-image is obtained, global tile data is updated according to the current tile data, and then the real-time generation and updating of the tile data can be realized, so that the global tile data contains the latest and more accurate tile data in real time.
Fig. 5 is a schematic flowchart illustrating a method for processing image data according to a fifth embodiment of the present application. As shown in fig. 5, steps S12041 to S12044 are an exemplary implementation of step S1204 in the embodiment shown in fig. 4.
S12041: and establishing a global digital surface model.
In some embodiments, the controller may build the global digital surface model according to a user's instructions. In other embodiments, the controller may monitor whether the drone starts to take a scene shot, and establish the global digital surface model when it is monitored that the drone starts to take a scene shot.
The global digital surface model may initially set all elevation values to an invalid value, which may be any value that can be distinguished from the elevation values appearing in a real digital surface model. For example, the invalid value may be set to -9999, or to other values such as -10000 or -8889. Step S12041 and step S110 may be performed simultaneously, or step S12041 may be performed at any time before step S12042, which is not specifically limited in this application.
S12042: and calculating in real time according to the sparse point cloud or the dense point cloud corresponding to the N images to obtain a second digital surface model corresponding to the N images.
In some embodiments, the sparse point clouds corresponding to the N images may be output in real time by simultaneous localization and mapping (SLAM), structure from motion (SFM), or the like, or the dense point clouds corresponding to the N images may be output in real time by semi-global matching (SGM) or a similar method, and then the second digital surface model corresponding to the N images may be obtained by real-time calculation from the sparse or dense point clouds.
The method for obtaining the second digital surface model corresponding to the N images through real-time calculation may adopt a triangulation method, an inverse distance weighting method, or the like, which is not specifically limited in this application. The second digital surface model may be expressed in the form of an elevation map.
S12043: and updating a third digital surface model in the global digital surface model according to the second digital surface model corresponding to the N images so as to obtain the target digital surface model.
Specifically, a third digital surface model in the global digital surface model, which has the same coordinate range as the second digital surface model, may be determined according to the coordinate range of the second digital surface model corresponding to the N images, and the second digital surface model and the third digital surface model corresponding to the N images may be fused, so that the target digital surface model may be obtained.
It should be understood that the size of the global digital surface model may be a default at the time of establishing, and the size of the global digital surface model may be adaptively adjusted when updating the third digital surface model in the global digital surface model, which is not specifically limited in this application.
S12044: and determining the target digital surface model as a first digital surface model corresponding to the N images.
Specifically, the target digital surface model may be assigned to the first digital surface models corresponding to the N images, or the first digital surface models corresponding to the N images may be replaced with the target digital surface model.
According to the technical scheme provided by the embodiment of the application, the third digital surface model in the global digital surface model is updated according to the second digital surface models corresponding to the N images, so that the existing third digital surface model in the global digital surface model can be updated in a covering manner through the new second digital surface model. In addition, compared with the first digital surface model corresponding to the N images determined without updating the existing third digital surface model in the global digital surface model, the target digital surface model is determined to be the first digital surface model corresponding to the N images, so that the first digital surface model corresponding to the N images and the global digital surface model can meet the requirement of visual effect better after being fused, and the first digital surface model corresponding to the N images is fused with other elevation values in the global digital surface model, so that the accuracy of the first digital surface model corresponding to the N images is improved.
In an embodiment of the present application, steps S1501 to S1503 are an exemplary implementation of step S160 described above.
S1501: and if the position of the first tile data in the current tile data is not overlapped with the positions of all the tile data in the global tile data, adding the first tile data in the global tile data.
For example, assume that the positions of all tile data in the current tile data are respectively denoted by (Ii, Ji), where Ii denotes the row number and Ji denotes the column number, and the positions of all tile data in the global tile data are respectively denoted by (Pi, Qi), where Pi denotes the row number and Qi denotes the column number. The first tile data is any tile data in the current tile data, and its position is denoted by (I1, J1). If (I1, J1) does not overlap with any of the (Pi, Qi), the first tile data is added to the global tile data.
S1502: replacing the second tile data with the first tile data if the first tile data is not a zero value and overlaps with a location of the second tile data in the global tile data.
The second tile data is any tile data in the global tile data. For example, assume that the position of the second tile data is denoted by (P1, Q1). If (I1, J1) and (P1, Q1) are the same (i.e., the two positions overlap) and the first tile data is not a zero value, the second tile data is replaced with the first tile data. The first tile data may be represented by a color value, and a zero value refers to the color value corresponding to the background color; for example, a zero value may represent a color value of 0 or 000.
S1503: if the first tile data is equal to a zero value and overlaps with the location of the second tile data, the first tile data is ignored.
For example, assume that (I1, J1) and (P1, Q1) are the same and the first tile data is equal to a zero value; then the second tile data in the global tile data may be left unchanged, that is, the first tile data is ignored.
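The three update rules S1501 to S1503 can be sketched as follows (illustrative Python; tiles are assumed to be NumPy arrays keyed by (row, col) as in the slicing sketch above, and is_zero_tile is a hypothetical helper that checks whether a tile contains only the zero background value).

```python
import numpy as np

def is_zero_tile(tile):
    """Hypothetical helper: a tile is 'zero' if every value equals the background (0)."""
    return not np.any(tile)

def update_global_tiles(global_tiles, current_tiles):
    """Merge current tile data into global tile data following rules S1501-S1503."""
    for pos, tile in current_tiles.items():
        if pos not in global_tiles:
            global_tiles[pos] = tile   # S1501: add tiles at non-overlapping positions
        elif not is_zero_tile(tile):
            global_tiles[pos] = tile   # S1502: replace the overlapping tile with non-zero new data
        else:
            pass                       # S1503: ignore zero-valued overlapping tiles
    return global_tiles
```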
In the embodiment of the application, if the position of the first tile data in the current tile data does not overlap with the position of any tile data in the global tile data, the first tile data is added to the global tile data, so that the updated global tile data includes all non-overlapping tile data. If the first tile data is not a zero value and overlaps with the position of the second tile data in the global tile data, the second tile data is replaced by the first tile data, which has a better visual effect: since the current tile data is obtained by slicing the target digital ortho image, whose color difference has been eliminated, the visual effect of the first tile data is better than that of the second tile data. Further, if the first tile data is equal to a zero value and overlaps with the position of the second tile data, the first tile data is ignored. Compared with directly replacing the second tile data with the first tile data in that case, this avoids losing the real color information in the second tile data and reduces the probability of erroneous updates when updating the global tile data.
Fig. 6 is a schematic flowchart illustrating a method for updating a third digital surface model in a global digital surface model according to an embodiment of the present application. As shown in fig. 6, steps S120431 to S120433 are an exemplary implementation of step S12043 in the embodiment shown in fig. 5.
S120431: obtaining a first elevation value E of each grid point from a second digital surface model corresponding to the N images1And the number of times f that a significant elevation value exists.
It should be understood that there may be no altitude information at some grid points, that is, the altitude value at the grid point may become an invalid value, and correspondingly, the number f of times that an effective altitude value exists is 0, if there is altitude information at some grid points, each altitude information is an effective altitude value, and the number f of times that an effective altitude value exists may be determined according to the statistical result.
S120432: using formulas
Figure BDA0003397585630000181
Calculating to obtain an updated second elevation value E of each grid point2', wherein, E2The second elevation value at the corresponding position of each grid point in the third digital surface model is obtained.
S120433: according to the updated second elevation value E of each grid point2' determining a target digital surface model.
In particular, the elevation value of each grid point in the third digital surface model may be updated to an updated second elevation value E of each grid point2', to determine an updated third digital surface model.
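A sketch of this grid-point update follows (illustrative Python/NumPy; it assumes the weighted-average form of the formula reconstructed above, arrays E1, f, and E2 of identical shape, and -9999 as the invalid-value marker from the earlier example — all assumptions made for illustration, not details confirmed by the patent).

```python
import numpy as np

INVALID = -9999.0  # invalid elevation marker (example value from the description)

def update_dsm(E1, f, E2):
    """Fuse the new elevations E1 (observed f times per grid point) into the existing E2."""
    E1 = np.asarray(E1, dtype=float)
    f = np.asarray(f, dtype=float)
    E2 = np.asarray(E2, dtype=float)

    # Weighted average where both models have data: E2' = (f * E1 + E2) / (f + 1).
    fused = (f * E1 + E2) / (f + 1.0)

    # Where the global model is still invalid, take the new elevation directly (assumption).
    fused = np.where(E2 == INVALID, np.where(f > 0, E1, INVALID), fused)

    # Where the new model has no valid elevation (f == 0), keep the existing value.
    fused = np.where((f == 0) & (E2 != INVALID), E2, fused)
    return fused
```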
According to the technical solution provided by the embodiment of the application, the updated second elevation value E2' of each grid point is calculated with the formula E2' = (f × E1 + E2) / (f + 1), and the target digital surface model is determined according to the updated second elevation value E2' of each grid point, so that the second digital surface model is fused with the third digital surface model by means of a weighted average. Since the weighted-average approach is simple and intuitive, the speed of fusing the second digital surface model and the third digital surface model in real time can be improved.
Fig. 7 is a schematic structural diagram of an apparatus for processing image data according to an embodiment of the present application. The image data processing apparatus 700 includes a creating module 710, an obtaining module 720, and an updating module 730. The creating module 710 is used for creating a global digital ortho image. The obtaining module 720 is configured to obtain, in real time, a first digital ortho-image and a first weight map corresponding to N images from the start of scene shooting by the unmanned device, where N is a positive integer; the updating module 730 is configured to update the global digital ortho image according to the first digital ortho image and the first weight map to determine a target digital ortho image in the updated global digital ortho image.
According to the technical scheme provided by the embodiment of the application, the global digital ortho image is established and is updated in real time according to the first digital ortho image and the first weight map acquired in real time, so that the first weight map can be used to eliminate the color difference that would result from directly stitching the first digital ortho image onto the global digital ortho image. In addition, compared with computing the digital ortho image offline for all images of the whole scene, the first digital ortho image corresponding to the N images can be acquired in real time while the unmanned device is in flight acquiring the N images and computing their poses, which improves the speed of computing the digital ortho image. The overall real-time pipeline is sketched below.
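A compact sketch of that loop follows (Python; the helper names establish_global_dom, compute_dom_and_weights and update_global_dom are placeholders for the steps described above, not names used by the application):

```python
def run_realtime_mapping(image_batches, establish_global_dom,
                         compute_dom_and_weights, update_global_dom):
    """Process batches of N images as they arrive from the unmanned device.

    establish_global_dom() -> empty global digital ortho image
    compute_dom_and_weights(batch) -> (first_dom, first_weight_map)
    update_global_dom(global_dom, first_dom, first_weight_map) -> global_dom
    """
    global_dom = establish_global_dom()          # establish the global DOM once
    for batch in image_batches:                  # each batch: N newly captured images
        first_dom, first_weights = compute_dom_and_weights(batch)
        global_dom = update_global_dom(global_dom, first_dom, first_weights)
    return global_dom
```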
In an embodiment of the present application, the updating module 730 is further configured to determine a first image pyramid according to the first digital ortho image; determine a first weight pyramid according to the first weight map; determine a second digital ortho image and a second weight map corresponding to the second digital ortho image from the global digital ortho image according to the coordinate range of the first digital ortho image; acquire a second image pyramid according to the second digital ortho image and a second weight pyramid according to the second weight map; determine, at any grid point in the first digital ortho image, a target image pyramid corresponding to the target digital ortho image based on a comparison result between a first weight value of the first weight pyramid and a second weight value of the second weight pyramid and on a color value of the second image pyramid; determine the target digital ortho image according to the target image pyramid; and update the second digital ortho image in the global digital ortho image to the target digital ortho image.
In an embodiment of the application, the updating module 730 is further configured to determine, at any grid point of the first digital ortho image, that the second image pyramid is the target image pyramid corresponding to the target digital ortho image if the first weight value of the first weight pyramid is not greater than the second weight value of the second weight pyramid and the color value of the second image pyramid is not a zero value; and to determine that the first image pyramid is the target image pyramid corresponding to the target digital ortho image if the first weight value is greater than the second weight value or the color value of the second image pyramid is a zero value.
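At a single pyramid level, this per-grid-point selection can be sketched as follows (Python with NumPy; single-channel arrays and an exact zero test for an "empty" color are assumptions made for illustration):

```python
import numpy as np

def select_level(first_img, first_w, second_img, second_w):
    """Choose, per grid point, whether the target level comes from the
    first (new) or the second (existing) image pyramid.

    Keeps the second pyramid's colour where its weight is at least as
    large as the first's and it already holds a non-zero colour value;
    otherwise takes the first pyramid's colour.
    """
    keep_second = (first_w <= second_w) & (second_img != 0)
    return np.where(keep_second, second_img, first_img)
```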
In an embodiment of the present application, the updating module 730 is further configured to determine an overlapping range of the first digital ortho image and the global digital ortho image according to the coordinate range of the first digital ortho image; expand the size of the global digital ortho image according to the overlapping range to determine an expanded global digital ortho image; and determine the second digital ortho image and the second weight map corresponding to the second digital ortho image from the expanded global digital ortho image according to the coordinate range of the first digital ortho image.
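A minimal sketch of the coordinate-range bookkeeping follows (Python; representing a coordinate range as an (xmin, ymin, xmax, ymax) tuple is an assumption made for illustration):

```python
def expanded_bounds(global_bounds, first_bounds):
    """Grow the global digital ortho image's coordinate range so that it
    fully covers the coordinate range of the first digital ortho image."""
    gx0, gy0, gx1, gy1 = global_bounds
    fx0, fy0, fx1, fy1 = first_bounds
    return (min(gx0, fx0), min(gy0, fy0), max(gx1, fx1), max(gy1, fy1))

def overlap_bounds(global_bounds, first_bounds):
    """Intersection of the two coordinate ranges; None if they do not overlap."""
    gx0, gy0, gx1, gy1 = global_bounds
    fx0, fy0, fx1, fy1 = first_bounds
    ox0, oy0 = max(gx0, fx0), max(gy0, fy0)
    ox1, oy1 = min(gx1, fx1), min(gy1, fy1)
    return (ox0, oy0, ox1, oy1) if ox0 < ox1 and oy0 < oy1 else None
```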
In an embodiment of the application, the obtaining module 720 is further configured to obtain, in real time, N third digital ortho images and N third weight maps corresponding to the N images when N > 1; splice and fuse the N third digital ortho images into the first digital ortho image; and splice and fuse the N third weight maps into the first weight map.
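The application does not fix a specific stitching rule; one simple illustrative way to fuse the N third digital ortho images and weight maps — chosen here purely as a sketch, under the assumption that the inputs are already resampled onto a common grid — is to keep, at every pixel, the contribution with the larger weight:

```python
import numpy as np

def fuse_doms(doms, weight_maps):
    """Fuse N aligned third digital ortho images and their weight maps into
    one first digital ortho image and one first weight map.

    doms and weight_maps are lists of single-channel NumPy arrays on a
    common grid; at each pixel the colour with the highest weight wins.
    """
    fused_dom = doms[0].copy()
    fused_w = weight_maps[0].copy()
    for dom, w in zip(doms[1:], weight_maps[1:]):
        take_new = w > fused_w                     # where the new image is more reliable
        fused_dom = np.where(take_new, dom, fused_dom)
        fused_w = np.maximum(w, fused_w)
    return fused_dom, fused_w
```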
In an embodiment of the application, the updating module 730 is further configured to, when N > 1, sequentially update the global digital ortho image, in the time sequence corresponding to the N images, according to the first digital ortho image corresponding to each image in the N images and the first weight map corresponding to that first digital ortho image.
In an embodiment of the present application, the obtaining module 720 is further configured to obtain a first digital surface model corresponding to the N images in real time, and perform orthorectification on the first digital surface model to obtain the first digital ortho image and the first weight map corresponding to the N images.
In an embodiment of the present application, the obtaining module 720 is further configured to establish a global digital surface model; calculate, in real time, a second digital surface model corresponding to the N images according to the sparse point cloud or the dense point cloud corresponding to the N images; update a third digital surface model in the global digital surface model according to the second digital surface model corresponding to the N images to obtain a target digital surface model; and determine the target digital surface model as the first digital surface model corresponding to the N images.
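A compact sketch of computing a digital surface model grid from such a point cloud is shown below (Python with NumPy; the regular grid layout, the per-cell running-mean rule and the use of NaN to mark cells without elevation are assumptions made for illustration, not details disclosed by the application). The returned per-cell count can serve as the number f of valid elevation observations used in the weighted-average update described above.

```python
import numpy as np

def point_cloud_to_dsm(points, bounds, cell_size):
    """Rasterise a sparse or dense point cloud into a DSM grid.

    points: iterable of (x, y, z); bounds: (xmin, ymin, xmax, ymax).
    Returns the per-cell mean elevation (NaN where no point fell) and the
    per-cell count of valid observations.
    """
    xmin, ymin, xmax, ymax = bounds
    cols = int(np.ceil((xmax - xmin) / cell_size))
    rows = int(np.ceil((ymax - ymin) / cell_size))
    dsm = np.full((rows, cols), np.nan)
    count = np.zeros((rows, cols), dtype=int)
    for x, y, z in points:
        c = int((x - xmin) / cell_size)
        r = int((y - ymin) / cell_size)
        if 0 <= r < rows and 0 <= c < cols:
            if count[r, c] == 0:
                dsm[r, c] = z                     # first valid observation
            else:
                # Running mean over all points falling into this cell.
                dsm[r, c] = (dsm[r, c] * count[r, c] + z) / (count[r, c] + 1)
            count[r, c] += 1
    return dsm, count
```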
In an embodiment of the application, the obtaining module 720 is further configured to obtain a first elevation value E1 of each grid point and the number of times f that a valid elevation value exists from the second digital surface model corresponding to the N images; calculate an updated second elevation value E2' of each grid point by using the formula (presented as image BDA0003397585630000201 in the original publication), where E2 is the second elevation value at the position corresponding to each grid point in the third digital surface model; and determine the target digital surface model according to the updated second elevation value E2' of each grid point.
In an embodiment of the present application, the establishing module 710 is further configured to establish global tile data. The image data processing apparatus further includes a slicing module 740. The slicing module 740 is configured to slice the target digital ortho image to obtain current tile data corresponding to the target digital ortho image. The updating module 730 is further configured to update the global tile data according to the current tile data to obtain updated global tile data.
In an embodiment of the present application, updating the global tile data according to the current tile data includes: if the position of the first tile data in the current tile data does not overlap with the position of any tile data in the global tile data, adding the first tile data to the global tile data; or, if the first tile data is not a zero value and overlaps with the position of the second tile data in the global tile data, replacing the second tile data with the first tile data; or, if the first tile data is equal to a zero value and overlaps with the position of the second tile data, ignoring the first tile data.
It should be understood that, for the specific working processes and functions of the establishing module 710, the obtaining module 720, the updating module 730, and the slicing module 740 in the foregoing embodiments, reference may be made to the description in the image data processing method provided in the foregoing embodiments of fig. 1 to 6, and in order to avoid repetition, details are not described here again.
Fig. 8 is a schematic structural diagram of an unmanned device according to an embodiment of the present application.
Referring to fig. 8, the unmanned device 800 includes a processor 810, which may further include one or more processors, and memory resources, represented by a memory 820, for storing instructions, such as application programs, executable by the processor 810. The application programs stored in the memory 820 may include one or more modules, each of which corresponds to a set of instructions. Further, the processor 810 is configured to execute the instructions to perform any of the image data processing methods described above.
The unmanned device 800 may also include a power component configured to perform power management for the unmanned device 800, a wired or wireless network interface configured to connect the unmanned device 800 to a network, and an input/output (I/O) interface. The unmanned device 800 may operate based on an operating system stored in the memory 820, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
A non-transitory computer-readable storage medium is also provided; when instructions in the storage medium are executed by the processor of the unmanned device 800, the unmanned device 800 can perform the image data processing method. The image data processing method may be executed by an agent program and includes: establishing a global digital ortho image; obtaining, in real time, a first digital ortho image and a first weight map corresponding to N images from the start of scene shooting by the unmanned device, where N is a positive integer; and updating the global digital ortho image according to the first digital ortho image and the first weight map, so as to determine a target digital ortho image in the updated global digital ortho image.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the several embodiments provided in the present application, it should be understood that the disclosed unmanned device, processing apparatus, and processing method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative: the division of the modules is only one logical division, and other divisions may be used in practice; for example, a plurality of modules may be combined or integrated into another system, or some features may be omitted or not executed.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the part of the technical solution of the present application that substantially contributes to the prior art may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods according to the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program codes, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the processing apparatus and the unmanned device described above may refer to corresponding processes in the foregoing method embodiments, and are not described herein again.
It should be noted that, in the description of the present application, the terms "first", "second", etc. are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. In addition, in the description of the present application, "a plurality" means two or more unless otherwise specified.
It should be noted that the combination of the features in the embodiments of the present application is not limited to the combination described in the embodiments of the present application or the combination described in the specific embodiments, and all the features described in the present application may be freely combined or combined in any manner unless contradictory to each other.
The above description is only exemplary of the present application and should not be taken as limiting the present application, as any modifications, equivalents and the like that are within the spirit and principle of the present application should be included in the scope of the present application.

Claims (14)

1. A method of processing image data, comprising:
establishing a global digital orthoimage;
the method comprises the steps that a first digital ortho-image and a first weight map corresponding to N images are obtained in real time from the start of scene shooting of unmanned equipment, wherein N is a positive integer;
and updating the global digital ortho image according to the first digital ortho image and the first weight map so as to determine a target digital ortho image in the updated global digital ortho image.
2. The processing method of claim 1, wherein said updating the global digital ortho image according to the first digital ortho image and the first weight map comprises:
determining a first image pyramid according to the first digital ortho-image;
determining a first weight pyramid according to the first weight map;
determining a second digital ortho image and a second weight map corresponding to the second digital ortho image from the global digital ortho image according to the coordinate range of the first digital ortho image;
acquiring a second image pyramid according to the second digital ortho-image, and acquiring a second weight pyramid according to the second weight map;
determining a target image pyramid corresponding to a target digital ortho-image based on a comparison result between a first weight value of the first weight pyramid and a second weight value of the second weight pyramid, and a color value of the second image pyramid, at any grid in the first digital ortho-image;
determining the target digital ortho-image according to the target image pyramid;
updating the second digital ortho-image in the global digital ortho-image to the target digital ortho-image.
3. The processing method of claim 2, wherein the determining a target image pyramid corresponding to a target digital ortho image based on a comparison result between a first weight value of the first weight pyramid and a second weight value of the second weight pyramid and a color value of the second image pyramid at any grid in the first digital ortho image comprises:
at any grid point of the first digital ortho-image, if a first weight value of the first weight pyramid is not greater than a second weight value of the second weight pyramid, and a color value of the second image pyramid is not a zero value, determining that the second image pyramid is a target image pyramid corresponding to the target digital ortho-image;
if the first weight value is greater than the second weight value or the color value of the second image pyramid is zero, determining that the first image pyramid is a target image pyramid corresponding to the target digital ortho-image.
4. The processing method according to claim 2, wherein determining a second digital ortho image and a second weight map corresponding to the second digital ortho image from the global digital ortho image according to the coordinate range of the first digital ortho image comprises:
determining the overlapping range of the first digital orthoimage and the global digital orthoimage according to the coordinate range of the first digital orthoimage;
expanding the size of the global digital ortho image according to the overlapping range to determine an expanded global digital ortho image;
and determining the second digital ortho-image and the second weight map corresponding to the second digital ortho-image from the expanded global digital ortho-image according to the coordinate range of the first digital ortho-image.
5. The processing method according to claim 1, wherein the acquiring in real time a first digital ortho image and a first weight map corresponding to the N images comprises:
when N is greater than 1, acquiring N third digital orthoimages and N third weight maps corresponding to the N images in real time;
splicing and fusing the N third digital ortho images into the first digital ortho image;
and splicing and fusing the N third weight maps into the first weight map.
6. The processing method of claim 1, wherein said updating the global digital ortho image according to the first digital ortho image and the first weight map comprises:
and when N is greater than 1, sequentially updating the global digital ortho-image, in the time sequence corresponding to the N images, according to the first digital ortho-image corresponding to each image in the N images and the first weight map corresponding to that first digital ortho-image.
7. The processing method according to claim 1, wherein the acquiring in real time a first digital ortho image and a first weight map corresponding to the N images comprises:
acquiring a first digital surface model corresponding to the N images in real time;
and performing orthorectification on the first digital surface model to obtain a first digital orthoimage and a first weight map corresponding to the N images.
8. The processing method of claim 7, wherein said obtaining a first digital surface model corresponding to said N images in real time comprises:
establishing a global digital surface model;
calculating in real time according to the sparse point cloud or the dense point cloud corresponding to the N images to obtain a second digital surface model corresponding to the N images;
updating a third digital surface model in the global digital surface model according to the second digital surface model corresponding to the N images to obtain a target digital surface model;
and determining the target digital surface model as a first digital surface model corresponding to the N images.
9. The processing method of claim 8, wherein said updating a third digital surface model in said global digital surface model according to a second digital surface model corresponding to said N images to obtain a target digital surface model comprises:
obtaining a first elevation value E1 of each grid point and the number of times f that a valid elevation value exists from the second digital surface model corresponding to the N images;
calculating an updated second elevation value E2' of each grid point by using the formula (presented as image FDA0003397585620000031 in the original publication), wherein E2 is a second elevation value at a position corresponding to each grid point in the third digital surface model;
determining the target digital surface model according to the updated second elevation value E2' of each grid point.
10. The processing method according to any one of claims 1 to 9, further comprising:
establishing global tile data;
slicing the target digital ortho image to obtain current tile data corresponding to the target digital ortho image;
and updating the global tile data according to the current tile data to obtain updated global tile data.
11. The processing method of claim 10, wherein said updating the global tile data according to the current tile data comprises:
if the position of the first tile data in the current tile data does not overlap with the position of any tile data in the global tile data, adding the first tile data to the global tile data; or,
if the first tile data is not a zero value and overlaps with a position of second tile data in the global tile data, replacing the second tile data with the first tile data; or,
if the first tile data is equal to a zero value and overlaps with the position of the second tile data, ignoring the first tile data.
12. An apparatus for processing image data, comprising:
the establishing module is used for establishing a global digital ortho-image;
the acquisition module is used for acquiring a first digital ortho-image and a first weight map corresponding to N images in real time from the start of scene shooting by the unmanned equipment, wherein N is a positive integer;
and the updating module is used for updating the global digital ortho image according to the first digital ortho image and the first weight map so as to determine a target digital ortho image in the updated global digital ortho image.
13. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program for executing a method of processing image data according to any one of claims 1 to 11.
14. An unmanned device, comprising:
a processor;
a memory for storing the processor-executable instructions,
wherein the processor is configured to execute a processing method of image data according to any one of the preceding claims 1 to 11.
CN202111488631.9A 2021-12-07 2021-12-07 Image data processing method and processing device thereof, and unmanned equipment Pending CN114359045A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111488631.9A CN114359045A (en) 2021-12-07 2021-12-07 Image data processing method and processing device thereof, and unmanned equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111488631.9A CN114359045A (en) 2021-12-07 2021-12-07 Image data processing method and processing device thereof, and unmanned equipment

Publications (1)

Publication Number Publication Date
CN114359045A true CN114359045A (en) 2022-04-15

Family

ID=81097846

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111488631.9A Pending CN114359045A (en) 2021-12-07 2021-12-07 Image data processing method and processing device thereof, and unmanned equipment

Country Status (1)

Country Link
CN (1) CN114359045A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116071273A (en) * 2023-03-09 2023-05-05 中国科学院空天信息创新研究院 Method for processing color consistency of orthophoto based on extended update area

Similar Documents

Publication Publication Date Title
CN110160502B (en) Map element extraction method, device and server
CN110135455B (en) Image matching method, device and computer readable storage medium
WO2019161813A1 (en) Dynamic scene three-dimensional reconstruction method, apparatus and system, server, and medium
CN112434709B (en) Aerial survey method and system based on unmanned aerial vehicle real-time dense three-dimensional point cloud and DSM
KR101195942B1 (en) Camera calibration method and 3D object reconstruction method using the same
US8259994B1 (en) Using image and laser constraints to obtain consistent and improved pose estimates in vehicle pose databases
CN110910493B (en) Three-dimensional reconstruction method and device and electronic equipment
JP6733267B2 (en) Information processing program, information processing method, and information processing apparatus
WO2011049046A1 (en) Image processing device, image processing method, image processing program, and recording medium
WO2019144300A1 (en) Target detection method and apparatus, and movable platform
CN111127524A (en) Method, system and device for tracking trajectory and reconstructing three-dimensional image
CN110648363A (en) Camera posture determining method and device, storage medium and electronic equipment
CN112083403B (en) Positioning tracking error correction method and system for virtual scene
JP2022509329A (en) Point cloud fusion methods and devices, electronic devices, computer storage media and programs
JP3618649B2 (en) An extended image matching method between images using an indefinite window
CN107862652B (en) Aerial view generation method and device
JP2020008802A (en) Three-dimensional map generation device and three-dimensional map generation method
CN114359045A (en) Image data processing method and processing device thereof, and unmanned equipment
KR102475790B1 (en) Map making Platform apparatus and map making method using the platform
CN113808269A (en) Map generation method, positioning method, system and computer readable storage medium
CN113034347A (en) Oblique photographic image processing method, device, processing equipment and storage medium
CN113129422A (en) Three-dimensional model construction method and device, storage medium and computer equipment
WO2021051220A1 (en) Point cloud fusion method, device, and system, and storage medium
JP2015005220A (en) Information display device and information display method
CN115345990A (en) Oblique photography three-dimensional reconstruction method and device for weak texture scene

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination