WO2021115124A1 - Three-dimensional reconstruction method based on edge-cloud collaboration for cultivated land in the field - Google Patents

Three-dimensional reconstruction method based on edge-cloud collaboration for cultivated land in the field

Info

Publication number
WO2021115124A1
WO2021115124A1 PCT/CN2020/131409 CN2020131409W WO2021115124A1 WO 2021115124 A1 WO2021115124 A1 WO 2021115124A1 CN 2020131409 W CN2020131409 W CN 2020131409W WO 2021115124 A1 WO2021115124 A1 WO 2021115124A1
Authority
WO
WIPO (PCT)
Prior art keywords
reconstruction
image
cloud
dimensional
edge
Prior art date
Application number
PCT/CN2020/131409
Other languages
English (en)
French (fr)
Inventor
胡月明
陈春
徐驰
刘江川
陈联诚
张飞扬
张瑞
Original Assignee
华南农业大学
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华南农业大学 filed Critical 华南农业大学
Priority to US17/594,450 priority Critical patent/US11763522B2/en
Publication of WO2021115124A1 publication Critical patent/WO2021115124A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20Finite element generation, e.g. wire-frame surface description, tesselation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05Geographic models
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/55Depth or shape recovery from multiple images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • G06F2009/45595Network integration; Enabling network access in virtual machine instances
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00Indexing scheme for image data processing or generation, in general
    • G06T2200/08Indexing scheme for image data processing or generation, in general involving all processing steps from image acquisition to 3D model generation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10032Satellite or aerial image; Remote sensing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2210/00Indexing scheme for image generation or computer graphics
    • G06T2210/08Bandwidth reduction

Definitions

  • the invention belongs to the technical field of image processing, and relates to a three-dimensional reconstruction method of farmland field edge-cloud collaboration.
  • Agricultural projects are difficult to track and supervise in real time and efficiently.
  • agriculture, rural areas, and farmland are large in scale and widely distributed, and the scope of monitoring and inspection of complete agricultural projects is too large.
  • government inspection agencies can only sample a very small proportion of the data for inspection, while long-term tracking and monitoring can only rely passively on written materials being reported up level by level, so the efficiency is difficult to improve further.
  • the two-dimensional image material captured by the UAV can be turned, through three-dimensional reconstruction, into a stereoscopic farmland model that approximates a close-range bird's-eye view. Such a model can largely assist on-site manual visual inspection: fields, wasteland, trees, buildings, field roads, and irrigation ditches can be seen clearly together with their geographic location information, and on the basis of these models further research can build systems that automatically identify and compute the length, area, and damage status of these features.
  • however, reconstructing a three-dimensional model usually requires an enormous amount of computation and the support of high-performance computing equipment. Given the battery capacity of existing commercial UAVs, a UAV can fly for only about 20 minutes; if conventional methods were used to stitch the images and then reconstruct the cultivated land in three dimensions, the computation would take at least several hours, which is unrealistic for on-site monitoring, acceptance, and auditing. In addition, the technique places heavy demands on bandwidth and network quality: once acquisition moves into an area with poor network coverage, data cannot be uploaded in real time, which severely delays the reconstruction.
  • to overcome the above shortcomings of the prior art, the present invention provides an edge-cloud collaborative three-dimensional reconstruction method for cultivated land in the field, mainly oriented to agricultural project monitoring scenarios.
  • for the 3D reconstruction of cultivated land, edge computing devices are introduced to perform preliminary (ahead-of-time) computation, and the advantages of parallel computing on the cloud are fully exploited to reduce the reconstruction time and data transmission volume of the 3D model, so as to improve the response speed and quality of the 3D reconstruction results.
  • the technical solution adopted by the present invention to solve its technical problem is to provide an edge-cloud collaborative three-dimensional reconstruction method for cultivated land in the field, including:
  • S1. acquiring images of cultivated land captured by a UAV and transmitting the images to an edge computing device;
  • S2. the edge computing device performs image metadata extraction and corresponding image preprocessing;
  • S3. the edge computing device divides the preprocessed image data set;
  • S4. three-dimensional reconstruction containers are orchestrated and deployed in the cloud data center according to the division result, and the edge computing device transmits the images required by each corresponding sub-task to the corresponding container;
  • S5. each three-dimensional reconstruction container in the cloud data center performs the same three-dimensional reconstruction steps on its assigned images to generate a three-dimensional sub-model;
  • S6. georeferenced coordinates of the three-dimensional model are established on the cloud data center, and a digital surface model, a digital terrain model, and an orthophoto are generated;
  • S7. all the three-dimensional sub-models are merged on the cloud data center, and the reconstructed orthophotos are mosaicked and enhanced;
  • S8. the edge computing device provides three-dimensional model retrieval and on-demand download; the three-dimensional model includes a plurality of three-dimensional sub-models.
  • the step S2 includes:
  • S21. the edge computing device extracts image-related attributes, including exposure information, focal length, shooting time, shooting location, and the relative altitude of the UAV together with its reference;
  • S22. images that are irrelevant or unhelpful for 3D reconstruction are removed according to the relative altitude of the UAV at the time each picture in the image data was taken;
  • S23. redundant pictures in over-dense areas are deleted according to the positional difference between adjacently taken pictures;
  • S24. redundant pictures are deleted according to the difference in picture content coverage.
  • the step S3 includes:
  • S31. the number of available containers in the cloud data center is obtained, and the flight route and image coverage are obtained from the image metadata;
  • S32. the convex hull of the image coverage is computed; the convex hull is the convex polygon formed by connecting the outermost points of the whole aerially photographed field, and it contains all the points in the point set; according to the number of available containers, the convex hull is divided equally into as many parts as there are containers; each part is called a sub-region, and the three-dimensional reconstruction task of each sub-region corresponds one-to-one to a container in the cloud data center;
  • S33. when the number of pictures in a sub-region is below a preset threshold, an N-nearest-neighbor search is performed on the geohash attribute in the metadata, matching geohash prefixes to find nearby pictures and make up the number.
  • the step S5 includes: extracting image feature points with the SIFT detection algorithm and matching them to form a sparse point cloud; generating a dense point cloud with a patch-based multi-view stereo reconstruction algorithm; and then reconstructing the surface with Delaunay triangulation and the Power Crust algorithm and applying texture mapping to generate the three-dimensional sub-model.
  • the step S8 includes: the edge computing device tracks the reconstruction progress of each container in real time; after reconstruction is completed, it provides retrieval of the three-dimensional model and images of the corresponding location, and the three-dimensional model and related files can be downloaded locally on demand.
  • the positive effects of the present invention are as follows: for the application of three-dimensional reconstruction to cultivated-land monitoring, the method considers how to obtain large-scale 3D reconstruction results quickly on site and adopts edge-cloud collaborative computing (preliminary computation at the edge plus parallel computation on the cloud) for fast 3D reconstruction:
  • 1. a local edge server performs the preliminary computation, filtering and preprocessing the image data set, which reduces the data transmission volume and transmission time; the image data set is divided according to the operating status of the cloud data center, and cloud computing resources are configured in the form of containers;
  • 2. the UAV images are divided into several groups by plot, uploaded to the cloud data center, and distributed to multiple containers for parallel computation;
  • 3. the sub-images produced by the parallel computation are stitched together to obtain the complete map of the field;
  • 4. the edge computing device provides the functions of 3D model retrieval and on-demand download.
  • therefore, combining edge computing and cloud computing for the 3D reconstruction of cultivated land achieves rapid reconstruction of the 3D model.
  • FIG. 1 is a flow chart of the edge-cloud collaborative 3D reconstruction method for cultivated land in the field provided by an embodiment of the present invention;
  • FIG. 2 is a comparison between the edge-cloud collaborative computing architecture provided by an embodiment of the present invention and the traditional architecture;
  • FIG. 3 is a curve of the relative flight height of the UAV in a specific embodiment;
  • FIG. 4 is a 3D modeling result of one of the containers in a specific embodiment;
  • FIG. 5 is a schematic diagram of the seamless mosaicking of the orthophotos of the 10 cultivated-land 3D sub-models in a specific embodiment;
  • FIG. 6 is the result of the seamless mosaic of the orthophotos of the 10 cultivated-land 3D sub-models in a specific embodiment;
  • FIG. 7 is the result of applying contrast enhancement to FIG. 6 in a specific embodiment;
  • FIG. 8 is the result obtained with a traditional 3D reconstruction method in a specific embodiment.
  • the method for 3D reconstruction of farmland field edge-cloud collaboration includes:
  • the edge computing device performs image metadata extraction and corresponding image preprocessing
  • the edge computing device divides the preprocessed image data set
  • Each three-dimensional reconstruction container in the cloud data center performs the same three-dimensional reconstruction steps for its assigned images to generate a three-dimensional sub-model
  • the edge computing device provides three-dimensional model retrieval and on-demand download; the three-dimensional model includes a plurality of three-dimensional sub-models.
  • in the application scenario of 3D reconstruction, the infrastructure mainly includes four parts: edge terminals (UAVs), edge computing devices, the back-end cloud data center, and edge network facilities.
  • the edge terminal has basic functions such as data collection, storage, and basic transmission.
  • in general 3D reconstruction scenarios, edge terminals include smart cameras, UAVs, and intelligent robots; since this solution targets the 3D reconstruction of cultivated land, UAVs are used as the edge terminals. Edge computing devices are devices with a certain amount of computing, storage, and communication capability, such as edge servers, edge embedded devices, and edge gateways. In weak-network environments such as suburbs and rural areas, the edge computing device is used for preliminary computation to preprocess the collected image data set.
  • the present invention uses the server located at the edge as the edge core device.
  • the cloud data center has powerful computing and storage capabilities. In three-dimensional reconstruction, with powerful cloud computing capabilities, it can process a large amount of image data.
  • Edge network facilities include established multi-operator networks (4G and 5G networks such as China Mobile, China Unicom, and China Telecom).
  • the method is mainly oriented to agricultural project monitoring scenarios.
  • for the 3D reconstruction of cultivated land, edge computing devices are introduced to perform preliminary computation, and the advantages of parallel computing on the cloud are fully exploited to reduce the reconstruction time and data transmission volume of the three-dimensional model.
  • referring to FIG. 2, the right half of the figure shows the edge-cloud collaborative computing architecture provided by the embodiments of the present invention. Based on this architecture, the eight steps of the edge-cloud collaborative 3D reconstruction method for cultivated land in the field are described below.
  • Step 1: the edge computing device obtains the cultivated-land images captured by the UAV
  • control software such as DJI Ground Station Pro (GS Pro) or DJI Terra can be run on the edge device; such software can automatically generate flight routes for a selected target area of cultivated land, so the mission can be planned in advance, and after the cultivated land has been photographed the images are imported into the edge computing device in one batch.
  • Step 2: the edge computing device performs image metadata extraction and preprocessing
  • the image metadata is extracted to analyze the attributes related to the image. Generally, these data are attached to the image file in the form of EXIF keys.
  • the attributes that can be used include exposure information, focal length, shooting time, shooting location (GPS latitude and longitude), altitude and its reference, etc.
  • Metadata is stored in the form of several tables as required, archived and stored in a structured database or static files.
  • after the UAV images have been transmitted to the edge computing device, the edge computing device performs the corresponding image preprocessing, which includes the following steps:
  • the relative altitude of the UAV at the time each picture was taken is used to filter out images that are irrelevant or unhelpful for 3D reconstruction, such as pictures taken during take-off and landing.
  • a specific method is to compute the median flight height over all pictures and mark it as the reference shooting height; pictures whose height differs too much from the reference shooting height, and pictures whose flight height changes too sharply, are deleted. A minimal sketch of this filter is given below.
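  • The following Python sketch shows how such a median-based height filter might look. It is not code from the patent: the per-image record layout (a 'rel_altitude' field) and the 10-meter tolerance are assumptions chosen for illustration only.

    import numpy as np

    def filter_by_altitude(records, max_offset_m=10.0):
        """Keep only frames taken near the reference (median) flight height.

        `records` is a list of dicts that each carry a 'rel_altitude' entry
        (relative flight height in meters, read from the image metadata).
        The 10 m tolerance is illustrative, not a value from the patent.
        """
        heights = np.array([r["rel_altitude"] for r in records], dtype=float)
        reference = np.median(heights)          # reference shooting height
        kept = [r for r, h in zip(records, heights)
                if abs(h - reference) <= max_offset_m]
        return kept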
  • redundant pictures in over-dense areas are deleted based on the positional difference between adjacent shots. Specifically, the relative distances of five consecutive pictures are compared; if the position change between the first and the fifth picture is smaller than a preset threshold, such as 20 meters, the second and fourth pictures are deleted.
  • redundant pictures are also deleted based on the difference in picture content coverage. The content coverage is computed by extracting image feature points from two adjacent images, i.e. points that are distinctive and easy to detect and match; after these feature points are matched, the image coordinate transformation parameters are estimated and the image overlap is computed from them. If the overlap between adjacent pictures exceeds a certain threshold, only one of the pictures is kept. A rough way to estimate this overlap is sketched below.
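  • As an illustration of this overlap check, the sketch below matches SIFT features between two frames with OpenCV (assuming a build that ships SIFT), fits a homography, and compares the warped footprint with the frame area. It is a rough stand-in rather than the patent's implementation; the match-count and RANSAC thresholds are arbitrary illustrative choices.

    import cv2
    import numpy as np

    def overlap_ratio(img_a, img_b, ratio=0.75):
        """Rough overlap estimate between two frames via SIFT + homography."""
        sift = cv2.SIFT_create()
        kp_a, des_a = sift.detectAndCompute(img_a, None)
        kp_b, des_b = sift.detectAndCompute(img_b, None)
        if des_a is None or des_b is None:
            return None
        matches = cv2.BFMatcher().knnMatch(des_a, des_b, k=2)
        good = []
        for pair in matches:
            if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
                good.append(pair[0])              # Lowe's ratio test
        if len(good) < 10:
            return None                           # too few matches to judge
        src = np.float32([kp_a[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
        dst = np.float32([kp_b[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        if H is None:
            return None
        h, w = img_a.shape[:2]
        corners = np.float32([[0, 0], [w, 0], [w, h], [0, h]]).reshape(-1, 1, 2)
        warped = cv2.perspectiveTransform(corners, H).reshape(-1, 2)
        hb, wb = img_b.shape[:2]
        warped[:, 0] = np.clip(warped[:, 0], 0, wb)   # clip footprint to frame b
        warped[:, 1] = np.clip(warped[:, 1], 0, hb)
        x, y = warped[:, 0], warped[:, 1]
        area = 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))
        return area / float(w * h)                # fraction of frame a overlapping b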
  • the transmission of low-quality images to the cloud data center can be reduced, thereby reducing the bandwidth pressure, traffic overhead, and reserved bandwidth cost of the connection from the edge computing device to the cloud data center.
  • reducing low-quality images can reduce the number of abnormal points in point cloud matching, and significantly improve the quality of the 3D reconstruction results.
  • Step 3: the edge computing device divides the cultivated-land image data set
  • for the cultivated-land 3D reconstruction scenario, the computation-heavy parts of the reconstruction, which demand the most computing and storage resources, are placed in the cloud data center, where multiple virtual containers can be dynamically deployed and the reconstruction can be accelerated by parallel computing. The present invention therefore divides a large-scale cultivated-land 3D reconstruction task into several 3D reconstruction sub-tasks, places them in multiple 3D reconstruction containers to generate several 3D sub-models, and finally merges the 3D sub-models and their associated images.
  • in order to divide the large-scale 3D reconstruction task into several sub-tasks, the edge computing device divides the image data set and then hands the parts over to the reconstruction containers in the cloud data center for processing.
  • the specific division of the cultivated-land image data set is as follows:
  • 1. the number of available containers in the cloud data center is obtained, together with the flight route and image coverage from the image metadata.
  • 2. the convex hull of the image coverage is computed. Given the set of points of the whole field on the two-dimensional plane, the convex hull is the convex polygon formed by connecting the outermost points of the aerially photographed field, and it contains all the points in the point set. According to the number of available containers, the convex hull is divided equally into as many parts as there are containers; each part is called a sub-region, and the 3D reconstruction task of each sub-region corresponds one-to-one to a container on the cloud. These sub-regions are similar in area and regular in shape, which makes reconstruction and visual inspection convenient. A simplified sketch of such a division is given after this list.
  • 3. if the number of pictures in a sub-region is below a certain threshold, an N-nearest-neighbor search is performed on the geohash attribute in the metadata, matching geohash prefixes to find nearby pictures and make up the number.
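  • The sketch below illustrates one simple way to approximate such a division: the convex hull of all shooting positions is computed with scipy, and the survey extent is then cut into equal-width strips along its longer axis, each strip feeding one container. The strip-based split and the record layout (lat/lon fields) are simplifying assumptions, not the exact equal division described in the patent.

    import numpy as np
    from scipy.spatial import ConvexHull

    def split_into_subregions(records, n_containers):
        """Assign images to n sub-regions of roughly equal extent."""
        pts = np.array([(r["lon"], r["lat"]) for r in records], dtype=float)
        hull = ConvexHull(pts)                   # convex polygon around the whole flight
        lo = pts[hull.vertices].min(axis=0)
        hi = pts[hull.vertices].max(axis=0)
        axis = int(np.argmax(hi - lo))           # split along the longer extent
        edges = np.linspace(lo[axis], hi[axis], n_containers + 1)
        groups = [[] for _ in range(n_containers)]
        for rec, p in zip(records, pts):
            idx = int(np.searchsorted(edges, p[axis], side="right")) - 1
            groups[min(max(idx, 0), n_containers - 1)].append(rec)
        return groups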
  • Step 4: reconstruction container orchestration and image transmission
  • the stages of the 3D reconstruction pipeline are highly dependent on and coupled with one another. In the embodiments of the present invention, container virtualization and micro/macro services are therefore used in the cloud data center to encapsulate the processing, and multiple 3D reconstruction sub-tasks can be processed in parallel through orchestration.
  • after the 3D reconstruction containers have been orchestrated and deployed, the edge computing device transmits the images required by each sub-task to the designated container via the edge network infrastructure.
  • the edge computing device also monitors the network transmission status and tracks the state of the containers and of the 3D reconstruction sub-tasks; common tracking mechanisms include heartbeats and polling. If a transmission fails or a container fails, the edge computing device re-executes the container deployment and image transmission. A minimal polling sketch is given below.
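  • A minimal polling sketch is shown below. The HTTP status endpoint, its JSON fields, and the retry policy are all assumptions made for illustration; the patent only states that heartbeat or polling may be used and that failed sub-tasks are redeployed.

    import time
    import requests

    def poll_subtasks(status_urls, interval_s=30, timeout_s=3600):
        """Poll each reconstruction container until it reports done or failed.

        `status_urls` maps a sub-task id to a (hypothetical) HTTP status
        endpoint exposed by its container. Failed or unreachable sub-tasks
        are flagged so the edge device can redeploy and resend the images.
        """
        deadline = time.time() + timeout_s
        state = {task: "pending" for task in status_urls}
        while time.time() < deadline and "pending" in state.values():
            for task, url in status_urls.items():
                if state[task] != "pending":
                    continue
                try:
                    reply = requests.get(url, timeout=5).json()
                    if reply.get("status") in ("done", "failed"):
                        state[task] = reply["status"]
                except requests.RequestException:
                    state[task] = "failed"   # unreachable: treat as failed, redeploy later
            time.sleep(interval_s)
        return state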
  • Step 5: computing the cultivated-land 3D sub-models on the cloud
  • Each three-dimensional reconstruction container in the cloud data center performs the same three-dimensional reconstruction steps on its assigned image sub-data set to generate a three-dimensional sub-model.
  • point-cloud-based 3D reconstruction is a relatively mature approach that represents a real 3D scene or object by a set of 3D points; the sparse point cloud is likewise derived from image feature points.
  • feature point detection is the most fundamental step of the whole 3D reconstruction process, and its quality strongly affects the final result. Commonly used detection algorithms include SIFT and SURF; for example, the widely used SIFT algorithm can be chosen to extract feature points.
  • the sparse point cloud is generated mainly with SfM (Structure from Motion): after the feature points of every image have been detected, they are matched across images to form a sparse point cloud.
  • the dense point cloud is generated with a patch-based multi-view stereo reconstruction algorithm. Once the dense point cloud has been formed, the outline of the 3D model can essentially be recognized with the naked eye; to obtain a true 3D representation of the physical object, surface reconstruction is still required.
  • the embodiment of the present invention uses Delaunay triangulation and the Power Crust algorithm for this purpose; after surface reconstruction of the point cloud data, the outline and shape of the real object are clearly visible. A minimal 2.5D triangulation sketch is given below.
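  • The sketch below shows a minimal 2.5D surface triangulation of a terrain point cloud using scipy's Delaunay triangulation of the ground-plane coordinates; farmland is close to a height field, so this is a reasonable illustration of the surface-reconstruction stage. It does not reproduce the Power Crust step, and the simple OBJ writer is included only so the mesh can be inspected.

    import numpy as np
    from scipy.spatial import Delaunay

    def triangulate_terrain(points_xyz):
        """2.5D triangulated surface from a terrain point cloud (x, y, z)."""
        pts = np.asarray(points_xyz, dtype=float)
        tri = Delaunay(pts[:, :2])           # triangulate in the ground plane
        return pts, tri.simplices            # vertices and (n_faces, 3) index array

    def save_obj(path, vertices, faces):
        """Write the mesh as a simple Wavefront OBJ file for inspection."""
        with open(path, "w") as f:
            for x, y, z in vertices:
                f.write(f"v {x} {y} {z}\n")
            for a, b, c in faces + 1:        # OBJ indices are 1-based
                f.write(f"f {a} {b} {c}\n")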
  • the last step is texture mapping. The function of texture mapping is to make the reconstructed 3D model closer to the real object, with color, texture and detail characteristics.
  • Step 6: generating the cultivated-land 3D sub-model results on the cloud
  • in the cultivated-land monitoring scenario, georeferenced coordinates of the 3D model must be established at the same time, and a digital surface model (DSM), a digital terrain model (DTM), and an orthophoto are generated.
  • the digital terrain model is widely used to calculate the area, volume, and slope of cultivated land, and can be used for line-of-sight judgements between any two points in the model and for drawing arbitrary cross-sections. In cultivated-land monitoring it is used to draw contour lines, slope and aspect maps, and 3D perspective views, to produce orthophoto maps and revise maps, and it can also serve as auxiliary data for field classification. A small example of deriving slope and aspect from the DTM is sketched below.
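  • As an illustration of the DTM-derived products mentioned above, the sketch below computes per-cell slope and aspect from a gridded DTM with a standard finite-difference formulation; the aspect convention (degrees clockwise from north) is one common choice, not something prescribed by the patent.

    import numpy as np

    def slope_aspect(dtm, cell_size_m):
        """Per-cell slope (degrees) and aspect (degrees from north) of a DTM grid.

        `dtm` is a 2-D elevation array and `cell_size_m` is the ground
        distance between neighbouring cells.
        """
        dz_dy, dz_dx = np.gradient(dtm.astype(float), cell_size_m)
        slope = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))
        aspect = (np.degrees(np.arctan2(-dz_dx, dz_dy)) + 360.0) % 360.0
        return slope, aspect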
  • the corresponding model result file is also generated and stored in the reconstruction container.
  • the specific files involved include point clouds (LAZ and PLY formats), orthophotos (GeoTIFF, PNG, MBTiles, and Tiles formats), and rendered 3D sub-models (OBJ format).
  • Step 7: merging and enhancing the cultivated-land 3D model and images on the cloud
  • after the 3D reconstruction containers have generated the 3D sub-models on the cloud, the sub-models are further merged on the cloud, and the reconstructed orthophotos are mosaicked, for example by seamless mosaicking.
  • after stitching, 3D model and image enhancement continues on the cloud, with steps such as feathering, sharpening, and contrast enhancement.
  • for example, linear stretching or Gaussian stretching can be used to enhance the contrast of the image and to adjust its white balance.
  • processing software such as ENVI can apply various image enhancement algorithms to image data, so that the resulting image is more suitable for specific application requirements than the original image.
  • image enhancement can be performed in the spatial domain to enhance the detail or the main structure of the objects present in the image;
  • this includes convolution filtering, such as high-pass filtering, low-pass filtering, the Laplacian operator, and directional filtering;
  • radiometric enhancement can also be applied, transforming the gray values of individual pixels, for example by histogram matching or histogram stretching;
  • spectral enhancement, based on multi-spectral data, transforms the bands to achieve the enhancement effect, for example by principal component transformation, independent component transformation, or color space transformation. A minimal contrast-stretch sketch is given below.
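  • The sketch below shows a simple percentile-based linear stretch of a single band to the 8-bit range, the kind of contrast enhancement referred to above; the 2%/98% cut-offs are illustrative defaults, not parameters taken from the patent or from ENVI.

    import numpy as np

    def linear_stretch(band, low_pct=2.0, high_pct=98.0):
        """Percentile-based linear stretch of one image band to 0-255."""
        lo, hi = np.percentile(band, [low_pct, high_pct])
        scaled = (band.astype(float) - lo) / max(hi - lo, 1e-6)
        return (np.clip(scaled, 0.0, 1.0) * 255).astype(np.uint8)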
  • Step 8: the edge computing device provides 3D model retrieval and on-demand download
  • the edge computing device can track the reconstruction progress of each container in real time through a browser web page. After reconstruction is completed, the 3D model and images of the corresponding location can be retrieved, and the 3D model and related files can be downloaded locally on demand for browsing and viewing. A minimal location-lookup sketch against the stored metadata is given below.
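  • The sketch below illustrates how the edge device might answer a "which sub-model covers this coordinate" query against the metadata it stored earlier, using a geohash prefix match in SQLite. The table name and columns are assumptions for illustration; the patent does not specify the retrieval implementation.

    import sqlite3
    import pygeohash as pgh

    def find_submodel(db_path, lat, lon, precision=6):
        """Return the container/sub-model id whose images cover (lat, lon).

        Assumes an `images(geohash TEXT, container_id INTEGER, ...)` table
        populated during preprocessing (a hypothetical layout).
        """
        prefix = pgh.encode(lat, lon, precision=precision)
        with sqlite3.connect(db_path) as conn:
            row = conn.execute(
                "SELECT container_id FROM images WHERE geohash LIKE ? LIMIT 1",
                (prefix + "%",),
            ).fetchone()
        return row[0] if row else None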
  • steps 1-4 are completed on the edge computing device
  • steps 5-7 are completed in multiple 3D reconstruction containers deployed in the cloud data center
  • step 8 is completed on the edge computing device.
  • the design of this method fully considers the heterogeneity of the edge computing device and the cloud data center and the difference in their processing capabilities, and innovatively proposes completing the entire 3D reconstruction process through cooperation between the edge computing device and the cloud computing platform, shortening the 3D reconstruction time by 79% and enabling the reconstruction results to be produced on site.
  • a specific embodiment is listed below to quantitatively analyze the performance improvement achieved by the edge-cloud collaborative computing architecture described in the embodiments of the present invention when performing 3D reconstruction of cultivated land compared with the traditional architecture.
  • the program first measures the processing time using the traditional architecture.
  • in the traditional architecture, the cultivated-land 3D reconstruction steps are all placed on a device or device group that is physically very close to the field, such as embedded devices deployed in the field in smart-agriculture projects, the image workstations carried by inspectors, or the computing cluster or cloud data center owned by the inspection agency; the corresponding devices were selected for measurement in this comparative experiment.
  • the specific reconstruction process is to reconstruct the complete data set on the corresponding device or device group immediately after acquiring the image data.
  • This solution adopts an edge-cloud collaborative computing architecture.
  • in this solution, after the image data are obtained, the edge computing device performs the preliminary computation: metadata are extracted from the image data set captured by the UAV, and the image data set is then divided according to the metadata.
  • the edge computing device weighs its own performance against that of the connected cloud data center and chooses to orchestrate and deploy multiple 3D reconstruction containers at the edge or in the cloud data center to perform the 3D reconstruction; the multiple reconstruction containers in the cloud data center execute the reconstruction tasks in parallel to obtain the 3D reconstruction results quickly.
  • in this case, the edge computing device chose to divide the image data set into 10 groups and hand the transmitted images over to 10 reconstruction containers for reconstruction. While guaranteeing the reconstruction quality and delivering the reconstruction results, the complete reconstruction time was shortened to 32 minutes 19 seconds; compared with the traditional architecture (Table 1), the reconstruction time is reduced by 79%-90%.
  • the rack server used in the cloud data center in the experiment is Dell PowerEdge R430 rack server.
  • Each server is equipped with two Intel Xeon E5-2630 v3 2.4GHz physical CPUs. With the support of Hyper-Threading Technology, each CPU can have 8 cores and 16 threads.
  • the rack server has 256GB of memory.
  • the cloud data center uses multiple Intel Ethernet controller 10GbE X540-AT2 network cards to support full-duplex 10Gbps network connections.
  • a Lenovo tower server is used as the edge computing device in this experiment. Its operating system is Ubuntu 16.04 LTS, its physical CPU is an Intel i7-6700 at 3.4 GHz, it has 8 GB of memory, and it is equipped with a discrete AMD Radeon RX 550 graphics card.
  • the network card of the edge server is a Realtek RTL8168, and the experiment was carried out over an 8 Mbps uplink.
  • the data set used in this experiment was captured by aerial photography with a DJI Phantom 4 drone.
  • the camera it carries is an FC300S with an f/2.8 aperture, a shutter speed of 1/640 s, a focal length of 3.6 mm, and an image resolution of 4000 x 3000 pixels.
  • the data set covers about 330 mu (roughly 22 hectares) of land.
  • the study area is located in Miren Village, Gantang Town, Potou District, Zhanjiang City, Guangdong Province; the terrain is flat and consists mainly of cultivated land with a small amount of residential land.
  • in this experiment the drone flew for about 11 minutes at a flight height of 59.5 meters and captured a total of 347 images, including the take-off and landing phases, covering about 330 mu of land.
  • Step 1 The edge computing device obtains the image of the cultivated land taken by the drone
  • the process of acquiring UAV images is the process of transmitting farmland images collected by UAVs to edge computing devices.
  • the specific process in this embodiment is to transmit all the images to the Lenovo edge tower server through the WiFi network at one time after the shooting of the cultivated land is completed.
  • Step 2: the edge computing device performs image metadata extraction and preprocessing
  • Image metadata is extracted for specific analysis of image-related attributes.
  • these metadata are appended to the image file in the form of EXIF keys.
  • the attributes that can be used include exposure information, focal length, shooting time, shooting location (GPS latitude and longitude), altitude and its reference, etc.
  • the three-axis velocity of the drone and the gimbal attitude at the time of shooting are also very helpful for the subsequent image and 3D model analysis.
  • this information, together with the storage path and storage method of the image files, constitutes the metadata, which is stored as a number of tables as required and archived in a structured database or in static files.
  • in this embodiment, a Jupyter Notebook server container is orchestrated and started on the Lenovo edge tower server, and interactive Python scripts batch-read and organize the EXIF information of all image files. The Python environment is Python 3.6, with a set of widely used toolkits imported:
  • Python data science toolkits: pandas (data processing), matplotlib (plotting), scipy (spatial computations);
  • image toolkits: PIL (image manipulation and EXIF reading), os (file-level operations);
  • geographic information toolkits: pygeohash (geohash computation), gmaps (Google Maps interface).
  • the results obtained are saved in a structured database (specifically SQLite, given the limited performance of the edge computing device) and backed up in a static CSV file; a condensed sketch of this extraction step is given below.
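  • A condensed sketch of this extraction step is shown below. It keeps only a few representative EXIF tags and omits the GPS and relative-altitude fields (which DJI stores partly in XMP); the table name and the exact columns are illustrative, not taken from the patent.

    import os
    import sqlite3
    import pandas as pd
    from PIL import Image
    from PIL.ExifTags import TAGS

    def extract_metadata(image_dir, db_path="metadata.sqlite"):
        """Read EXIF tags from every JPEG in a folder and archive them."""
        rows = []
        for name in sorted(os.listdir(image_dir)):
            if not name.lower().endswith((".jpg", ".jpeg")):
                continue
            with Image.open(os.path.join(image_dir, name)) as img:
                # _getexif() returns the flattened EXIF dictionary for JPEGs
                exif = {TAGS.get(k, k): v for k, v in (img._getexif() or {}).items()}
            rows.append({
                "file": name,
                "datetime": exif.get("DateTimeOriginal"),
                "focal_length": str(exif.get("FocalLength")),
                "exposure": str(exif.get("ExposureTime")),
            })
        df = pd.DataFrame(rows)
        with sqlite3.connect(db_path) as conn:
            df.to_sql("images", conn, if_exists="replace", index=False)  # structured archive
        df.to_csv(os.path.splitext(db_path)[0] + ".csv", index=False)    # static backup
        return df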
  • the edge computing device performs corresponding image preprocessing, including the following steps:
  • the specific method is to calculate the median of the flying height in all pictures and mark it as the reference shooting height. Delete the pictures whose distance is too large from the reference shooting height and the pictures whose flying height changes too much.
  • the specific method is to compare the relative distances of five consecutive pictures, and if the position change between the first picture and the fifth picture is less than a certain threshold, the second and fourth pictures are deleted.
  • removing images with high overlap: the overlap across the whole data set is read, and when the overlap is above a threshold, for example 80%, the corresponding number of images is deleted to reduce redundant data. In this specific case the original data set was 1.8 GB (347 pictures); after the screening process, the data volume was reduced to 1.19 GB (251 pictures).
  • Step 3: the edge computing device divides the cultivated-land image data set
  • in order to divide the large-scale cultivated-land 3D reconstruction task into several sub-tasks, the edge computing device divides the image data set and then hands the parts over to the individual reconstruction containers in the cloud data center for processing.
  • the specific image data set segmentation method is introduced as follows:
  • the number of containers available in the cloud data center is acquired; in this embodiment it is 10. The flight route and image coverage are then obtained from the image metadata.
  • the convex hull of the full image coverage is computed and, according to the number of available containers, divided equally into 10 parts; each part is a sub-region whose 3D reconstruction task corresponds one-to-one to a container on the cloud.
  • in step 2 the number of pictures had already been reduced to 251; these 251 pictures are further divided into ten groups (30, 23, 18, 18, 14, 33, 25, 26, 26, and 38 pictures, respectively).
  • steps 2-3 took a total of 14 seconds.
  • Step 4: reconstruction container orchestration and image transmission on the cloud
  • this solution uses containerization and microservices/macroservices in the cloud data center to encapsulate the processing, so that multiple 3D reconstruction sub-tasks can be processed in parallel through standard orchestration. Since the container management engine of the connected cloud data center is the widely adopted Docker engine, a Dockerfile is written in the usual way in this embodiment to describe the Docker image used, the runtime libraries to install, and the specific execution logic and entry point.
  • Edge computing devices can connect to the Kubernetes service page to monitor the deployment status of 3D reconstruction containers on the cloud.
  • to reduce the upload time, this case uses three parallel network paths to upload the previously divided 10 groups of pictures at the same time.
  • following a "3+3+4" grouping, the pictures are divided into three larger groups (83, 81, and 87 pictures, respectively) and uploaded simultaneously to the corresponding containers in the cloud data center over the three parallel network paths (see Table 3 for the container numbering); the upload times of the three groups were 18 minutes 42 seconds, 19 minutes, and 18 minutes 55 seconds. A sketch of such a parallel upload is given below.
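  • The sketch below shows how the grouped uploads could be driven over a fixed number of parallel paths with a thread pool. The `upload_one` transfer routine is a placeholder, since the patent does not prescribe a transfer protocol (scp, HTTP, object storage, etc.).

    from concurrent.futures import ThreadPoolExecutor

    def upload_groups(groups, upload_one, n_paths=3):
        """Upload image groups over `n_paths` parallel paths.

        `groups` is a list of (container_id, [image_paths]) pairs and
        `upload_one(container_id, paths)` is whatever transfer routine is
        available; both are placeholders for illustration.
        """
        with ThreadPoolExecutor(max_workers=n_paths) as pool:
            futures = {pool.submit(upload_one, cid, paths): cid
                       for cid, paths in groups}
            return {futures[f]: f.result() for f in futures}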
  • Step 5: computing the cultivated-land 3D sub-models on the cloud
  • Each reconstruction container executes the 3D reconstruction process in parallel: image feature point detection, sparse cloud reconstruction, dense cloud reconstruction, 3D model meshing, and texturing.
  • the parallel computing time of the 10 containers on the cloud server is taken as the longest of the 10 container computation times (12 minutes 25 seconds).
  • Step 6: generating the cultivated-land 3D sub-model results on the cloud: georeferenced coordinates of the 3D model are established, the DSM, DTM, and orthophoto are generated, and the corresponding result files (point clouds in LAZ and PLY formats, orthophotos in GeoTIFF, PNG, MBTiles, and Tiles formats, and rendered 3D sub-models in OBJ format) are saved in each reconstruction container.
  • FIG. 4 is a schematic diagram of the 3D modeling result of one of the 10 containers.
  • Step 7: merging and enhancing the cultivated-land 3D model and images on the cloud
  • after the 3D reconstruction containers have generated the 3D sub-models on the cloud, the sub-models are further merged on the cloud and the reconstructed orthophotos are mosaicked, for example by seamless mosaicking, as shown in FIG. 5.
  • after stitching, 3D model and image enhancement continues on the cloud, with steps such as feathering, sharpening, and contrast enhancement; for example, linear stretching or Gaussian stretching can be used to enhance the image contrast and adjust the white balance.
  • in this embodiment, the ENVI software and its runtime library are deployed in a container on the cloud, the 10 3D sub-models are transferred to that container, and the orthographic top-view images of the 3D reconstruction models are selected. The Seamless Mosaic function of the ENVI runtime library is called and the 10 orthographic top-view images are added one by one; the first group is used as the reference layer for the mosaic, the other nine sub-images are used as correction layers, automatic seamline generation is selected, and cubic convolution interpolation is chosen for resampling in the output panel.
  • the seamless stitching process takes 2 minutes 10 seconds, producing the mosaicked result shown in FIG. 6.
  • Gaussian stretching is then applied to the mosaic result to enhance the image contrast, as shown in FIG. 7.
  • the comparison with the result obtained by the traditional 3D reconstruction method is as follows:
  • this solution, using the edge + cloud approach, saves 79% of the 3D reconstruction time compared with the traditional method (the traditional method takes more than 2 hours 30 minutes, while this solution takes only 32 minutes 19 seconds), yet the two reconstruction results are fully comparable and fully meet the requirements of agricultural project monitoring.
  • Step 8: the edge computing device provides 3D model retrieval and on-demand download
  • the edge computing device tracks the reconstruction progress of each container in real time through a browser web page. After reconstruction is completed, the 3D model and images of the corresponding location can be retrieved, and the 3D model and related files can be downloaded locally for further browsing and analysis.
  • in this specific embodiment, the total time for the preliminary preprocessing at the edge + parallel upload + parallel 3D reconstruction + download of the 3D reconstruction results is calculated as follows:
  • preliminary preprocessing at the edge (overlap removal and grouping): 14 seconds;
  • upload over three parallel network paths: the longest of the three upload times (19 minutes);
  • parallel computation in the 10 cloud containers: the longest of the 10 container computation times (12 minutes 25 seconds);
  • download of the 3D reconstruction results: 40 seconds;
  • total time = 14 s + 19 min + 12 min 25 s + 40 s = 32 minutes 19 seconds.
  • the use of "edge-cloud” computing collaboration reduces the reconstruction time by 79%-90%.
  • the edge-cloud collaborative 3D reconstruction method for cultivated land in the field provided by the embodiments of the present invention can obtain aerial imagery of several hundred to several thousand mu of cultivated land with a UAV, and then coordinates the edge computing device with a computationally powerful cloud server, applying the best combination of the two to the cultivated-land 3D reconstruction computation so that a 3D model with geographic location information can be reconstructed rapidly on site.
  • the generated model approximates what an inspector would see with the naked eye and can therefore greatly assist manual visual inspection.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Remote Sensing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Processing Or Creating Images (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

A three-dimensional reconstruction method based on edge-cloud collaboration for cultivated land in the field adopts an edge-cloud collaborative computing architecture. After the image data are acquired, an edge computing device performs preliminary computation: metadata are extracted from the image data set captured by the unmanned aerial vehicle, and the image data set is then divided according to the metadata. The edge computing device weighs its own performance and that of the cloud data center, and chooses to orchestrate and deploy multiple three-dimensional reconstruction containers in the cloud data center for three-dimensional reconstruction. The multiple reconstruction containers in the cloud data center execute the reconstruction tasks in parallel so as to obtain the three-dimensional reconstruction results quickly and provide them to the edge computing device for retrieval and download. The method is mainly oriented to agricultural project monitoring scenarios and reduces the reconstruction time and data transmission volume of the three-dimensional model, so as to improve the response speed and quality of the three-dimensional reconstruction results, for large-scale on-site monitoring, acceptance, and auditing of agricultural projects.

Description

一种耕地现场边云协同的三维重建方法
本申请要求于2019年12月10日提交中国专利局、申请号为201911261761.1、发明名称为“一种耕地现场边云协同的三维重建方法”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本发明属于图像处理技术领域,涉及一种耕地现场边云协同的三维重建方法。
背景技术
目前,我国在制定惠农政策及设立相关农业项目的同时,有关管理机构也高度重视这些政策及项目的执行情况及效果,并采用各种监测措施以跟踪项目进度及保障政策落实。然而,关于农业项目监测仍然存在诸多痛点。
农业项目较难进行实时高效的跟踪监督。比如,农业、农村、农田规模大、分布广,完整农业项目监测检查面过大。从监测手段上考虑,政府的监查机构只能抽取极小比例的样本数据进行检查,而长期性的跟踪监测只能被动地依靠文字书面材料层层上报的方式,效率难以进一步提升。
另一方面,目前所采用的农业项目的监测方式在实时性、覆盖范围上存在缺陷,无法做到兼顾。就农业项目监测而言,近年较多研究通过使用高空遥感的方式监测农田建设及生产状况,然而高空遥感卫星过境的实际周期是固定的——每年只固定几次过境,这导致难以进行细粒度的时空分析;另外,无论气象状态如何,卫星轨道及姿态无法迅速调整,没有多次拍摄的机会,而云层厚度、光线的因素干扰,时常导致单次采集的影像无法使用;并且,农业项目监测必须配合其进度的时间需求,而采用高空遥感的方式显然无法配合进行实施。
通常,农业项目也采取人工数据采样的辅助监测手段:除了汇报文字材料之外,同时配有视频影像材料。但是,依靠人工拍摄的这些影像材料清晰范围只是拍摄点周围几十米,而且大部分都是侧视影像,覆盖范围有 限,并不能很好地展现整个农业项目的效果,而且难以进一步对视频资料进一步分析利用。
操作小型无人机进行拍摄也是近年来被广泛提及的农业项目监测方式。但是无人机获取的耕地视频影像,由于是连续拍摄,多数是没有具体地理位置信息的影像,在监测现场要人工从几千亩中识别田块、道路水利设施是很困难的,工作量极大;也无法在监测现场通过人工辨识的方式对这些影像进行进一步标注分析。
无人机二维影像素材可经由三维重建产生的立体耕地模型,近似于近距离鸟瞰效果,它很大程度上可以辅助现场人工目测检查。用它可清晰看到具有地理位置信息的田块、荒地、林木、建筑、田间道路、水利沟渠及地理位置信息等,而且在这些模型的基础上,进一步的研究还可构建自动识别计算这些地物的长度面积及损毁状况的系统。但是,重建三维立体模型通常需要极大的运算量和高性能计算设备的支持。考虑到现有的商用无人机的电池电量,无人机能在空中飞行约20分钟,如果只用常规的方法要拼接而后再进行耕地的三维重建需要巨大的计算工作量,至少需耗时好几个小时以上的时间,这对监测机构进行现场监测、验收、审核是不现实的。另外,该技术有着极大的带宽需求与网络质量要求,一旦采集进入网络质量较差的区域,便无法实时的上传数据,给重建工作带来了严重的滞后性。
发明内容
为了克服现有技术的上述缺点,本发明提供一种耕地现场边云协同的三维重建方法,主要面向农业项目监测场景,针对耕地三维重建,引入了边缘计算设备进行先行计算,并充分发挥云上并行计算的优势,减少三维模型的重建时间和数据传输量,以期提升三维重建结果的响应速度和质量。
本发明解决其技术问题所采用的技术方案是:提供一种耕地现场边云协同的三维重建方法,包括:
S1、获取无人机拍摄耕地图像及将所述耕地图像传输到边缘计算设 备;
S2、所述边缘计算设备进行图像元数据提取及对应的图像预处理;
S3、所述边缘计算设备对预处理后的图像数据集进行分割;
S4、根据分割结果对云数据中心编排部署三维重建容器;所述边缘计算设备将所述分割结果对应子任务所需的图像,传输到对应的容器内;
S5、云数据中心内每个三维重建容器对于各自分配的图像,执行相同的三维重建步骤以生成三维子模型;
S6、在所述云数据中心上建立三维模型的地理参考坐标,并生成数字表面模型和数字地形模型及正射影像;
S7、在云数据中心上合并所有所述三维子模型,并对三维重建后的正射影像进行拼接处理及图像增强的处理;
S8、所述边缘计算设备提供三维模型检索及按需下载;所述三维模型包括多个三维子模型。
优选的:所述步骤S2包括:
S21、所述边缘计算设备提取图像相关的属性;包括:曝光信息、焦距、拍摄时间、拍摄地点、无人机相对高度及其参照物;
S22、根据图像数据中每张图片拍摄时所述无人机相对高度信息剔除不相关或对三维重建没有帮助的图像;
S23、根据相邻拍摄图片位置差分删除图片过密区域中冗余的图片;
S24、根据图片内容覆盖度的差分删除冗余图片。
优选的:所述步骤S3包括:
S31、获取云数据中心内可用容器数目,获取图像元数据中拍摄航线及影像覆盖范围;
S32、计算影像覆盖范围的凸包;所述凸包为将整个航拍田块的最外层的点连接起来构成的凸多边形,凸多边形内包含点集中所有的点;根据所述可用容器数目将所述凸包均分为与所述容器数目相等的份数;每一份称为一个子区域,且每一子区域的三维重建任务与云数据中心上的容器一一对应;
S33、当某一子区域内图片数目小于预设阈值时,利用元数据中 geohash属性,执行N-最近邻搜索,匹配geohash前缀以搜索附近的图片补足图片数目。
优选的:所述S5步骤包括:
根据检测算法SIFT提取图像特征点,进行特征点匹配,形成稀疏点云;
使用基于面片的三维立体重建算法,形成稠密点云;
再使用Delaunay三角化及PowerCrust算法,对其表面重建并进行纹理映射,以生成三维子模型。
优选的:所述S8步骤包括:
所述边缘计算设备实时跟踪各容器重建进度;在重建完成后,提供检索查找对应位置的三维模型及图像,并按需下载三维模型及相关文件到本地。
本发明的积极效果是:针对三维重建在耕地监测中的应用,该方法考虑现场快速获取大规模三维重建图像的方法,采用边缘-云协同计算用于快速三维重建;将边缘-云协同计算(先行+并行)用于快速三维重建:
1.采用在本地的边缘服务器进行先行计算,对图像数据集进行筛选预处理等工作,减少了数据传输量和数据传输时间,并根据云数据中心的运行状况分割图像数据集,以容器的形式配置云上计算资源。
2.将无人机图像按地块分成若干组上传至云数据中心,分配到多个容器中,进行并行计算。
3.将并行运算的子图进行拼接,获取田块全图。
4.边缘计算设备提供三维模型检索及按需下载的功能。
所以,将边缘计算及云计算二者组合用于耕地三维重建场景,实现了三维模型的快速重建。
本发明的其它特征和优点将在随后的说明书中阐述,并且,部分地从说明书中变得显而易见,或者通过实施本发明而了解。本发明的目的和其他优点可通过在所写的说明书、权利要求书、以及附图中所特别指出的结构来实现和获得。
下面通过附图和实施例,对本发明的技术方案做进一步的详细描述。
说明书附图
附图用来提供对本发明的进一步理解,并且构成说明书的一部分,与本发明的实施例一起用于解释本发明,并不构成对本发明的限制。在附图中:
图1为本发明实施例提供的耕地现场边云协同的三维重建方法的方法流程图;
图2为本发明实施例提供的边缘云计算架构协同架构与传统架构对比图;
图3为具体实施例中无人机的相对航高变化情况的曲线图;
图4为具体实施例中其中一个容器三维建模的结果图;
图5为具体实施例中10个耕地三维子模型的正射影像进行无缝拼接示意图;
图6为具体实施例中10个耕地三维子模型的正射影像无缝拼接后结果图;
图7为具体实施例中对图6进行对比度增强后的结果图;
图8为具体实施例中采用传统三维重建方式得到的结果图。
具体实施方式
下面将参照附图更详细地描述本公开的示例性实施例。虽然附图中显示了本公开的示例性实施例,然而应当理解,可以以各种形式实现本公开而不应被这里阐述的实施例所限制。相反,提供这些实施例是为了能够更透彻地理解本公开,并且能够将本公开的范围完整的传达给本领域的技术人员。
参照图1所示,本发明实施例提供的耕地现场边云协同的三维重建方法,包括:
S1、获取无人机拍摄耕地图像及将所述耕地图像传输到边缘计算设备;
S2、所述边缘计算设备进行图像元数据提取及对应的图像预处理;
S3、所述边缘计算设备对预处理后的图像数据集进行分割;
S4、根据分割结果对云数据中心编排部署三维重建容器;所述边缘计算设备将所述分割结果对应子任务所需的图像,传输到对应的容器内;
S5、云数据中心内每个三维重建容器对于各自分配的图像,执行相同的三维重建步骤以生成三维子模型;
S6、在所述云数据中心上建立三维模型的地理参考坐标,并生成数字表面模型和数字地形模型及正射影像;
S7、在云数据中心上合并所有所述三维子模型,并对三维重建后的正射影像进行拼接处理及图像增强的处理;
S8、所述边缘计算设备提供三维模型检索及按需下载;所述三维模型包括多个三维子模型。
本实施例中,在三维重建的应用场景中,基础设施主要包括边缘终端(无人机),边缘计算设备和后端的云数据中心,边缘网络设施四部分。
边缘终端具有数据收集,存储及基础传输等基本功能。在一般三维重建应用场景中,边缘终端包括智能摄像头,无人机,智能机器人等。由于面向耕地三维重建场景,因而在本方案中,采用无人机作为边缘终端;边缘计算设备是具有一定的计算、存储、通信能力的设备,如边缘服务器,边缘嵌入式设备,边缘网关等。在郊区,农村等弱网环境中,边缘计算设备将用于先行计算,对所采集到的图像数据集进行预处理。本发明将位于边缘位置的服务器作为边缘核心设备。云数据中心具有强大的计算能力和存储能力,在三维重建中,凭借强大的云计算能力,能够处理大量的图像数据。边缘网络设施包含已建成的多运营商网络(移动,联通,电信等4G、5G网络)。
本实施例中,该方法主要面向农业项目监测场景,针对耕地三维重建,引入了边缘计算设备进行先行计算,并充分发挥云上并行计算的优势,减少三维模型的重建时间和数据传输量,以期提升三维重建结果的响应速度和质量,以供农业项目现场大规模监测、验收、审核用途,提高了工作效率。
参照图2所示,其中图中右半部分为本发明实施例提供的边缘云计算 架构协同架构,下面基于该架构分别介绍本发明实施例提供的耕地现场边云协同的三维重建方法的8个步骤。
第1步 边缘计算设备获取无人机拍摄耕地图像
获取无人机拍摄图像及将无人机采集到的耕地图像传输到边缘计算设备上的过程。比如可在边缘设备上运行大疆地面站(GSPro),大疆智图(Terra)等控制软件,包含了选定目标耕地区域自动生成航线的功能,提前规划,在对耕地拍摄完成后一次性将图像导入到边缘计算设备中。
第2步 边缘计算设备进行图像数据提取及预处理
图像元数据提取为具体分析图像相关的属性,一般这些数据均以EXIF键值的方式附加在图像文件中。进行后续三维重建时,能利用到的属性有曝光信息,焦距,拍摄时间,拍摄地点(GPS经纬度),高度及其参照等。元数据按需求存储为若干表格的形式,归档并存储于结构性数据库或静态文件中。
将无人机图像传输到边缘计算设备后,边缘计算设备进行对应的图像预处理,包含如下步骤:
1.利用图像数据中每张图片拍摄时所述无人机相对高度信息筛选不相关或对三维重建没有帮助的图像,如起飞降落过程中拍摄的图片。比如,可采用的具体方式为计算所有图片中飞行高度的中位数,标记为参考拍摄高度。删除距离与参考拍摄高度相差过大的图片及飞行高度变化过大的图片。
2.利用相邻拍摄图片位置差分删除图片过密区域中冗余的图片。图片位置的差分比较的具体方式为比较连续五张图片的相对距离,如果首张图片与第五张图片的位置变化小于预设阈值,如20米,则删除第二张及第四张图片。
3.利用图片内容覆盖度的差分删除冗余图片。图片内容覆盖度的计算为从相邻两张图片中提取图像特征点,即图像中一些特征明显、便于检测、匹配的点,这些特征点经匹配后求出图像坐标变换参数,从而计算图片重叠度。如果相邻图片重叠度超过一定阈值,则仅保留其中一张图片。
通过以上预处理步骤可减少低质量图片的传输至云数据中心,进而减 轻边缘计算设备至云数据中心连接的带宽压力,流量开销和预留带宽成本。另一方面,减少低质量图片能减少点云匹配中异常点的数量,显著提升三维重建结果质量。
第3步 边缘计算设备进行耕地图像数据集分割
针对耕地三维重建场景,本实施例中,选择将三维重建过程中计算量较大,对计算存储资源要求较高的部分放置于云数据中心内进行处理。在云数据中心内可动态调整部署多个虚拟容器,与之相对应的可以采用并行计算的方式加速三维重建过程,因而本发明选择将一个大规模的耕地三维重建任务分割成若干三维重建子任务,放置于多个三维重建容器中,生成若干三维子模型,最后再将三维子模型及关联的图像进行合并。
具体分割步骤如下:
为了将大规模的耕地三维重建任务分割成若干子任务,边缘计算设备将对图像数据集进行分割,而后交由云数据中心内的重建容器进行处理。具体的耕地图像数据集分割方式介绍如下:
1.获取云数据中心内可用的容器数目,获取图像元数据中拍摄航线及影像覆盖范围。
2.计算影像覆盖范围的凸包(Convex Hull)。凸包在本发明实施例中的意义如下:给定整个田块二维平面上的点集,凸包就是将整个航拍田块的最外层的点连接起来构成的凸多边形,它能包含点集中所有的点。并根据可用容器数目将该凸包均分为与容器数目相等的份数,每一份称为一个子区域,且每一子区域的三维重建任务与云上容器一一对应。这些子区域面积相近,形状规则,便于重建和目视。
3.若某一子区域内图片数目小于一定阈值,利用元数据中geohash属性,执行N-最近邻搜索,匹配geohash前缀以搜索附近的图片补足图片数目。
第4步 重建容器编排及图像传输
三维重建处理过程存在高度的依赖性和耦合性,本发明实施例中,在云数据中心内采用了容器虚拟化及微/宏服务的手段将处理过程进行封装,通过编排的方式可以并行处理多个三维重建子任务。
在编排部署三维重建容器后,边缘计算设备将对应子任务所需的图像经由边缘网络基础设施传输到指定的容器内。边缘计算设备同时监控网络传输状况,并对容器状态及三维重建子任务状况进行跟踪,跟踪方式常见有心跳(heartbeat),轮询(poll)等。若传输失败或容器出现故障,则由边缘计算设备重新执行容器部署和图像传输。
第5步 云上耕地三维子模型计算
云数据中心内每个三维重建容器对于各自分配的图像子数据集执行相同的三维重建步骤以生成三维子模型。基于点云的三维重建方式较为成熟,可将三维实景或实物通过一系列的三维空间点来表示,稀疏点云的来源同为图像特征点。特征点的检测是整个三维重建过程中最为基础的一步,其检测效果的好坏对最后的结果有很大的影响,常用的检测算法有SIFT、SURF等。比如可选择使用广泛采用的SIFT算法提取特征点。稀疏点云的生成使用的技术主要为SfM(Structure from Motion)。在检测出每张图片所有的特征点后,同样进行特征点匹配,形成稀疏点云。稠密点云的生成使用基于面片的三维立体重建算法。形成稠密点云后,基本能够用肉眼识别三维模型的轮廓。为实现真正的实物三维化,需对其进行表面重建,本发明实施例使用了Delaunay三角化及Power Crust算法。在对点云数据进行表面重建之后,实景实物的轮廓、形状已经清晰可见,最后一步为纹理映射,纹理映射的作用是使得重建的三维模型更接近实景实物,具有颜色、纹理以及细节特点。
第6步 云上耕地三维子模型结果生成
耕地监测场景中,需要同时建立三维模型的地理参考坐标,并生成数字表面模型(Digital Surface Model,DSM)和数字地形模型(Digital Terrain Model,DTM)及正射影像(Orthophoto)。数字地形模型被广泛用于各种耕地面积、体积、坡度计算,并可用于模型中任意两点间的通视判断及任意断面图绘制,在耕地监查应用中被用于绘制等高线、坡度坡向图、立体透视图,制作正射影像图以及地图的修测,亦可作为田块分类的辅助数据。本发明实施例中在计算三维子模型的同时,也生成对应的模型结果文件并保存在重建容器内。具体涉及到的文件比如有点云(LAZ格式,PLY格式), 正射影像(GeoTIFF格式,PNG格式,MBTiles格式,Tiles格式),渲染后的三维子模型(OBJ格式)。
第7步 云上耕地三维模型及图像拼接与增强
在云上三维重建容器生成三维子模型后,进一步地,在云上合并三维子模型,并对三维重建后的正射影像进行镶嵌处理,例如,无缝拼接(Seamless Mosaic)。
拼接完成后,在云上继续进行三维模型及图像增强的处理,如羽化,锐化,对比度增强等步骤。例如,通过线性拉伸,高斯拉伸的方式,增强图像的对比度,调整图像白平衡等。
使用ENVI等处理软件能通过对图像数据采用各种图像增强算法,以便处理结果图像比原始图像更适合于特定的应用要求。
图像增强,能通过空间域增强处理,以增强图像中的现状物体细部部分或者主干部分。其中包括卷积滤波处理,如高通滤波、低通滤波、拉普拉斯算子,方向滤波等;除了空间域增强处理,还能通过辐射域增强处理,对单个像元的灰度值进行变换来增强处理,如直方图匹配、直方图拉伸等;光谱增强处理,基于多光谱数据对波段进行变换达到图像增强的效果,如主成分变换、独立主成分变换、色彩空间变换等。
第8步 边缘计算设备提供三维模型检索及按需下载
边缘计算设备可实时通过浏览器网页的形式跟踪各容器重建进度。在重建完成后,提供检索查找对应位置的三维模型及图像,并可按需下载三维模型及相关文件到本地,供浏览、查看。
本发明实施例中,上述1-4步在边缘计算设备上完成,5-7步在云数据中心内部署的多个三维重建容器内完成,第8步在边缘计算设备上完成。该方法的设计充分考虑边缘计算设备及云数据中心的异构性及处理能力的差异,创新性地提出了协同边缘计算设备和云计算平台完成整个三维重建流程,使三维重建时间缩短了79%,实现了三维重建现场出图。
下面列举一则具体的实施例,旨在定量分析本发明实施例所描述的边缘-云协同计算架构进行耕地三维重建时同传统架构相比所取得的性能提升。
该方案首先测量了使用传统架构的处理时间。传统架构中,耕地三维重建步骤均放置于物理位置非常接近的设备或设备组之上,例如智慧农业项目中在耕地里部署的嵌入式设备,监查人员所配备的图像工作站,监查机构所拥有的计算集群或者云数据中心,在本案例对比实验中选择了对应的设备进行测量。
具体重建过程为获取图像数据后立即在对应设备或设备组对完整数据集进行重建。
表1 传统三维重建完整重建时间比较
应用本发明实施例提供的方法:
本方案采用边缘-云协同计算架构,在获取图像数据后由边缘计算设备进行先行计算——对无人机获得的图像数据集提取元数据,进而根据元数据分割图像数据集。边缘计算设备衡量自身及同云数据中心的性能,选择编排部署多个三维重建容器于边缘或云数据中心内进行三维重建。云数据中心内多个重建容器并行执行重建任务,以快速获得三维重建结果。在本案例中,边缘计算设备选择将图像数据集数据分为10组,传输图像交由10个重建容器进行重建,在保证重建质量和交付重建结果的同时,将完整重建时间缩短为32分钟19秒。与传统架构(表1)相比,减少了79%-90%的重建时间。
实验配置及数据集:
实验中在云数据中心使用的机架式服务器是Dell PowerEdge R430机架式服务器。每个服务器都配备两个Intel Xeon E5-2630 v3 2.4GHz物理CPU。在超线程技术的支持下,每个CPU能够具有8个核心和16个线程。该机架式服务器具有256GB的内存。云数据中心内部使用多个Intel以太网控制器10GbE的X540-AT2网卡,支持全双工10Gbps的网络连接。
本实验选取Lenovo塔式服务器作为本次实验的边缘计算设备,其操作系统为Ubuntu 16.04LTS,物理CPU为Intel i7-6700 3.4GHz,内存为8GB,并配置有独立显卡AMDRadeon RX 550。边缘服务器的网卡型号为Realtek RTL8168,本实验是基于8M的上行带宽完成的。
用于本实验的数据集由大疆无人机DJI Phantom 4航拍获得,其所搭载的相机型号是FC300S,光圈是2.8,快门速度为1/640s,焦距是3.6mm,像素大小是4000*3000。该数据集包含了约330亩的土地,研究区位于广东省湛江市坡头区乾塘镇米稔村,该区域地势平坦,以耕地为主,以及少量居民用地。在本实验中无人机飞行约11分钟,航拍高度为59.5米,共获取了347张影像,包含起飞降落过程,覆盖了约330亩地的土地。
实施例详细介绍
具体实施步骤如下:
第1步 边缘计算设备获取无人机拍摄耕地图像
获取无人机图像的过程为将无人机采集到的耕地图像传输到边缘计算设备上的过程。
在本实施例中的具体流程为在对耕地拍摄完成后一次性将所有图像通过WiFi网络传输到Lenovo边缘塔式服务器上。
第2步 边缘计算设备进行图像数据提取及预处理
图像元数据提取为具体分析图像相关的属性,一般这些元数据均以EXIF键值的方式附加在图像文件中。进行后续三维重建时,能利用到的属性有曝光信息,焦距,拍摄时间,拍摄地点(GPS经纬度),高度及其参照等。拍摄时的无人机三轴速度,云台姿态等也对后续的图像及三维模型分析具有很大的帮助,这些信息连同图像文件的存储路径和方式组成了元 数据,按需求存储为若干表格的形式,归档并存储于结构性数据库或静态文件中。
本实施例的具体操作流程为:
A.图像元数据提取:
在Lenovo边缘塔式服务器上编排并启动Jupyter-notebook服务器容器,编写交互式python脚本,批处理对所有图像文件的EXIF信息进行读取并整理。其中Python环境为Python 3.6,导入了一系列广泛采用的工具包:
Python数据科学工具包:pandas(用于数据处理),matplotlib(用于绘图),scipy(计算空间数据)
图像操作工具包PIL(图像操作及EXIF信息读取),os(文件层面操作)
地理信息工具包pygeohash(计算geohash),gmaps(Google地图操作接口)
在此基础上编写相关函数对元数据提取并归档,部分元数据结果如下表2所示(前12条记录);
表2 图像数据提取结果
所得结果保存在结构型数据库中(具体为SQLite,考虑到边缘计算设备的性能有限)并且备份于静态csv文件中。
B.边缘计算设备进行对应的图像预处理,包含如下步骤:
利用图像元数据中每张图片的高度信息筛选不相关的图像,如起飞降 落的过程。具体方式为计算所有图片中飞行高度的中位数,标记为参考拍摄高度。删除距离与参考拍摄高度相差过大的图片及飞行高度变化过大的图片。
如图3所示,其中,347张耕地图片无人机相对航高变化情况,305张以后为降落过程;即:将305张以后的图片可以删除。
C.航拍中冗余图片的删除:
利用相邻拍摄图片位置及内容覆盖度的差分删除图片过密区域中冗余的图片。具体方式为比较连续五张图片的相对距离,如果首张图片与第五张图片的位置变化小于一定阈值,则删除第二张及第四张图片。
去除重叠度高的图片:读取整组数据集的重叠度,比如当重叠度高于阈值80%时,便会通过删除相应数量的图片,减少冗余数据。在本具体案例中原数据集数据量是1.8GB(347张图片),经筛选处理后,数据量减少至1.19GB(251张图片)。
第3步 边缘计算设备进行耕地图像数据集分割
为了将大规模的耕地三维重建任务分割成若干子任务,边缘计算设备将对图像数据集进行分割,而后交由云数据中心内的各个重建容器进行处理。具体的图像数据集分割方式介绍如下:
获取云数据中心内可用的容器数目,在本实施例中容器数目为10个容器,继而获取图像元数据中拍摄航线及影像覆盖范围。计算全部影像覆盖范围的凸包(Convex Hull),并根据可用容器数目将该凸包均分为与容器数目相等的份数10份,每一份称为一个子区域,且每一子区域的三维重建任务与云上容器一一对应。在第2步中已经将图片数目减少为251张,这251张图片继续被分为十组(分别为30张、23张、18张、18张、14张、33张、25张、26张、26张、38张)。
在本实施例中,第2-3步共耗时14秒。
第4步 云上重建容器编排及图像传输
本方案在云数据中心内采用了容器虚拟化(Containerization)及微/宏服务(Microservice/Macroservice)的手段将处理过程进行封装,通过常规编排的方式可并行处理多个三维重建子任务。由于所连接的云数据中心容器 管理引擎为近年来广泛采用的Docker容器引擎,与之相对应的,在本实施例中按常规编写Dockerfile来描述采用的Docker镜像,安装运行库,以及具体的执行逻辑和执行入口。
边缘计算设备可以连接Kubernetes服务页面监管云上的三维重建容器部署状况。
为了减少上传时间,本案例用三条并行网络路径同时上传前面分割的10组图片,按照“3+3+4”的分组方式,平均地分成三大组(分别为83张、81张和87张),分别由三条并行网络路径同时上传至云数据中心的对应容器(见表3.容器编号),每组的上传时间分别为18分钟42秒、19分钟和18分钟55秒。
第5步 云上耕地三维子模型计算
各重建容器并行执行三维重建流程:图像特征点的检测、稀疏云重建、稠密云重建、三维模型网格化、纹理化,所需时间列举在表中:
表3:实施例边缘设备上传及云中三维并行重建所用时间
三条并行网络路径上传时间:取耗时最长的时间(19分钟)
云服务器10个容器并行运算时间:取10个容器运算最长时间(12分 25秒)。
第6步 云上耕地三维子模型结果生成
耕地监测场景中,需要同时建立三维模型的地理参考坐标,并生成数字表面模型(Digital Surface Model,DSM)和数字地形模型(Digital Terrain Model,DTM)及正射影像(Orthophoto)。也生成对应的模型结果文件并保存在重建容器内。具体涉及到的文件有点云(LAZ格式,PLY格式),正射影像(GeoTIFF格式,PNG格式,MBTiles格式,Tiles格式),渲染后的三维子模型(OBJ格式)。
如图4所示,其为10个容器中,其中一个的三维建模的结果示意图。
第7步 云上耕地三维模型及图像拼接与增强
在云上三维重建容器生成三维子模型后,进一步地,在云上合并三维子模型,并对三维重建后的正射影像进行镶嵌处理,例如,无缝拼接(Seamless Mosaic),如图5所示。
拼接完成后,在云上继续进行三维模型及图像增强的处理,如羽化,锐化,对比度增强等步骤。例如,通过线性拉伸,高斯拉伸的方式,增强图像的对比度,调整图像白平衡等。
本实施例中在云上容器部署了ENVI软件及运行库,将10个三维子模型传输到该容器中,选取三维重建模型的正射俯视角图像。调用ENVI运行库中无缝拼接(Seamless Mosaic)的功能,逐次添加10个正射俯视角图像;把第1组作为镶嵌的参考图层,其他9张分图作为校正图层,选择进行自动绘制接边线处理;并在输出面板中,选择三次卷积内插法进行重采样处理,执行无缝拼接,无缝拼接过程耗时2分钟10秒,得到如图6所示的拼接后结果。
进一步地,通过高斯拉伸镶嵌结果图,以增强图像的对比度,如图7所示。同传统三维重建方式得到的结果图8进行比较如下:
可以看出:本方案采用边缘+云的方法,其三维重建在时间上较传统方法节约了79%(传统方法耗时2小时30分以上,本方案仅需32分19秒),但两种重建效果完全可比,完全达到农业项目监测的要求。
第8步 边缘计算设备提供三维模型检索及按需下载
边缘计算设备可实时通过浏览器网页的形式跟踪各容器重建进度。在重建完成后,提供检索查找对应位置的三维模型及图像,并可按需下载三维模型及相关文件到本地,供进一步浏览及分析。
在该具体实施例中,边缘的先行预处理+并行上传+并行三维重建+三维重建结果的下载,总耗时计算:
边缘的先行预处理(删除重叠、分组):14秒
经由三条并行网络路径上传时间:取三条中耗时最长的时间(19分钟)
云服务器10个容器并行运算时间:取10个容器运算最长时间(12分25秒)
三维重建结果下载40秒
总耗时=14秒+19分+12分25秒+40秒=32分19秒
本方案共用时32分钟19秒,在图像质量满足农业监测需求的情况下,使用“边缘-云”计算协同的方式减少了79%-90%的重建时间。
本发明实施例提供的耕地现场边云协同的三维重建方法,用无人机从空中航拍,可以获取几百至几千亩耕地的影像材料,进而协同边缘计算设备及运算功能强大的云服务器,将它们最佳组合应用于耕地三维重建计算过程,可实现现场快速重建具有地理位置信息的三维模型,所生成的模型近似于人工的目视效果,很大程度上可以辅助人工目视检查。三维模型中有立体化的田块、房屋、作物、树木、草地、房屋、道路、桥梁、水利沟渠,还可以获取地形地貌的高程和体积信息、有地块的详细地理位置信息。因此,基于该方法重建的三维模型可以清晰地仔细检查道路、沟渠、田块建设不到位状况,以供农业项目现场大规模监测、验收、审核用途。
以上实施例的说明只是用于帮助理解本发明的方法及其核心思想。应当指出,对于本技术领域的普通技术人员来说,在不脱离本发明原理的前提下,还可以对本发明进行若干改进和修饰,这些改进和修饰也落入本发明权利要求的保护范围内。对这些实施例的多种修改对本领域的专业技术人员来说是显而易见的,本文中所定义的一般原理可以在不脱离本发明的精神或范围的情况下在其它实施例中实现。因此,本发明将不会被限制于本文所示的这些实施例,而是要符合与本文所公开的原理和新颖特点相一 致的最宽的范围。

Claims (8)

  1. A three-dimensional reconstruction method based on edge-cloud collaboration for cultivated land in the field, characterized by comprising:
    S1. acquiring images of cultivated land captured by an unmanned aerial vehicle (UAV) and transmitting the cultivated-land images to an edge computing device;
    S2. performing, by the edge computing device, image metadata extraction and corresponding image preprocessing;
    S3. dividing, by the edge computing device, the preprocessed image data set;
    S4. orchestrating and deploying three-dimensional reconstruction containers in a cloud data center according to the division result, and transmitting, by the edge computing device, the images required by the sub-task corresponding to the division result to the corresponding containers;
    S5. performing, by each three-dimensional reconstruction container in the cloud data center, the same three-dimensional reconstruction steps on its assigned images to generate a three-dimensional sub-model;
    S6. establishing georeferenced coordinates of the three-dimensional model on the cloud data center, and generating a digital surface model, a digital terrain model, and an orthophoto;
    S7. merging all the three-dimensional sub-models on the cloud data center, and performing mosaicking and image enhancement on the reconstructed orthophotos;
    S8. providing, by the edge computing device, three-dimensional model retrieval and on-demand download, wherein the three-dimensional model comprises a plurality of three-dimensional sub-models.
  2. The three-dimensional reconstruction method based on edge-cloud collaboration for cultivated land in the field according to claim 1, characterized in that the image metadata extracted in step S2 are attached to the images in the form of EXIF key-value pairs.
  3. The three-dimensional reconstruction method based on edge-cloud collaboration for cultivated land in the field according to claim 1, characterized in that the step S2 comprises:
    S21. extracting, by the edge computing device, image-related attributes, including exposure information, focal length, shooting time, UAV latitude and longitude, and the relative altitude of the UAV together with its reference;
    S22. removing images that are irrelevant or unhelpful for three-dimensional reconstruction according to the relative altitude of the UAV at the time each picture in the image data was taken;
    S23. deleting redundant pictures in over-dense areas according to the positional difference between adjacently taken pictures;
    S24. deleting redundant pictures according to the difference in picture content coverage.
  4. The three-dimensional reconstruction method based on edge-cloud collaboration for cultivated land in the field according to claim 1, characterized in that the step S3 comprises:
    S31. obtaining the number of available containers in the cloud data center, and obtaining the flight route and image coverage from the image metadata;
    S32. computing the convex hull of the image coverage, the convex hull being the convex polygon formed by connecting the outermost points of the whole aerially photographed field and containing all the points in the point set; dividing the convex hull equally, according to the number of available containers, into as many parts as there are containers, each part being called a sub-region, and the three-dimensional reconstruction task of each sub-region corresponding one-to-one to a container in the cloud data center;
    S33. when the number of pictures in a sub-region is smaller than a preset threshold, performing an N-nearest-neighbor search using the geohash attribute in the metadata, and matching geohash prefixes to find nearby pictures and make up the number of pictures.
  5. The three-dimensional reconstruction method based on edge-cloud collaboration for cultivated land in the field according to claim 1, characterized in that in step S4, transmitting, by the edge computing device, the images required by the sub-task corresponding to the division result to the corresponding containers specifically comprises: transmitting, by the edge computing device, the images required by the sub-task corresponding to the division result to the corresponding containers through edge network facilities, the edge network facilities comprising established multi-operator networks.
  6. The three-dimensional reconstruction method based on edge-cloud collaboration for cultivated land in the field according to claim 1, characterized in that the step S5 comprises:
    extracting image feature points with the SIFT detection algorithm and matching the feature points to form a sparse point cloud;
    forming a dense point cloud with a patch-based three-dimensional reconstruction algorithm;
    and then reconstructing the surface with Delaunay triangulation and the Power Crust algorithm and performing texture mapping to generate the three-dimensional sub-model.
  7. The three-dimensional reconstruction method based on edge-cloud collaboration for cultivated land in the field according to claim 1, characterized in that the step S8 comprises:
    tracking, by the edge computing device, the reconstruction progress of each container in real time; and, after the reconstruction is completed, providing retrieval of the three-dimensional model and images of the corresponding location, and downloading the three-dimensional model and related files locally on demand.
  8. The three-dimensional reconstruction method based on edge-cloud collaboration for cultivated land in the field according to claim 7, characterized in that tracking, by the edge computing device, the reconstruction progress of each container in real time specifically comprises: tracking, by the edge computing device, the reconstruction progress of each container in real time in the form of a browser web page.
PCT/CN2020/131409 2019-12-10 2020-11-25 一种耕地现场边云协同的三维重建方法 WO2021115124A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/594,450 US11763522B2 (en) 2019-12-10 2020-11-25 3D reconstruction method based on on-site edge-cloud collaboration for cultivated land

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911261761.1 2019-12-10
CN201911261761.1A CN111080794B (zh) 2019-12-10 2019-12-10 一种耕地现场边云协同的三维重建方法

Publications (1)

Publication Number Publication Date
WO2021115124A1 true WO2021115124A1 (zh) 2021-06-17

Family

ID=70313978

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/131409 WO2021115124A1 (zh) 2019-12-10 2020-11-25 一种耕地现场边云协同的三维重建方法

Country Status (3)

Country Link
US (1) US11763522B2 (zh)
CN (1) CN111080794B (zh)
WO (1) WO2021115124A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113899320A (zh) * 2021-09-30 2022-01-07 中国科学院光电技术研究所 一种基于空间结构光场的高精度微纳三维形貌测量方法
CN116342685A (zh) * 2023-05-29 2023-06-27 四川凯普顿信息技术股份有限公司 一种基于dom影像的农业耕地地块的面积测量方法

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111080794B (zh) * 2019-12-10 2022-04-05 华南农业大学 一种耕地现场边云协同的三维重建方法
CN111768374B (zh) * 2020-06-22 2023-02-10 绍兴文理学院附属医院 一种肺部图像辅助呈现方法
US20220084224A1 (en) * 2020-09-11 2022-03-17 California Institute Of Technology Systems and methods for optical image geometric modeling
CN112189398B (zh) * 2020-09-28 2023-03-07 广州极飞科技股份有限公司 土地平整方法、系统、装置、设备及存储介质
CN112508441B (zh) * 2020-12-18 2022-04-29 哈尔滨工业大学 基于深度学习三维重建的城市高密度区室外热舒适评价方法
CN114323145A (zh) * 2021-12-31 2022-04-12 华南农业大学 一种基于多传感器信息融合的果园地形建模方法及系统
CN114895701B (zh) * 2022-04-18 2023-04-25 深圳织算科技有限公司 一种无人机巡检方法及系统
CN115546671A (zh) * 2022-11-01 2022-12-30 北京数字政通科技股份有限公司 一种基于多任务学习的无人机变化检测方法及其系统
CN115599559B (zh) * 2022-12-14 2023-03-03 环球数科集团有限公司 一种基于元宇宙的多目标三维快速建模及重构系统
CN117475314B (zh) * 2023-12-28 2024-03-12 自然资源部第三地理信息制图院 一种地质灾害隐患立体识别方法、系统及介质

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109658450A (zh) * 2018-12-17 2019-04-19 武汉天乾科技有限责任公司 一种基于无人机的快速正射影像生成方法
US20190180501A1 (en) * 2017-12-11 2019-06-13 Locus Social Inc. System and Method of Highly-Scalable Mapping and 3D terrain Modeling with Aerial Images
CN110379022A (zh) * 2019-07-22 2019-10-25 西安因诺航空科技有限公司 一种航拍地形三维重建系统中的点云及网格分块方法
CN111080794A (zh) * 2019-12-10 2020-04-28 华南农业大学 一种耕地现场边云协同的三维重建方法

Family Cites Families (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7813591B2 (en) * 2006-01-20 2010-10-12 3M Innovative Properties Company Visual feedback of 3D scan parameters
CN103426165A (zh) * 2013-06-28 2013-12-04 吴立新 一种地面激光点云与无人机影像重建点云的精配准方法
US20150243073A1 (en) * 2014-02-27 2015-08-27 Here Global B.V. Systems and Methods for Refining an Aerial Image
CN105184863A (zh) * 2015-07-23 2015-12-23 同济大学 一种基于无人机航拍序列影像的边坡三维重建方法
CA3012049A1 (en) * 2016-01-20 2017-07-27 Ez3D, Llc System and method for structural inspection and construction estimation using an unmanned aerial vehicle
JP6869264B2 (ja) * 2017-01-20 2021-05-12 ソニーネットワークコミュニケーションズ株式会社 情報処理装置、情報処理方法およびプログラム
US10455222B2 (en) * 2017-03-30 2019-10-22 Intel Corporation Technologies for autonomous three-dimensional modeling
CN107194989B (zh) * 2017-05-16 2023-10-13 交通运输部公路科学研究所 基于无人机飞机航拍的交通事故现场三维重建系统及方法
CN107341851A (zh) * 2017-06-26 2017-11-10 深圳珠科创新技术有限公司 基于无人机航拍影像数据的实时三维建模方法及系统
US10593108B2 (en) * 2017-10-31 2020-03-17 Skycatch, Inc. Converting digital aerial images into a three-dimensional representation utilizing processing clusters
WO2019090480A1 (zh) * 2017-11-07 2019-05-16 深圳市大疆创新科技有限公司 基于无人机航拍的三维重建方法、系统及装置
CN108366118A (zh) * 2018-02-11 2018-08-03 苏州光之翼智能科技有限公司 一种基于云计算的分布式无人机实时测绘系统
CN108765298A (zh) * 2018-06-15 2018-11-06 中国科学院遥感与数字地球研究所 基于三维重建的无人机图像拼接方法和系统
CN109685886A (zh) * 2018-11-19 2019-04-26 国网浙江杭州市富阳区供电有限公司 一种基于混合现实技术的配网三维场景建模方法
US11334986B2 (en) * 2019-01-30 2022-05-17 Purdue Research Foundation System and method for processing images of agricultural fields for remote phenotype measurement

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190180501A1 (en) * 2017-12-11 2019-06-13 Locus Social Inc. System and Method of Highly-Scalable Mapping and 3D terrain Modeling with Aerial Images
CN109658450A (zh) * 2018-12-17 2019-04-19 武汉天乾科技有限责任公司 一种基于无人机的快速正射影像生成方法
CN110379022A (zh) * 2019-07-22 2019-10-25 西安因诺航空科技有限公司 一种航拍地形三维重建系统中的点云及网格分块方法
CN111080794A (zh) * 2019-12-10 2020-04-28 华南农业大学 一种耕地现场边云协同的三维重建方法

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113899320A (zh) * 2021-09-30 2022-01-07 中国科学院光电技术研究所 一种基于空间结构光场的高精度微纳三维形貌测量方法
CN113899320B (zh) * 2021-09-30 2023-10-03 中国科学院光电技术研究所 一种基于空间结构光场的高精度微纳三维形貌测量方法
CN116342685A (zh) * 2023-05-29 2023-06-27 四川凯普顿信息技术股份有限公司 一种基于dom影像的农业耕地地块的面积测量方法

Also Published As

Publication number Publication date
US11763522B2 (en) 2023-09-19
US20220180600A1 (en) 2022-06-09
CN111080794A (zh) 2020-04-28
CN111080794B (zh) 2022-04-05

Similar Documents

Publication Publication Date Title
WO2021115124A1 (zh) 一种耕地现场边云协同的三维重建方法
US11070725B2 (en) Image processing method, and unmanned aerial vehicle and system
Xiang et al. Mini-unmanned aerial vehicle-based remote sensing: Techniques, applications, and prospects
Torres-Sánchez et al. Assessing UAV-collected image overlap influence on computation time and digital surface model accuracy in olive orchards
CN107194989B (zh) 基于无人机飞机航拍的交通事故现场三维重建系统及方法
WO2020221284A1 (zh) 一种流域性洪涝场景的无人机监测方法及系统
Lefevre et al. Toward seamless multiview scene analysis from satellite to street level
CN109242862B (zh) 一种实时的数字表面模型生成方法
CN111629193A (zh) 一种实景三维重建方法及系统
CN104118561B (zh) 一种基于无人机技术的大型濒危野生动物监测的方法
WO2023280038A1 (zh) 一种三维实景模型的构建方法及相关装置
CN105139350A (zh) 一种无人机侦察图像地面实时重建处理系统
CN110428501B (zh) 全景影像生成方法、装置、电子设备及可读存储介质
CN109612445B (zh) 一种基于无人机的WebGIS平台下高精度地形建立方法
CN115082254A (zh) 一种变电站精益管控数字孪生系统
CN109961043B (zh) 一种基于无人机高分辨率影像的单木高度测量方法及系统
Zhou et al. Application of UAV oblique photography in real scene 3d modeling
CN112907749B (zh) 一种多建筑物的三维重建方法及系统
KR102587445B1 (ko) 드론을 이용하여 시계열정보가 포함된 3차원 지도의 제작 방법
US20220414362A1 (en) Method and system for optimizing image data for generating orthorectified image
CN115797256A (zh) 基于无人机的隧道岩体结构面信息的处理方法以及装置
CN115100296A (zh) 一种光伏组件故障定位方法、装置、设备及存储介质
CN113822914A (zh) 倾斜摄影测量模型单体化方法、计算机装置及产品、介质
Li et al. Low-cost 3D building modeling via image processing
KR20220169342A (ko) 드론을 이용한 3차원 지도 제작 방법

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20899671

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 03.11.2022)

122 Ep: pct application non-entry in european phase

Ref document number: 20899671

Country of ref document: EP

Kind code of ref document: A1