CN115578656B - Method and system for supporting full-automatic processing of multi-model multispectral camera data - Google Patents


Info

Publication number
CN115578656B
CN115578656B (application CN202211273396.8A)
Authority
CN
China
Prior art keywords
image
images
multispectral
ground control
control point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211273396.8A
Other languages
Chinese (zh)
Other versions
CN115578656A (en)
Inventor
李文娟
吴文斌
余强毅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Institute of Agricultural Resources and Regional Planning of CAAS
Original Assignee
Institute of Agricultural Resources and Regional Planning of CAAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Agricultural Resources and Regional Planning of CAAS
Priority to CN202211273396.8A
Publication of CN115578656A
Application granted
Publication of CN115578656B
Active legal status (current)
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G06V20/17 Terrestrial scenes taken from planes or by drones
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/80 Geometric correction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G06V20/194 Terrestrial scenes using hyperspectral data, i.e. more or other wavelengths than RGB
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10032 Satellite or aerial image; Remote sensing
    • G06T2207/10036 Multispectral image; Hyperspectral image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20112 Image segmentation details
    • G06T2207/20132 Image cropping
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Remote Sensing (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a method for supporting full-automatic processing of multi-model multispectral camera data, comprising the following steps: S1, extracting the flight information of an unmanned aerial vehicle, and removing the vignetting effect from and registering the multispectral images of a study area shot by the unmanned aerial vehicle; S2, stitching and geometrically correcting the multispectral images registered in S1; S3, clipping and radiometrically calibrating the multispectral images covering the study area. The invention also provides a corresponding system. The method is suitable for multispectral cameras of multiple models, can greatly improve the throughput of data from multispectral cameras mounted on unmanned aerial vehicles, and simplifies the operation flow.

Description

Method and system for supporting full-automatic processing of multi-model multispectral camera data
Technical Field
The invention relates to camera data processing technology, in particular to a method and a system for supporting full-automatic processing of multi-model multispectral camera data.
Background
With the development of unmanned aerial vehicle (UAV) technology and multispectral imaging cameras, UAVs carrying multispectral cameras are increasingly applied to agricultural monitoring. Processing the images acquired by a multispectral camera involves many steps, including: removing the vignetting effect from the images, registering the multi-band images of the same target, stitching the images, geometric correction, generating a three-dimensional point cloud, radiometric correction, generating orthoimages of all bands, clipping multi-angle images of the study area out of the original images according to the study-area boundary, and so on. Several commercial packages can perform this work, such as Pix4D, Agisoft Metashape, and DJI Terra. However, researchers and industrial users still face several problems when processing UAV multispectral images:
1) Many operations must be completed manually. For example, in the geometric-correction step, ground control points are located in the stitched images by visual inspection and their measured longitude and latitude are entered by hand. Typically 3-5 ground control points are placed per flight, and the corresponding longitude, latitude, and altitude must be entered each time. When many flights have to be processed and the data volume is large, this step consumes a great deal of labor and time.
2) Some software does not support registration between multi-band images, while other software registers them only after the orthoimages have been generated, which to some extent affects the accuracy of the orthoimages.
3) In many current studies, the images of each study area are clipped directly from the orthoimage, so their accuracy and resolution are dictated by the orthoimage. Moreover, because the orthoimage is a software-generated image with a vertically downward viewing angle, the multi-angle information of the individual pictures taken during the original UAV flight is lost, which is unfavorable for later crop parameter extraction and application.
4) Several commercial multispectral imaging cameras are on the market, but there is no fully automatic processing method compatible with cameras of different brands.
Disclosure of Invention
To address the problems above, the invention provides a method for automatically processing UAV data that is compatible with multispectral imaging cameras of multiple models and forms an automatic pipeline of preprocessing, stitching, radiometric calibration, and study-area clipping of multispectral images.
The method for supporting full-automatic processing of multi-model multispectral camera data comprises the following steps: S1, extracting the flight information of an unmanned aerial vehicle, and removing the vignetting effect from and registering the multispectral images of a study area shot by the unmanned aerial vehicle; S2, stitching and geometrically correcting the multispectral images registered in S1; S3, clipping and radiometrically calibrating the multispectral images covering the study area.
The invention also provides a system for supporting full-automatic processing of multi-model multispectral camera data, which comprises a processor capable of implementing the method.
The beneficial effects of the invention include: the method automatically registers multispectral images; automatically finds the positions of ground control points and assigns real geographic coordinates; automatically finds the radiometric calibration plate and extracts the relevant measured data to perform radiometric calibration; and automatically clips out the images containing the study area and calculates the multi-angle reflectance. It is suitable for many multispectral cameras, can greatly improve the throughput of data from multispectral cameras mounted on UAVs, and simplifies the operation flow, so that even people who are not image-processing or remote-sensing professionals can process flight images well.
Drawings
For easier understanding, the invention is described in more detail with reference to the specific embodiments shown in the drawings. These drawings depict only typical embodiments of the invention and are therefore not to be considered as limiting its scope.
FIG. 1 is a flow chart of one embodiment of the method of the present invention.
Fig. 2 is a flow chart of another embodiment of the method of the present invention.
Fig. 3 is a sample of the vignetting effect correction factor for an image acquired by a multispectral camera.
Fig. 4 shows various images containing ground control points.
Detailed Description
Embodiments of the invention are described below with reference to the accompanying drawings so that those skilled in the art can better understand and implement the invention. The examples listed do not limit the invention, and the technical features of the following examples can be combined with one another where no conflict arises; like parts are denoted by like reference numerals.
As shown in fig. 1-2, the method of the present invention includes S1-S3.
S1, extracting the flight information of the unmanned aerial vehicle and preprocessing the multispectral images shot by it: vignetting removal and registration. In one embodiment, step S1 includes S11-S14.
S11, acquiring the following flight information from the raw UAV flights: number of consecutive flights, number of bands, vignetting correction coefficients, GPS information of image acquisition (longitude, latitude, altitude, and time), radiometric reference values of the ground calibration plate, and ground control point information (longitude, latitude, and altitude of each control point).
Specifically, the following are checked one by one: whether the data come from multiple consecutive flights over the same plot; the brand of the multispectral camera and the corresponding number of bands; whether the acquired images carry XMP metadata, and whether the XMP metadata contain vignetting correction coefficients; whether EXIF header information is present, and whether the EXIF information contains the GPS information (longitude, latitude, altitude, and time) of image acquisition; whether radiometric reference values of the ground calibration plate, i.e., per-band reflectance data, are available; and, for data collected by a non-RTK drone, whether a ground control point file exists and whether it contains complete information (longitude, latitude, and altitude of each control point). If the information is complete, processing continues to the next step; if anything is missing, the missing items are written to the run log and the operator is prompted.
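For illustration, this completeness check is easy to script. The sketch below is a minimal example, assuming the EXIF/XMP metadata have already been parsed into one dictionary per image; the field names are hypothetical, since real XMP/EXIF keys differ between camera vendors:

    import logging

    logging.basicConfig(filename="run.log", level=logging.INFO)

    # Hypothetical field names: real XMP/EXIF keys differ between camera vendors.
    REQUIRED_IMAGE_FIELDS = [
        "vignetting_polynomial",                                      # vignetting coefficients (XMP)
        "gps_longitude", "gps_latitude", "gps_altitude", "gps_time",  # EXIF GPS block
    ]

    def check_flight_info(images_meta, panel_reflectance=None, gcp_points=None, rtk=False):
        """Return True if the flight information is complete; log every missing item."""
        ok = True
        for name, meta in images_meta.items():
            missing = [f for f in REQUIRED_IMAGE_FIELDS if meta.get(f) is None]
            if missing:
                logging.info("image %s is missing: %s", name, ", ".join(missing))
                ok = False
        if panel_reflectance is None:
            logging.info("missing radiometric reference values of the calibration plate")
            ok = False
        if not rtk and not gcp_points:   # non-RTK data additionally needs a GCP file
            logging.info("non-RTK flight, but no complete ground control point file")
            ok = False
        return ok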
S12, for multispectral images obtained by multiple consecutive flights over the same plot, merging the data of the multiple flights by flight number and band.
In one embodiment, the images can be merged into the same folder as follows: the first flight has N images named 1_XXnm.tif through N_XXnm.tif, where XX denotes the band; the second flight has M images, whose first image is renamed N+1_XXnm.tif and whose M-th image is renamed N+M_XXnm.tif; and so on, until all flights over the same plot are merged into the same folder. Placing the images of all flights in one folder in sequence avoids duplicate and overwritten file names.
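A minimal sketch of this merge, assuming per-flight folders of images named index_XXnm.tif as described above (the folder names and the helper are illustrative):

    import re
    import shutil
    from pathlib import Path

    def merge_flights(flight_dirs, out_dir):
        """Copy images of consecutive flights into one folder, renumbering to avoid name clashes."""
        out = Path(out_dir)
        out.mkdir(parents=True, exist_ok=True)
        name_re = re.compile(r"^(\d+)_(\d+nm)\.tif$", re.IGNORECASE)
        offset = 0                                  # images merged so far (N after flight 1, ...)
        for d in flight_dirs:
            indices = set()
            for f in sorted(Path(d).glob("*.tif")):
                m = name_re.match(f.name)
                if not m:
                    continue
                idx, band = int(m.group(1)), m.group(2)
                indices.add(idx)
                # e.g. flight 2, image 1, band 560nm  ->  N+1_560nm.tif
                shutil.copy(f, out / f"{idx + offset}_{band}.tif")
            offset += len(indices)

    # merge_flights(["flight_1", "flight_2"], "merged")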
S13, removing the vignetting effect from all images.
The vignetting correction coefficients are read from the XMP metadata of each image and applied to the original multispectral image.
Fig. 3 shows a sample of the vignetting polynomial (Vignetting Polynomial) and vignetting center (Vignetting Center) recorded for an image acquired by the DJI Phantom 4 Multispectral camera. The correction can be performed with a common polynomial equation; denoting the value of each pixel before correction as DN and after correction as DN1:
DN1 = DN × (k6·r^6 + k5·r^5 + k4·r^4 + k3·r^3 + k2·r^2 + k1·r)
where k1 through k6 are the six coefficients in Vignetting Polynomial, and r is the geometric distance from each pixel to the correction center Vignetting Center.
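A minimal sketch of this correction, assuming the six polynomial coefficients and the vignetting center have already been read from the XMP metadata:

    import numpy as np

    def remove_vignetting(dn, coeffs, center):
        """Apply the polynomial model above: DN1 = DN x (k6*r^6 + ... + k1*r)."""
        h, w = dn.shape
        ys, xs = np.mgrid[0:h, 0:w]
        r = np.hypot(xs - center[0], ys - center[1])   # distance to the vignetting center
        k1, k2, k3, k4, k5, k6 = coeffs                # six coefficients from Vignetting Polynomial
        factor = k6*r**6 + k5*r**5 + k4*r**4 + k3*r**3 + k2*r**2 + k1*r
        return dn * factor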
S14, registering the images of different bands in batches to obtain registered multispectral images.
For the multi-band images of the same target, one band image (preferably the green band) is used as the reference, and the images of the other bands are shifted according to the inter-band image similarity so that their matching degree with the reference band image is maximized, generating new images, as sketched below. At the same time, the matching scores of each image against the reference band image, before and after shifting, are written to the run log.
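The patent does not fix a particular similarity measure, so the sketch below uses OpenCV's ECC criterion with a translation-only motion model as one plausible implementation of this shift-based registration:

    import cv2
    import numpy as np

    def register_band(ref, band):
        """Shift one band onto the reference band by maximizing the ECC similarity score."""
        ref32, band32 = ref.astype(np.float32), band.astype(np.float32)
        warp = np.eye(2, 3, dtype=np.float32)          # translation-only motion model
        criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 200, 1e-6)
        score, warp = cv2.findTransformECC(ref32, band32, warp, cv2.MOTION_TRANSLATION, criteria)
        aligned = cv2.warpAffine(band32, warp, (ref.shape[1], ref.shape[0]),
                                 flags=cv2.INTER_LINEAR + cv2.WARP_INVERSE_MAP)
        return aligned, score                          # score = matching degree for the run log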
This registration method automatically registers the multi-band images of the same target and ensures that all bands are aligned before the orthoimage is generated; it is not limited to a single multispectral camera but applies to multispectral cameras of multiple models.
S2, stitching and geometrically correcting the multispectral images registered in S1. In one embodiment, step S2 includes S21-S25.
S21, stitching the images and generating a three-dimensional point cloud. In one embodiment, the API of the commercial software Agisoft Metashape can be called with the multispectral images obtained in S1 as input; one band (preferably the green band) serves as the reference for stitching and three-dimensional reconstruction, and the remaining bands follow the alignment of the reference band. Based on structure from motion (SfM), three-dimensional reconstruction is performed from the correspondences among the images and their features, generating a three-dimensional point cloud of the study-area plot.
At present, several commercial and open-source packages can perform image stitching and three-dimensional point cloud reconstruction; among them, Agisoft Metashape is the most mature and stable and provides an API for custom secondary development, which is convenient to call.
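For illustration, a minimal stitching and point-cloud sketch through the Metashape Python API (function names follow recent Metashape versions and may differ between releases, so treat the calls as indicative; the input path is an assumption):

    import glob
    import Metashape   # Agisoft Metashape Professional Python API

    photos = sorted(glob.glob("merged/*_560nm.tif"))   # illustrative path to the green-band images

    doc = Metashape.Document()
    chunk = doc.addChunk()
    chunk.addPhotos(photos)
    chunk.matchPhotos()       # feature detection and matching between overlapping images
    chunk.alignCameras()      # SfM: camera poses plus sparse point cloud
    chunk.buildDepthMaps()
    chunk.buildPointCloud()   # dense point cloud (Metashape 2.x; 1.x uses buildDenseCloud)
    doc.save("study_area.psx")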
S22, establishing a ground control point pattern deep learning model.
1) A large number (e.g., 2000) of multispectral images containing ground control points are acquired before the drone data are processed.
2) Annotating the ground control point pattern. The reference band image (e.g., the green band) is selected and all ground control points are framed with the rectangle tool of the online deep learning annotation tool Label Studio. Fig. 4 shows various images containing ground control points; an RGB composite of the red, green, and blue bands is shown for better visualization.
3) All annotated images are used as the training data set, and a network model is trained to obtain the ground control point pattern deep learning model. Preferably, a Faster R-CNN deep network is used for training; compared with the R-CNN and Fast R-CNN networks, it performs better and captures more target feature information.
During training, 70% of the images are randomly selected as the training set and the remaining 30% as the validation set; after training, the generated model predicts on the 30% validation images, and the predictions are compared with the actual results for accuracy evaluation. The accuracy evaluation is repeated several times (for example, three times), and the results are written to the run log.
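A minimal fine-tuning sketch with torchvision's Faster R-CNN, assuming a data loader that yields images and box targets in torchvision's detection format (two classes: background and ground control point; the hyperparameters are illustrative):

    import torch
    import torchvision
    from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

    def build_gcp_detector(num_classes=2):             # background + ground control point
        model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
        in_features = model.roi_heads.box_predictor.cls_score.in_features
        model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
        return model

    def train(model, loader, epochs=10, device="cuda"):
        model.to(device).train()
        opt = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9, weight_decay=5e-4)
        for _ in range(epochs):
            for images, targets in loader:             # targets: [{"boxes": [n,4], "labels": [n]}]
                images = [im.to(device) for im in images]
                targets = [{k: v.to(device) for k, v in t.items()} for t in targets]
                loss = sum(model(images, targets).values())   # sum of the detection losses
                opt.zero_grad()
                loss.backward()
                opt.step()
        return model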
Using deep learning to automatically find the ground control points in the registered images and assign them the corresponding field-measured longitude, latitude, and altitude greatly reduces manual work.
S23, applying the model generated in S22 to the images having a complete ground control point pattern (with real longitude, latitude, and altitude information).
First, the images with a complete ground control point pattern are selected from all reference band images; then the ground control point pattern is framed, and the image number and the position of the pattern center on the image are output. All of this information is written to the run log.
S24, optimizing the three-dimensional point cloud generated in S21 to obtain the geometrically corrected three-dimensional point cloud.
1) From the geographic position of each image within the whole study area, computed after the point cloud is generated in S21, the longitude and latitude of each control point identified in S23 are calculated. 2) These are compared with the longitude and latitude of each control point in the ground control point file from S11; the nearest actual ground control point is selected and the corresponding number is assigned. 3) All images are geometrically corrected (e.g., by calling the Agisoft Metashape API). Preferably, the geometric accuracy of the corrected images should be within 10 mm; if this range is exceeded, execution stops with an error, otherwise processing continues.
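A sketch of sub-step 2) above, matching each detected control point to the nearest surveyed one; the haversine distance and the data layout are illustrative assumptions:

    import math

    def haversine_m(lon1, lat1, lon2, lat2):
        """Great-circle distance in meters between two longitude/latitude points."""
        r = 6371000.0
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp, dl = p2 - p1, math.radians(lon2 - lon1)
        a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
        return 2 * r * math.asin(math.sqrt(a))

    def assign_nearest_gcp(detected, surveyed):
        """detected: [(lon, lat)]; surveyed: {number: (lon, lat, alt)} from the S11 GCP file."""
        matches = {}
        for i, (lon, lat) in enumerate(detected):
            nearest = min(surveyed,
                          key=lambda n: haversine_m(lon, lat, surveyed[n][0], surveyed[n][1]))
            matches[i] = nearest          # detected point i gets the nearest surveyed GCP number
        return matches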
S25, geometrically correcting all images (the Agisoft Metashape API is called to continue the run) and generating an orthoimage of the whole flight.
S3, clipping and radiometrically calibrating the multispectral images covering the study area. Based on the information obtained from image stitching and the input study-area extent, the images covering the study area are automatically selected from all images and clipped to the study-area extent, yielding multi-angle images of the study area. In one embodiment, step S3 includes S31-S34.
S31, selecting the images containing the study area according to the study-area boundary obtained in S11, the three-dimensional point cloud optimized in S24, and the longitude and latitude of each image, and clipping them to the study-area boundary to obtain all multispectral images containing the study area. Preferably, a new folder named after the study area is created to store all clipped images.
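A clipping sketch with rasterio and shapely, under two assumptions not fixed by the patent: the images have been georeferenced as GeoTIFFs after S24, and the study-area boundary is a polygon in the same coordinate reference system:

    from pathlib import Path

    import rasterio
    from rasterio.mask import mask
    from shapely.geometry import box, mapping

    def clip_to_study_area(image_paths, boundary, out_dir):
        """Keep only the images intersecting the study-area polygon and crop them to it."""
        out = Path(out_dir)                 # folder named after the study area
        out.mkdir(parents=True, exist_ok=True)
        for p in image_paths:
            with rasterio.open(p) as src:
                if not box(*src.bounds).intersects(boundary):
                    continue                # image does not cover the study area
                data, transform = mask(src, [mapping(boundary)], crop=True)
                profile = src.profile
                profile.update(height=data.shape[1], width=data.shape[2], transform=transform)
            with rasterio.open(out / Path(p).name, "w", **profile) as dst:
                dst.write(data)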
S32, establishing a radiometric calibration plate deep learning model.
1) A large number (e.g., 2000) of multispectral images containing the radiometric calibration plate are acquired before the unmanned aerial vehicle data are processed.
2) Annotating the radiometric calibration plates. A reference band image (e.g., the green band) is selected and all plates are framed with the rectangle tool of the online deep learning annotation tool Label Studio.
3) All annotated images are used as the training data set, and a network model is trained to obtain the radiometric calibration plate deep learning model. Preferably, a Faster R-CNN deep network is used for training; compared with the R-CNN and Fast R-CNN networks, it performs better and captures more target feature information.
Deep learning is used to automatically find the radiometric calibration plate placed on the ground and read the corresponding information, after which the radiometric calibration is performed, reducing the manual work of locating the reference plate.
S33, applying the radiometric calibration plate deep learning model to all multispectral images obtained in S31 and selecting the images containing the calibration plate, to obtain radiometrically calibrated multispectral images of the study area.
If a radiometric calibration plate with a QR code is used, the QR code is read to obtain the plate's multi-band reflectance data contained in it; if an ordinary calibration plate is used, the multi-band reflectance entered in S11 is read. For both kinds of plates, the plate center is located and a square buffer is drawn whose side length is a fraction of the plate's side length, for example two thirds, ensuring that the buffer lies on the plate and is unaffected by its boundary; the measured DN values of this area are read, and the radiometric calibration of each image is performed with the following formula:
BRF(Ω, λ) = (DN / t) / (DN_ref / t_ref) × BRF_ref(Ω, λ)        (1)
where DN is the value of the multispectral image registered in S14, t is the integration time of that image, DN_ref is the DN value of the radiometric calibration plate read as above, t_ref is the integration time of the image containing the calibration plate, Ω is the observation geometry (the view zenith, view azimuth, solar zenith, and solar azimuth angles of each image), λ is the band, BRF_ref(Ω, λ) is the standard reflectance of the calibration plate in band λ, and BRF(Ω, λ) is the radiometrically calibrated reflectance of each image.
Finally, the radiometrically calibrated reflectance images are generated, the mean reflectance of each image is calculated, and the results are exported to an Excel table.
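A sketch applying equation (1) band by band, assuming the plate DN value and the integration times have been extracted as described above (all numbers below are illustrative):

    import numpy as np

    def calibrate(dn, t, dn_ref, t_ref, brf_ref):
        """Equation (1): BRF = (DN / t) / (DN_ref / t_ref) x BRF_ref, applied per band."""
        return (dn / t) / (dn_ref / t_ref) * brf_ref

    # dn = registered band image (after S14), t = its integration time,
    # dn_ref / t_ref = plate DN (central-buffer mean) and integration time,
    # brf_ref = standard reflectance of the plate in this band.
    band = calibrate(np.array([[1200.0, 1300.0]]), t=1 / 500,
                     dn_ref=2400.0, t_ref=1 / 500, brf_ref=0.5)
    print(band.mean())                      # mean reflectance, exported to the Excel table in practice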
S34, for the radiometrically calibrated multispectral images of the study area obtained in S33, generating a radiometrically calibrated orthoimage based on the multi-angle observed reflectance of the study area and the corresponding geometric information.
Specifically, among all images containing the radiometric calibration plate selected in S33, the image whose observation zenith angle is closest to nadir is selected and its DN_ref is extracted from the plate; using equation (1), the orthoimage generated in S25 is radiometrically calibrated and converted into a multi-band reflectance orthoimage.
The invention has been tested with multispectral cameras of multiple models (MicaSense RedEdge-MX, DJI Phantom 4 Multispectral, Airphen, etc.) and on various crops, with good results.
The foregoing embodiments are only preferred embodiments of the invention. The phrases "in one embodiment," "in another embodiment," "in yet another embodiment," or "in other embodiments" in this specification may all refer to one or more of the same or different embodiments in accordance with the present disclosure. Common variations and substitutions by those skilled in the art within the scope of the present invention are intended to be included in the scope of the invention.

Claims (7)

1. A method for supporting full-automatic processing of multi-model multispectral camera data, comprising:
S1, extracting flight information of an unmanned aerial vehicle, and removing the vignetting effect from and registering the multispectral images of a study area shot by the unmanned aerial vehicle, comprising the following steps: S11, acquiring the following flight information from the raw unmanned aerial vehicle flights: whether the data come from multiple consecutive flights over the same plot; the brand of the multispectral camera and the corresponding number of bands; whether the acquired images contain XMP metadata, and whether the XMP metadata contain vignetting correction coefficients; whether the XMP metadata contain EXIF header information, and whether the EXIF information contains the longitude, latitude, altitude, and time of image acquisition; whether radiometric reference values of the ground calibration plate exist; and, for data acquired by a non-RTK unmanned aerial vehicle, judging whether a ground control point file exists and whether it contains complete information; S12, for images obtained by multiple consecutive flights over the same plot, merging the data of the multiple flights by flight number and band; S13, removing the vignetting effect from all images; S14, registering the images of different bands in batches to obtain registered multispectral images;
S2, comprising: S21, stitching the multispectral images and generating a three-dimensional point cloud; S22, establishing a ground control point pattern deep learning model, comprising: before processing the unmanned aerial vehicle data, acquiring in advance a plurality of multispectral images containing ground control points; annotating the ground control point pattern; using all annotated images as a training data set and training a network model to obtain the ground control point pattern deep learning model; searching for ground control points in the registered multispectral images with the deep learning model, and assigning the corresponding field-measured longitude, latitude, and altitude; S23, for the images having a complete ground control point pattern, applying the ground control point pattern deep learning model generated in S22, selecting the images with a complete ground control point pattern from all reference band images, framing the ground control point pattern, and outputting the image number and the position of the pattern center on the image; S24, calculating the longitude and latitude of each control point identified in S23 from the geographic position of each image within the whole study area computed after the point cloud is generated in S21; comparing them with the longitude and latitude of each control point in the ground control point file of S11, selecting the nearest actual ground control point, and assigning the corresponding number; geometrically correcting all images; S25, generating an orthoimage of the whole flight based on the three-dimensional point cloud of S24;
S3, clipping and radiometrically calibrating the multispectral images covering the study area.
2. The method according to claim 1, wherein step S3 comprises:
s31, selecting an image containing a research area according to the boundary range of the research area obtained in the S1, the three-dimensional point cloud optimized in the S24 and longitude and latitude information of each image, and cutting according to the boundary range of the research area to obtain all multispectral images containing the research area;
s32, establishing a radiation calibration plate deep learning model;
s33, applying the deep learning model of the radiation calibration plate to all the multispectral images obtained in the S31, and selecting the images containing the radiation calibration plate to obtain multispectral images of the research area after radiation calibration;
s34, aiming at the multispectral image obtained in the S33 after radiation calibration of the research area, obtaining an orthographic image after radiation calibration based on the multi-angle observation reflectivity of the research area and the corresponding geometric information.
3. The method according to claim 1, wherein in step S22 all ground control points are framed with a rectangle tool on the green band image.
4. The method according to claim 2, wherein step S32 comprises:
1) Before processing the unmanned aerial vehicle data, acquiring in advance a plurality of multispectral images containing radiometric calibration plates;
2) Annotating the radiometric calibration plates;
3) Using all annotated images as a training data set and training a network model to obtain the radiometric calibration plate deep learning model.
5. The method according to claim 4, wherein in step S32 all radiometric calibration plates are framed with a rectangle tool on the green band image.
6. The method according to claim 1, wherein in step S14, for the multi-band images of the same target, one band image is used as a reference and the images of the other bands are shifted according to the inter-band image similarity so that their matching degree with the reference band image is maximized, generating new images.
7. A system supporting fully automatic processing of multi-model multispectral camera data, characterized in that the system comprises a processor loaded with a computer program which, when run, performs the method according to any one of claims 1-6.
CN202211273396.8A 2022-10-18 2022-10-18 Method and system for supporting full-automatic processing of multi-model multispectral camera data Active CN115578656B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211273396.8A CN115578656B (en) 2022-10-18 2022-10-18 Method and system for supporting full-automatic processing of multi-model multispectral camera data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211273396.8A CN115578656B (en) 2022-10-18 2022-10-18 Method and system for supporting full-automatic processing of multi-model multispectral camera data

Publications (2)

Publication Number Publication Date
CN115578656A CN115578656A (en) 2023-01-06
CN115578656B 2023-07-04

Family

ID=84585135

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211273396.8A Active CN115578656B (en) 2022-10-18 2022-10-18 Method and system for supporting full-automatic processing of multi-model multispectral camera data

Country Status (1)

Country Link
CN (1) CN115578656B (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106683068A (en) * 2015-11-04 2017-05-17 北京文博远大数字技术有限公司 Three-dimensional digital image acquisition method and equipment thereof

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102968631B (en) * 2012-11-22 2015-11-25 中国科学院、水利部成都山地灾害与环境研究所 The automatic geometric of mountain area multispectral remote sensing satellite image is corrected and ortho-rectification method
US11557057B2 (en) * 2017-05-04 2023-01-17 Skydio, Inc. Ground control point center determination
CN108111777B (en) * 2017-12-15 2021-02-02 武汉精立电子技术有限公司 Dark corner correction system and method
US11501104B2 (en) * 2018-11-27 2022-11-15 Here Global B.V. Method, apparatus, and system for providing image labeling for cross view alignment
CN109815916A (en) * 2019-01-28 2019-05-28 成都蝉远科技有限公司 A kind of recognition methods of vegetation planting area and system based on convolutional neural networks algorithm
FR3096499B1 (en) * 2019-05-24 2021-06-18 Inria Inst Nat Rech Informatique & Automatique Shooting processing device

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106683068A (en) * 2015-11-04 2017-05-17 北京文博远大数字技术有限公司 Three-dimensional digital image acquisition method and equipment thereof

Also Published As

Publication number Publication date
CN115578656A (en) 2023-01-06

Similar Documents

Publication Publication Date Title
Torres-Sánchez et al. Assessing UAV-collected image overlap influence on computation time and digital surface model accuracy in olive orchards
Gabrlik et al. Calibration and accuracy assessment in a direct georeferencing system for UAS photogrammetry
CN107782322B (en) Indoor positioning method and system and indoor map establishing device thereof
CN101685539A (en) On-line ortho-rectification method and system for remote sensing image
US20160343152A1 (en) Methods and systems for object based geometric fitting
CN110225264A Method for detecting residual plastic film in farmland from UAV near-ground aerial photography
Nocerino et al. Multi-temporal analysis of landscapes and urban areas
CN108759788B (en) Unmanned aerial vehicle image positioning and attitude determining method and unmanned aerial vehicle
CN115331130B (en) Unmanned aerial vehicle inspection method based on geographical marker assisted navigation and unmanned aerial vehicle
CN113340277A (en) High-precision positioning method based on unmanned aerial vehicle oblique photography
CN113325872A (en) Plant inspection method, device and system and aircraft
CN115453555A (en) Unmanned aerial vehicle rapid monitoring method and system for grassland productivity
CN114972545A (en) On-orbit data rapid preprocessing method for hyperspectral satellite
CN112949411B (en) Spectral image correction method and device
CN114926732A (en) Multi-sensor fusion crop deep learning identification method and system
CN112799430B (en) Programmable unmanned aerial vehicle-based road surface image intelligent acquisition method
CN115578656B (en) Method and system for supporting full-automatic processing of multi-model multispectral camera data
CN115082812A (en) Agricultural landscape non-agricultural habitat green patch extraction method and related equipment thereof
CN105260389A (en) Unmanned aerial vehicle reconnaissance image data management and visual display method
Miraki et al. Using canopy height model derived from UAV imagery as an auxiliary for spectral data to estimate the canopy cover of mixed broadleaf forests
Zhang et al. A 250m annual alpine grassland AGB dataset over the Qinghai-Tibetan Plateau (2000–2019) based on in-situ measurements, UAV images, and MODIS Data
CN115272314B (en) Agricultural low-altitude remote sensing mapping method and device
Gotovac et al. A model for automatic geomapping of aerial images mosaic acquired by UAV
CN114390270B (en) Real-time intelligent site panorama exploration method and device and electronic equipment
Khaliq et al. Enhancing NDVI Calculation of Low-Resolution Imagery using ESRGANs

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant