CN115578656A - Method and system for supporting full-automatic processing of multi-model multispectral camera data

Method and system for supporting full-automatic processing of multi-model multispectral camera data

Info

Publication number
CN115578656A
Authority
CN
China
Prior art keywords
images
image
multispectral
research area
ground control
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211273396.8A
Other languages
Chinese (zh)
Other versions
CN115578656B (en)
Inventor
李文娟
吴文斌
余强毅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Institute of Agricultural Resources and Regional Planning of CAAS
Original Assignee
Institute of Agricultural Resources and Regional Planning of CAAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Agricultural Resources and Regional Planning of CAAS filed Critical Institute of Agricultural Resources and Regional Planning of CAAS
Priority to CN202211273396.8A priority Critical patent/CN115578656B/en
Publication of CN115578656A publication Critical patent/CN115578656A/en
Application granted granted Critical
Publication of CN115578656B publication Critical patent/CN115578656B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G06V20/17 Terrestrial scenes taken from planes or by drones
    • G06T5/80
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G06V20/194 Terrestrial scenes using hyperspectral data, i.e. more or other wavelengths than RGB
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10032 Satellite or aerial image; Remote sensing
    • G06T2207/10036 Multispectral image; Hyperspectral image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20112 Image segmentation details
    • G06T2207/20132 Image cropping
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Remote Sensing (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a method for supporting full-automatic processing of multi-model multispectral camera data, which comprises the following steps: S1, extracting the flight information of an unmanned aerial vehicle, then eliminating the vignetting effect in, and registering, the multispectral images of a research area captured by the unmanned aerial vehicle; S2, stitching and geometrically correcting the multispectral images registered in S1; and S3, cropping and radiometrically calibrating the multispectral images covering the research area. The invention correspondingly provides a system as well. The method is compatible with multiple multispectral cameras, can greatly improve the capacity to process data from multispectral cameras carried on unmanned aerial vehicles, and simplifies the operation process.

Description

Method and system for supporting full-automatic processing of multi-model multispectral camera data
Technical Field
The invention relates to a camera data processing technology, in particular to a method and a system for supporting full-automatic processing of multi-model multispectral camera data.
Background
With the development of unmanned aerial vehicle (UAV) technology and multispectral imaging cameras, the application of UAVs carrying multispectral imaging cameras in agricultural monitoring is increasing day by day. Processing the images acquired by a multispectral camera involves a number of steps, including: eliminating the vignetting effect of the images, registering the multi-band images of the same target, stitching the images, geometric correction, generating the three-dimensional point cloud, radiometric correction, generating orthoimages of all bands, and cropping multi-angle images of the research area from the original images according to the research-area boundary. Several commercial software packages can perform these operations, such as Pix4D, Agisoft Metashape and DJI Terra. However, scientific researchers and practitioners in industry still face many problems when processing UAV multispectral images:
1) Many operations must be completed manually. For example, in the geometric-correction step, the ground control points must be found by visual inspection in the stitched image and their measured longitude and latitude entered by hand. Typically 3-5 ground control points are placed per flight, and longitude, latitude and altitude must be entered for each of them. When many UAV flights are to be processed and the data volume is large, this step consumes a great deal of labor and time.
2) Some software does not support registration between multi-band images at all, while other software registers the bands only after the orthoimage has been generated, which affects the accuracy of the orthoimage to some extent.
3) In many current studies, the image of each research area is cropped directly from the orthoimage, so its accuracy and resolution are directly determined by the orthoimage. Moreover, the orthoimage is a software-generated image whose observation angle is vertically downward, so the multi-angle information of the original UAV images taken in flight is lost, which is unfavorable for later crop-parameter extraction and application.
4) Although many commercial multispectral imaging cameras are on the market, there is no fully automatic processing method compatible with cameras of different brands.
Disclosure of Invention
In view of the problems described in the background, the invention provides an automatic method for processing UAV data compatible with multiple models of multispectral imaging camera, producing an automated pipeline of multispectral image preprocessing, stitching, radiometric calibration and research-area cropping.
The invention discloses a method for supporting full-automatic processing of multi-model multispectral camera data, which comprises the following steps: S1, extracting the flight information of an unmanned aerial vehicle, then eliminating the vignetting effect in, and registering, the multispectral images of the research area captured by the unmanned aerial vehicle; S2, stitching and geometrically correcting the multispectral images registered in S1; and S3, cropping and radiometrically calibrating the multispectral images covering the research area.
The invention also correspondingly provides a system for supporting full-automatic processing of multi-model multispectral camera data, which comprises a processor capable of implementing the method.
The beneficial effects of the invention include: the method automatically registers the multispectral images; automatically finds the ground-control-point positions and assigns them real geographic coordinates; automatically finds the radiometric calibration plate and extracts the relevant measured data to perform radiometric calibration; and automatically crops the images containing the research area and calculates the multi-angle reflectance. It is compatible with multiple multispectral cameras, greatly improves the capacity to process data from a multispectral camera carried on a UAV, simplifies the operation process, and enables people who are not image-processing or remote-sensing professionals to process the flight images well.
Drawings
In order that the invention may be more readily understood, reference will now be made in detail to the embodiments illustrated in the accompanying drawings. These drawings depict only typical embodiments of the invention and are not therefore to be considered to limit the scope of the invention.
FIG. 1 is a flow chart of one embodiment of the method of the present invention.
FIG. 2 is a flow chart of another embodiment of the method of the present invention.
FIG. 3 is a sample of the vignetting-effect correction coefficients for an image acquired by a multispectral camera.
FIG. 4 shows various images containing ground control points.
Detailed Description
Embodiments of the present invention are described below with reference to the accompanying drawings so that those skilled in the art can better understand and practice the invention. The illustrated embodiments are not intended to limit the invention; the technical features of the embodiments below can be combined with each other provided they do not conflict, and like parts are denoted by like reference numerals.
As shown in fig. 1-2, the method of the present invention includes S1-S3.
S1, extracting the flight information of the unmanned aerial vehicle and preprocessing the multispectral images it captured: eliminating the vignetting effect and performing registration. In one embodiment, step S1 includes S11-S14.
S11, acquiring the following flight information from the original flights of the unmanned aerial vehicle: the number of consecutive flights, the number of bands, the vignetting-effect correction coefficients, the GPS information of image acquisition (longitude, latitude, altitude and time), the radiation reference value of the ground radiometric calibration plate, and the ground-control-point information (longitude, latitude and altitude of each control point).
Specifically, the following items are checked one by one: whether there are multiple consecutive flights over the same plot; the brand of the multispectral camera and the corresponding number of bands; whether the acquired images carry XMP metadata and whether it contains the vignetting-effect correction coefficients; whether there is EXIF header information and whether it contains the GPS information of image acquisition (longitude, latitude, altitude and time); and whether there is a radiation reference value for the ground radiometric calibration plate, i.e. the reflectance of each band. If the data were collected by a non-RTK unmanned aerial vehicle, it must also be determined whether a ground-control-point file exists and whether it contains complete information (longitude, latitude and altitude of each control point). If the information is complete, processing continues to the next step; if anything is missing, the missing items are written to the run log and the operator is prompted.
S12, for the multispectral images obtained from multiple consecutive flights over the same plot, merging the data of the flights according to flight number and band.
In one embodiment, the images can be merged into the same folder. The specific operation is as follows: the first flight has N images named 1_XXnm.tif through N_XXnm.tif, where XX denotes the band; the second flight has M images, of which the first is renamed (1+N)_XXnm.tif and the M-th is renamed (M+N)_XXnm.tif; and so on until all flights over the same plot are merged into the same folder. Placing the images of the flights in the same folder in sequence avoids duplicated and overwritten image names. A sketch of this renumbering is given below.
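As an illustration, the renumbering can be scripted in a few lines of Python. This is a minimal sketch assuming the naming convention <index>_<band>nm.tif described above; the folder paths are hypothetical:

```python
import shutil
from pathlib import Path

def merge_flights(flight_dirs, merged_dir):
    """Merge consecutive flights over the same plot into one folder,
    renumbering images named '<index>_<band>nm.tif' so that names from
    later flights never collide with earlier ones."""
    merged = Path(merged_dir)
    merged.mkdir(parents=True, exist_ok=True)
    offset = 0
    for flight in flight_dirs:
        max_idx = 0
        for img in sorted(Path(flight).glob("*.tif")):
            idx, band_part = img.name.split("_", 1)   # e.g. '7', '550nm.tif'
            max_idx = max(max_idx, int(idx))
            shutil.copy(img, merged / f"{int(idx) + offset}_{band_part}")
        offset += max_idx   # the next flight's indices start after N, then N+M, ...

# merge_flights(["flight_1", "flight_2"], "merged")   # hypothetical paths
```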
S13, eliminating the vignetting effect from all images.
The vignetting-effect correction coefficients are read from each image's XMP metadata and applied to the original multispectral images for correction.
FIG. 3 shows the vignetting-effect correction coefficients (Vignetting Polynomial) and the vignetting center (Vignetting Center) of an image collected by a DJI Phantom 4 multispectral camera. As shown in FIG. 3, the correction can be performed with an ordinary polynomial; letting DN be the value of each pixel before correction and DN1 its value after correction:
DN1 = DN × (k6·r^6 + k5·r^5 + k4·r^4 + k3·r^3 + k2·r^2 + k1·r)
wherein k1 to k6 are the six coefficients of the Vignetting Polynomial, and r is the geometric distance from each pixel to the Vignetting Center.
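A minimal sketch of this correction in Python/NumPy follows, applying the polynomial exactly as written above; reading the coefficients and the vignetting center from the camera's XMP metadata is camera-specific and omitted here:

```python
import numpy as np

def correct_vignetting(dn, center, k):
    """Apply DN1 = DN * (k6*r^6 + k5*r^5 + k4*r^4 + k3*r^3 + k2*r^2 + k1*r).
    dn: 2-D band image; center: (cx, cy) vignetting center in pixels;
    k: the six coefficients (k1, ..., k6) read from the XMP metadata."""
    h, w = dn.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(xx - center[0], yy - center[1])      # distance to the vignetting center
    poly = sum(ki * r ** (i + 1) for i, ki in enumerate(k))
    return dn * poly
```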
S14, batch-registering the images across different bands to obtain registered multispectral images.
For the multi-band images of the same target, one band image is taken as the reference (the green band is preferred), and the images of the other bands are shifted according to the inter-band image similarity so that their match with the reference-band image is maximized, generating new images. At the same time, a matching-degree parameter between each image before and after shifting and the reference-band image is generated and written to the run log.
The invention automatically registers the multi-band images of the same target, ensuring that all bands are registered in place before the orthoimage is generated. This step is not limited to a particular multispectral camera but applies to multispectral cameras of many models.
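One way to realize this similarity-driven shift is OpenCV's ECC alignment; the sketch below is an assumed stand-in for the unspecified similarity measure, and its correlation coefficient can serve as the matching-degree parameter written to the log:

```python
import cv2
import numpy as np

def register_band(ref, moving):
    """Align one band image onto the reference (e.g. green) band with a
    translation-only ECC warp; returns the aligned image and the ECC
    correlation coefficient as a matching-degree score."""
    warp = np.eye(2, 3, dtype=np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 200, 1e-6)
    cc, warp = cv2.findTransformECC(ref.astype(np.float32),
                                    moving.astype(np.float32),
                                    warp, cv2.MOTION_TRANSLATION,
                                    criteria, None, 5)
    aligned = cv2.warpAffine(moving, warp, (ref.shape[1], ref.shape[0]),
                             flags=cv2.INTER_LINEAR + cv2.WARP_INVERSE_MAP)
    return aligned, cc
```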
S2, stitching and geometrically correcting the multispectral images registered in S1. In one embodiment, step S2 includes S21-S25.
S21, stitching the images to generate a three-dimensional point cloud. In one embodiment, the multispectral images obtained in S1 can be input through the API of the commercial software Agisoft Metashape; one band is used as the reference for stitching and three-dimensional reconstruction (the green band is preferred), and the images of the remaining bands are kept at the same positions as the reference band. Based on Structure from Motion (SfM), three-dimensional reconstruction is achieved from the correspondences among the multiple images and their features, generating a three-dimensional point cloud of the region of interest.
At present, several commercial and open-source packages can perform image stitching and three-dimensional point-cloud reconstruction; among them, Agisoft Metashape is the most mature and stable and provides an API for customized secondary development, which is convenient to call.
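A minimal sketch of this stitching/SfM step via the Metashape Python API follows; the method names are those of the 2.x API and differ between versions (e.g. buildDenseCloud in 1.x), and the folder name is hypothetical, so treat this as illustrative rather than definitive:

```python
import glob
import Metashape  # Agisoft Metashape Professional's Python module (license required)

# Registered images from S1; the folder name is an assumption.
image_paths = sorted(glob.glob("registered/*.tif"))

doc = Metashape.Document()
chunk = doc.addChunk()
chunk.addPhotos(image_paths)
chunk.matchPhotos(downscale=1)   # feature detection and matching
chunk.alignCameras()             # SfM: camera poses and sparse point cloud
chunk.buildDepthMaps()
chunk.buildPointCloud()          # dense 3-D point cloud (2.x API name)
doc.save("flight.psx")
```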
S22, establishing a ground-control-point pattern deep-learning model.
1) A large number (e.g., 2000) of multispectral images containing ground control points are collected in advance before processing the drone data.
2) Labeling the ground-control-point pattern. A reference-band image (e.g., the green band) is selected, and all ground control points are boxed with the rectangle tool of the online deep-learning labeling tool Label Studio. FIG. 4 shows various images containing ground control points; images synthesized from the red, green and blue bands are shown for better visualization.
3) All labeled images are used as the training data set and trained with a network model to obtain the ground-control-point pattern deep-learning model. Preferably, the Faster R-CNN deep network is used for training; compared with the R-CNN and Fast R-CNN networks, it performs better and captures more target feature information.
During training, 70% of the images are randomly selected as the training data set and the remaining 30% as the validation data set; after training, the generated model predicts on the remaining 30% of the images, and the predictions are compared with the actual results for accuracy evaluation. This accuracy evaluation is performed several times (e.g., three times) in total, and the results are written to the run log.
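A sketch of the 70/30 split and model construction with torchvision (0.13+ API) is given below; the patent names Faster R-CNN but no framework, and load_annotations is a hypothetical helper for the Label Studio export:

```python
import random
from torchvision.models.detection import fasterrcnn_resnet50_fpn

# Hypothetical helper returning (image, boxes) pairs from the Label Studio export.
samples = load_annotations("gcp_labels.json")
random.shuffle(samples)
n_train = int(0.7 * len(samples))               # random 70% / 30% split
train_set, val_set = samples[:n_train], samples[n_train:]

# Two classes: background and the ground-control-point pattern.
model = fasterrcnn_resnet50_fpn(weights=None, num_classes=2)
```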
Using deep learning to automatically find the ground control points in the registered images and assign them the corresponding ground-measured longitude, latitude and altitude greatly reduces manual operation.
S23, applying the model generated in S22 to the images with complete ground-control-point patterns (with true longitude, latitude and altitude information).
First, the images with a complete ground-control-point pattern are selected from all reference-band images; then the ground-control-point patterns are boxed, and the number of each image and the position of the pattern center on the image are output. All of this information is written to the run log.
S24, optimizing the three-dimensional point cloud generated in S21 to obtain the geometrically corrected three-dimensional point cloud.
1) The longitude and latitude of each control point identified in S23 are calculated from the geographic position of each image within the whole research area, which is computed after the point cloud is generated in S21. 2) These are compared with the longitude and latitude of each control point in the ground-control-point file of S11; the closest measured ground control point is selected and the corresponding number is assigned. 3) Geometric correction is applied to all images (e.g., by calling the Agisoft Metashape API). Preferably, the geometric accuracy of the corrected images should be within 10 mm; if it exceeds this range, processing stops with an error, otherwise it continues.
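The nearest-point assignment in step 2) can be sketched as follows, using a small-area planar approximation of degree differences; this distance rule is an assumption, not necessarily the exact one used:

```python
import math

def nearest_control_point(det_lon, det_lat, measured):
    """Match a control point detected in S23 to the closest surveyed point
    from the S11 file; 'measured' maps point id -> (lon, lat, alt)."""
    def dist_m(lon, lat):
        dx = (lon - det_lon) * 111_320 * math.cos(math.radians(det_lat))
        dy = (lat - det_lat) * 111_320
        return math.hypot(dx, dy)   # approximate metres over a small area
    return min(measured, key=lambda pid: dist_m(*measured[pid][:2]))
```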
S25, geometrically correcting all images (the run continues by calling the Agisoft Metashape API) to generate the orthoimage of the whole flight.
S3, cropping and radiometrically calibrating the multispectral images covering the research area. Based on the information obtained after image stitching and the input research-area extent, the images covering the research area are found automatically among the many images and cropped to the research-area extent to obtain the multi-angle images of the research area. In one embodiment, step S3 includes S31-S34.
S31, according to the research-area boundary obtained in S11, the three-dimensional point cloud optimized in S24 and the longitude and latitude of each image, selecting the images containing the research area and cropping them to the research-area boundary to obtain all multispectral images containing the research area. Preferably, a new folder named after the research area is created to store all cropped images.
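Selecting the covering images can be sketched with shapely, assuming each image's ground footprint has been reduced to a longitude/latitude bounding box from the S24 geolocation (both input shapes are assumptions):

```python
from shapely.geometry import box, shape

def images_covering_area(footprints, study_area_geojson):
    """Return the names of images whose footprint intersects the research
    area. footprints: {name: (min_lon, min_lat, max_lon, max_lat)};
    study_area_geojson: a GeoJSON geometry dict of the area boundary."""
    area = shape(study_area_geojson)
    return [name for name, bbox in footprints.items()
            if box(*bbox).intersects(area)]
```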
S32, establishing a deep-learning model of the radiometric calibration plate.
1) A large number (e.g., 2000) of multispectral images containing the radiometric calibration plate are collected in advance before processing the drone data.
2) Labeling the radiometric calibration plate. A reference-band image (e.g., the green band) is selected, and all radiometric calibration plates are boxed with the rectangle tool of the online deep-learning labeling tool Label Studio.
3) All labeled images are used as the training data set and trained with a network model to obtain the deep-learning model of the radiometric calibration plate. Preferably, the Faster R-CNN deep network is used for training; compared with the R-CNN and Fast R-CNN networks, it performs better and captures more target feature information.
Using deep learning to automatically find the radiometric calibration plate placed on the ground, read the corresponding information and then perform the radiometric calibration reduces the manual work of searching for the radiometric reference plate.
S33, applying the radiometric-calibration-plate deep-learning model to all multispectral images obtained in S31 and selecting the images containing the calibration plate to obtain the radiometrically calibrated multispectral images of the research area.
If the plate is a radiometric calibration plate with a QR code, the QR code is read to obtain the multi-band reflectance data of the plate contained in it; if it is a standard radiometric calibration plate, the multi-band reflectance input in S11 is read. Either kind of plate is located by its center, and a square buffer is drawn whose side length is generally a fraction of the plate's side length, for example two thirds, so that the buffer lies on the plate and is unaffected by its boundary; the measured DN value of this region is read, and each image is radiometrically calibrated with the following formula:
BRF(Ω, λ) = (DN / t) / (DN_ref / t_ref) × BRF_ref(Ω₀, λ)    (1)
wherein DN is the multispectral image registered in S14, t is the integration time of that image, DN_ref is the DN value of the radiometric calibration plate read as above, t_ref is the integration time of the image containing the calibration plate, Ω is the observation geometry comprising the observation zenith angle, observation azimuth angle, solar zenith angle and solar azimuth angle of each image, λ is the band, BRF_ref(Ω₀, λ) is the standard reflectance of the calibration plate in band λ, and BRF(Ω, λ) is the calibrated reflectance of each image.
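Formula (1) amounts to a per-band ratio; a minimal sketch under the variable meanings above:

```python
import numpy as np

def calibrate_reflectance(dn, t, dn_ref, t_ref, brf_ref):
    """BRF(Omega, lambda) = (DN / t) / (DN_ref / t_ref) * BRF_ref(Omega0, lambda).
    dn: band image registered in S14; t: its integration time;
    dn_ref: mean DN inside the calibration-plate buffer; t_ref: integration
    time of the plate image; brf_ref: the plate's standard reflectance."""
    return (np.asarray(dn, dtype=float) / t) / (dn_ref / t_ref) * brf_ref
```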
Finally, the radiometrically calibrated reflectance image can be generated, and the mean reflectance of the image is calculated and output to an Excel table.
S34, for the radiometrically calibrated multispectral images of the research area obtained in S33, obtaining the radiometrically calibrated orthoimage based on the multi-angle observed reflectance of the research area and the corresponding geometric information.
Specifically, among all images containing the radiometric calibration plate selected in S33, the image whose observation zenith angle is closest to nadir is selected and its calibration-plate DN_ref is extracted; the orthoimage generated in S25 is then radiometrically calibrated with formula (1) and converted into a multi-band reflectance orthoimage.
The invention has been tested with multiple multispectral cameras (MicaSense RedEdge-MX, DJI Phantom 4 Multispectral, Airphen, etc.) and on various crops, with good results.
The embodiments described above are merely preferred specific embodiments of the invention. The phrases "in one embodiment," "in another embodiment," "in yet another embodiment," and "in other embodiments" in this specification may each refer to one or more of the same or different embodiments in accordance with the present disclosure. Ordinary changes and substitutions made by those skilled in the art within the technical scope of the invention shall fall within the protection scope of the invention.

Claims (10)

1. A method for supporting full-automatic processing of multi-model multispectral camera data is characterized by comprising the following steps:
S1, extracting the flight information of an unmanned aerial vehicle, eliminating the vignetting effect in, and registering, the multispectral images of a research area captured by the unmanned aerial vehicle;
S2, stitching and geometrically correcting the multispectral images registered in S1; and
S3, cropping and radiometrically calibrating the multispectral images covering the research area.
2. The method according to claim 1, wherein step S1 comprises:
S11, acquiring the following flight information from the original flights of the unmanned aerial vehicle: the number of consecutive flights, the number of bands, the vignetting-effect correction coefficients, the GPS information of image acquisition, the radiation reference value of the ground radiometric calibration plate, and the ground-control-point information;
S12, for images obtained from multiple consecutive flights over the same plot, merging the data of the flights according to flight number and band;
S13, eliminating the vignetting effect from all images; and
S14, batch-registering the images across different bands to obtain registered multispectral images.
3. The method according to claim 2, wherein step S2 comprises:
S21, stitching the images to generate a three-dimensional point cloud;
S22, establishing a ground-control-point pattern deep-learning model;
S23, applying the model generated in S22 to the images with complete ground-control-point patterns;
S24, optimizing the three-dimensional point cloud generated in S21 to obtain the geometrically corrected three-dimensional point cloud; and
S25, geometrically correcting all images to generate the orthoimage of the whole flight.
4. The method according to claim 3, wherein step S3 comprises:
S31, according to the research-area boundary obtained in S11, the three-dimensional point cloud optimized in S24 and the longitude and latitude of each image, selecting the images containing the research area and cropping them to the research-area boundary to obtain all multispectral images containing the research area;
S32, establishing a deep-learning model of the radiometric calibration plate;
S33, applying the radiometric-calibration-plate deep-learning model to all multispectral images obtained in S31 and selecting the images containing the calibration plate to obtain the radiometrically calibrated multispectral images of the research area; and
S34, for the radiometrically calibrated multispectral images of the research area obtained in S33, obtaining the radiometrically calibrated orthoimage based on the multi-angle observed reflectance of the research area and the corresponding geometric information.
5. The method according to claim 3, wherein step S22 comprises:
1) before processing the unmanned aerial vehicle data, collecting a plurality of multispectral images containing ground control points in advance;
2) labeling the ground-control-point pattern; and
3) taking all labeled images as the training data set and training with a network model to obtain the ground-control-point pattern deep-learning model.
6. The method of claim 5, wherein in step S22, all ground control points are boxed with a rectangular tool using the green band image.
7. The method of claim 4, wherein step S32 comprises:
1) before processing the unmanned aerial vehicle data, collecting a plurality of multispectral images containing the radiometric calibration plate in advance;
2) labeling the radiometric calibration plate; and
3) taking all labeled images as the training data set and training with a network model to obtain the deep-learning model of the radiometric calibration plate.
8. The method of claim 7, wherein in step S32, all radiometric calibration plates are boxed with a rectangular tool using the green band image.
9. The method according to claim 2, wherein in step S14, for the multi-band images of the same target, one band image is used as the reference, and the images of the other bands are shifted according to the inter-band image similarity so that their match with the reference-band image is maximized, generating new images.
10. A system for supporting full-automatic processing of multi-model multispectral camera data, characterized in that the system comprises a processor loaded with a computer program, and when the computer program is executed the system performs the method of any one of claims 1-9.
CN202211273396.8A 2022-10-18 2022-10-18 Method and system for supporting full-automatic processing of multi-model multispectral camera data Active CN115578656B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211273396.8A CN115578656B (en) 2022-10-18 2022-10-18 Method and system for supporting full-automatic processing of multi-model multispectral camera data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211273396.8A CN115578656B (en) 2022-10-18 2022-10-18 Method and system for supporting full-automatic processing of multi-model multispectral camera data

Publications (2)

Publication Number Publication Date
CN115578656A (en) 2023-01-06
CN115578656B (en) 2023-07-04

Family

ID=84585135

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211273396.8A Active CN115578656B (en) 2022-10-18 2022-10-18 Method and system for supporting full-automatic processing of multi-model multispectral camera data

Country Status (1)

Country Link
CN (1) CN115578656B (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102968631A (en) * 2012-11-22 2013-03-13 中国科学院、水利部成都山地灾害与环境研究所 Automatic geometric correction and orthorectification method for multispectral remote sensing satellite images of mountainous area
CN106683068A (en) * 2015-11-04 2017-05-17 北京文博远大数字技术有限公司 Three-dimensional digital image acquisition method and equipment thereof
US20210407126A1 (en) * 2017-05-04 2021-12-30 Skydio, Inc. Ground control point center determination
CN108111777A (en) * 2017-12-15 2018-06-01 武汉精立电子技术有限公司 A kind of dark angle correction system and method
US20200167603A1 (en) * 2018-11-27 2020-05-28 Here Global B.V. Method, apparatus, and system for providing image labeling for cross view alignment
CN109815916A (en) * 2019-01-28 2019-05-28 成都蝉远科技有限公司 A kind of recognition methods of vegetation planting area and system based on convolutional neural networks algorithm
US20220319035A1 (en) * 2019-05-24 2022-10-06 Inria Institut National De Recherche En Informa... Shot-processing device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
TATJANA BÜRGMANN et al.: "Matching of TerraSAR-X derived ground control points to optical image patches using deep learning", ISPRS Journal of Photogrammetry and Remote Sensing, vol. 158 *
ZHAO BAOWEI: "Research on High-Quality Acquisition and Processing Technology for Airborne Multispectral Camera Data", China Doctoral Dissertations Full-text Database, Information Science and Technology, vol. 2016, no. 04, pages 11 *

Also Published As

Publication number Publication date
CN115578656B (en) 2023-07-04

Similar Documents

Publication Publication Date Title
Torres-Sánchez et al. Assessing UAV-collected image overlap influence on computation time and digital surface model accuracy in olive orchards
US20230316555A1 (en) System and Method for Image-Based Remote Sensing of Crop Plants
Habib et al. Improving orthorectification of UAV-based push-broom scanner imagery using derived orthophotos from frame cameras
CN110222903B (en) Rice yield prediction method and system based on unmanned aerial vehicle remote sensing
JP5542530B2 (en) Sampling position determination device
CN101685539A (en) On-line ortho-rectification method and system for remote sensing image
Gilliot et al. An accurate method for predicting spatial variability of maize yield from UAV-based plant height estimation: A tool for monitoring agronomic field experiments
Nocerino et al. Multi-temporal analysis of landscapes and urban areas
US20180089804A1 (en) Method of processing an image
CN110225264A (en) Unmanned plane near-earth is taken photo by plane the method for detecting farmland incomplete film
CN115331130B (en) Unmanned aerial vehicle inspection method based on geographical marker assisted navigation and unmanned aerial vehicle
Fan et al. Low‐cost visible and near‐infrared camera on an unmanned aerial vehicle for assessing the herbage biomass and leaf area index in an Italian ryegrass field
CN115453555A (en) Unmanned aerial vehicle rapid monitoring method and system for grassland productivity
Wierzbicki et al. Method of radiometric quality assessment of NIR images acquired with a custom sensor mounted on an unmanned aerial vehicle
Zhang et al. A 250 m annual alpine grassland AGB dataset over the Qinghai–Tibet Plateau (2000–2019) in China based on in situ measurements, UAV photos, and MODIS data
Calou et al. Estimation of maize biomass using unmanned aerial vehicles
Sandino et al. A novel approach for invasive weeds and vegetation surveys using UAS and Artificial Intelligence
CN115578656B (en) Method and system for supporting full-automatic processing of multi-model multispectral camera data
CN112799430A (en) Programmable unmanned aerial vehicle-based road surface image intelligent acquisition method
CN114926732A (en) Multi-sensor fusion crop deep learning identification method and system
CN113240648A (en) Vegetation growth monitoring and analyzing method and device of multi-temporal visible light image
Zhang et al. A 250m annual alpine grassland AGB dataset over the Qinghai-Tibetan Plateau (2000–2019) based on in-situ measurements, UAV images, and MODIS Data
CN115272314B (en) Agricultural low-altitude remote sensing mapping method and device
Šiljeg et al. Quality Assessment of Worldview-3 Stereo Imagery Derived Models Over Millennial Olive Groves
Mathivanan et al. Utilizing satellite and UAV data for crop yield prediction and monitoring through deep learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant