CN111967523B - Data fusion agricultural condition detection system and method based on multi-rotor aircraft - Google Patents
Publication number: CN111967523B (application CN202010836910.9A). Legal status: Active.
Classifications
- G06F18/25 — Pattern recognition; analysing; fusion techniques
- G01S13/882 — Radar or analogous systems specially adapted for altimeters
- G01S13/89 — Radar or analogous systems specially adapted for mapping or imaging
- G06Q50/02 — ICT specially adapted for agriculture; fishing; forestry; mining
- G06V10/143 — Image acquisition; sensing or illuminating at different wavelengths
- G06V20/188 — Terrestrial scenes; vegetation
Description
Technical Field

The present invention relates to a data fusion agricultural condition detection system and method based on a multi-rotor aircraft, and belongs to the field of agricultural informatization.
Background Art

In recent years, with the third wave of the information industry, Internet of Things (IoT) technology has developed vigorously and has been applied across many agricultural fields, such as the sustainable use of soil and water resources, ecological environment monitoring, fine-grained management of agricultural production, traceability systems for agricultural products and food safety, and the scheduling of large agricultural machinery operations. IoT technology feeds back and processes the various data collected by sensors placed in the field and, to a certain extent, realizes real-time agricultural monitoring together with data acquisition and transmission.

For the acquisition and processing of agricultural detection data in the field of plant protection, two approaches are currently in wide use. The first conducts a UAV aerial survey before the plant-protection flight, using a multispectral camera to capture an approximate picture of crop growth for path planning; the second uses sensors to perceive the terrain during flight and adjusts only the flight height. From a technical standpoint, the data collected by a single sensor undergoes no fusion processing, and controlling a single variable cannot achieve complete agricultural monitoring and precise pesticide application; from a performance standpoint, existing products do not focus on comprehensive agricultural monitoring, and the functions of plant-protection UAVs remain to be improved.
Summary of the Invention

The present invention provides a data fusion agricultural condition detection system based on a multi-rotor aircraft, which overcomes the seedling-burning problem of traditional plant-protection UAVs that cannot adjust the spraying dosage in real time according to the terrain, the sparseness of plant distribution, and crop growth.

The present invention further provides a data fusion agricultural condition detection method based on a multi-rotor aircraft, which fuses point cloud images with multispectral images, evaluates the density of the crops, and adjusts the pesticide spraying dosage accordingly.

The technical solution provided by the present invention is as follows:

A data fusion agricultural condition detection system based on a multi-rotor aircraft, comprising:

a data fusion unit;

a digital imaging radar unit, electrically connected to the data fusion unit and configured to provide point cloud images to the data fusion unit;

a multispectral camera unit, electrically connected to the data fusion unit and configured to provide visible-light images to the data fusion unit;

a millimeter-wave radar unit, electrically connected to the data fusion unit and configured to measure the relative height above the ground; and

an execution unit, bidirectionally electrically connected to the data fusion unit and configured to adjust the pesticide spraying dosage.
Preferably, the digital imaging radar unit comprises:

an imaging radar;

a first storage unit, bidirectionally electrically connected to the imaging radar; and

a first processing unit, bidirectionally electrically connected to the first storage unit and electrically connected to the data fusion unit.

Preferably, the multispectral camera unit comprises:

a multispectral camera;

a second storage unit, bidirectionally electrically connected to the multispectral camera; and

a second processing unit, bidirectionally electrically connected to the second storage unit and electrically connected to the data fusion unit.

Preferably, the data fusion unit comprises:

a third storage unit, electrically connected to both the multispectral camera unit and the digital imaging radar unit; and

a third processing unit, bidirectionally electrically connected to the third storage unit and electrically connected to the execution unit.
A data fusion agricultural condition detection method based on a multi-rotor aircraft, using the data fusion agricultural condition detection system described above, comprising:

determining and maintaining the flight height of the UAV above the ground, performing a fan-shaped scan of the area below the UAV, and sending a point cloud image to the data fusion unit;

photographing and analyzing the area below the UAV, and sending visible-light images to the data fusion unit; and

fusing the point cloud image with the multispectral image, evaluating the density distribution of the crops, and sending the result to the execution unit so that the pesticide spraying dosage is adjusted in real time, thereby achieving real-time variable-rate spraying.
Preferably, the data fusion process comprises:

taking the point cloud image as image A and the visible-light image as image B;

decomposing image A and image B into L scale levels respectively, and fusing the two sets of decomposition coefficients according to a fusion strategy to obtain fusion coefficients;

reconstructing the fused image from the fused decomposition coefficients by the multi-scale inverse transform; and

performing image enhancement, judging the density of the crops by evaluating the area of the highlighted regions, and adjusting the liquid flow accordingly.
Preferably, image A is decomposed at each scale i into its decomposition coefficients, image B is likewise decomposed at each scale i, and the fusion strategy combines the two sets of coefficients into the fusion coefficients, where i = 1, 2, ..., L, F denotes the coefficients obtained after the multi-scale inverse transform, and L is the number of decomposition scales.
Preferably, the image enhancement comprises:

Step 1: collecting statistics on the pixel gray levels in the histogram of the fused image to obtain the number of idle gray levels, that is, gray levels whose pixel count is zero within the RGB gray-level range [0, 255];

modifying the image histogram to

his'[r] = his[r], if his[r] ≥ Δ; his'[r] = 0, otherwise;

and counting the number of gray levels with his'[r] = 0 to obtain the idle gray-level count Lr;

where his[r] is the data to be processed, Δ is the threshold, and his'[r] is the processed data;
Step 2: computing the number of effective gray levels Le within the full gray range [0, 255] as Le = 255 − Lr;

Step 3: allocating the idle gray levels to the effective gray levels, and applying a nonlinear stretching transform to the nonzero effective gray levels over the entire gray range, the transform mapping each pixel region rk to an enhanced gray level Sk;

where Sk is the enhanced gray level, T is the gray-level coefficient, rk is the pixel region to be processed, and L'i is the pixel gray level of the image to be processed.
The liquid flow L is adjusted in real time by computing the area of the highlighted regions, according to the empirical flow-adjustment formula

L = Sk × P,

where P is the base operating flow.
The beneficial effects of the present invention are as follows: it solves the seedling-burning problem of traditional plant-protection UAVs, which cannot adjust the spraying dosage in real time according to the terrain, the sparseness of plant distribution, and crop growth. A traditional plant-protection UAV uses a millimeter-wave radar mounted beneath the airframe; from the collected data it determines the relative distance to the ground so that the relative flight height remains constant at all times. This ensures flight safety during operation, but without real-time sensing of the surrounding environment the sprayed dosage cannot be adjusted according to crop growth; the present invention solves both problems.
Brief Description of the Drawings

FIG. 1 is a schematic diagram of the overall structure of the data fusion agricultural condition detection system based on a multi-rotor aircraft according to the present invention.

FIG. 2 is a schematic structural diagram of the digital imaging radar unit according to the present invention.

FIG. 3 is a schematic structural diagram of the multispectral camera unit according to the present invention.

FIG. 4 is a schematic structural diagram of the data fusion unit according to the present invention.

FIG. 5 is a schematic structural diagram of the execution unit according to the present invention.

FIG. 6 is a schematic structural diagram of the millimeter-wave radar unit according to the present invention.

FIG. 7 is a flowchart of the fusion algorithm according to the present invention.

FIG. 8 is a schematic diagram of the simplified planar-array beamformer according to the present invention.

FIG. 9 is a block diagram of the neural network PID according to the present invention.

FIG. 10 is a step response diagram of the ordinary PID according to the present invention.

FIG. 11 is a step response diagram of the neural network PID according to the present invention.

FIG. 12 is a flowchart of the variable-rate spraying system according to the present invention.
Detailed Description

The present invention is described in further detail below with reference to the accompanying drawings, so that those skilled in the art can implement it with reference to the description.
As shown in FIGS. 1-12, the present invention provides a data fusion agricultural condition detection system based on a multi-rotor aircraft, comprising a data fusion unit, a digital imaging radar unit, a multispectral camera unit, an execution unit, and a millimeter-wave radar unit.

The digital imaging radar unit is electrically connected to the data fusion unit and provides it with point cloud images; the multispectral camera unit is electrically connected to the data fusion unit and provides it with visible-light images; the millimeter-wave radar unit is electrically connected to the data fusion unit and measures the relative height between the system and the ground; and the execution unit is electrically connected to the data fusion unit and adjusts the pesticide spraying dosage in real time.
During actual operation, the plant-protection UAV serves as the execution mechanism. The millimeter-wave radar unit emits millimeter-wave signals toward the ground; the relative height above the ground is computed from the returned data and passed to the data fusion unit, which relays it to the flight control system so that the UAV maintains a stable flight height relative to the ground. The digital imaging radar unit performs a fan-shaped scan of the area below the UAV in flight, collects the returned data, generates a point cloud image, and provides it to the data fusion unit. The multispectral camera photographs and analyzes the area below the UAV in flight and generates visible-light images in different bands for the data fusion unit. The data fusion unit fuses the point cloud image with the multispectral images, evaluates the density distribution of the crops, and passes the result to the execution mechanism, which adjusts the pesticide spraying dosage in real time to achieve real-time variable-rate spraying.
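The per-frame workflow above (fuse the two sensor images, judge crop density from the highlighted area, and scale the spray flow by the base rate) can be sketched as follows. All function names and the simple averaging fusion are illustrative assumptions; the patent's actual fusion is the multi-scale method described later in this section.

```python
# Illustrative sketch of the per-frame detection loop described above.
# Function and variable names (fuse, enhance, flow_rate, BASE_FLOW) are
# assumptions for illustration, not names from the patent.

BASE_FLOW = 1.0  # base operating flow P (assumed value)

def fuse(point_cloud_px, multispectral_px):
    """Pixel-level fusion: average the two aligned gray-scale frames."""
    return [(a + b) / 2.0 for a, b in zip(point_cloud_px, multispectral_px)]

def highlight_area(pixels, threshold=128):
    """Count pixels bright enough to be treated as crop (branches/leaves)."""
    return sum(1 for p in pixels if p >= threshold)

def flow_rate(pixels, threshold=128):
    """Empirical flow adjustment L = S_k * P, with S_k taken here as the
    fraction of highlighted pixels in the fused frame."""
    s_k = highlight_area(pixels, threshold) / len(pixels)
    return s_k * BASE_FLOW

# Two toy 4-pixel frames: radar point-cloud intensity and one spectral band
fused = fuse([200, 10, 180, 20], [220, 30, 160, 40])
print(round(flow_rate(fused), 3))  # → 0.5
```

In the real system the fused frame would come from the multi-scale fusion and enhancement stages, and the computed flow would be sent to the pump controller on the execution mechanism.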
The digital imaging radar unit comprises an imaging radar, a first storage unit, and a first processing unit. The first storage unit is bidirectionally electrically connected to both the imaging radar and the first processing unit, and the first processing unit is electrically connected to the data fusion unit. The first storage unit stores the raw data collected by the imaging radar and the control data issued to the imaging radar; the first processing unit processes the raw data, generates point cloud images, and passes the image data to the data fusion unit.
As shown in FIG. 7, the digital imaging radar unit uses a phased-array radar as the transmission source. The main hardware is a high-speed signal processing board comprising eight 10-bit A/D converters, an FPGA, and a 10-bit D/A conversion chip. The system combines hardware and software. In the hardware part, each channel performs data acquisition under the control of the FPGA, converting the intermediate-frequency signal delivered by the SHA analog IF input interface from analog to digital to form eight data streams that are fed into the FPGA. The FPGA performs digital quadrature demodulation, eight-channel single-beam DBF processing, and pulse compression to form two-channel digital I/Q signals, which after taking the modulus form a single-channel digital video signal sent to the D/A chip.

In the hardware structure of the digital imaging radar, the radio-frequency (RF) signal received by each element of the array antenna is converted from analog to digital by the A/D converter in its own processing module, then down-converted and synchronously detected. The resulting digital quadrature baseband signals are sent to the DBF processor, where the complex baseband signal S is multiplied by the predetermined complex weight vector W and accumulated in the DBF former to produce the required point cloud data output.
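The multiply-accumulate of the complex baseband signal S with the weight vector W in the DBF former can be illustrated with a one-dimensional eight-element example. The element spacing, wavelength ratio, and steering angle below are assumed values, not parameters from the patent.

```python
# Minimal sketch of the DBF step described above: the complex baseband
# samples S from the 8 array channels are multiplied by a predetermined
# complex weight vector W and accumulated.

import cmath
import math

N = 8                        # array channels (per the text)
d_over_lambda = 0.5          # element spacing in wavelengths (assumption)
theta = math.radians(20.0)   # desired beam direction (assumption)

# Steering weights: w_n = exp(-j * 2*pi * (d/lambda) * n * sin(theta))
W = [cmath.exp(-1j * 2 * math.pi * d_over_lambda * n * math.sin(theta))
     for n in range(N)]

# A plane wave arriving from the steered direction produces samples S with
# the conjugate phase progression, so the weighted sum adds coherently.
S = [cmath.exp(1j * 2 * math.pi * d_over_lambda * n * math.sin(theta))
     for n in range(N)]

beam_output = sum(w * s for w, s in zip(W, S))
print(round(abs(beam_output), 6))  # coherent gain equals N = 8
```

Signals arriving from other directions accumulate with mismatched phases and are attenuated, which is what lets the beam be scanned electronically across the area below the UAV.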
The multispectral camera unit comprises a multispectral camera, a second storage unit, and a second processing unit; the second storage unit is bidirectionally electrically connected to both the multispectral camera and the second processing unit. The second storage unit stores the raw data collected by the multispectral camera and the control data issued to the camera; the second processing unit processes the raw data, generates images in different visible-light bands, and passes the image data to the data fusion unit.

In operation, light passes through the aperture, and the aperture and focal length are adjusted to ensure an appropriate luminous flux and image sharpness. The light then passes through an optical extender, and an LCTF (liquid crystal tunable filter) tunes the imaging band in 5 nm steps. Finally, the light reaches the focal plane of the CMOS area-array detector through an adapter for imaging; the CMOS detector transmits the captured images in sequence to a computer for storage, and the computer analyzes and processes the collected multispectral images.
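The 5 nm tuning sweep of the LCTF can be sketched as a simple acquisition loop. The band limits (450-700 nm) and the capture stub are assumptions for illustration; the patent does not state the spectral range.

```python
# Sketch of the acquisition sweep described above: the LCTF is tuned in
# 5 nm steps across the imaging band and one frame is captured per step.

def sweep_wavelengths(start_nm=450, stop_nm=700, step_nm=5):
    """Yield the LCTF tuning wavelengths, inclusive of both band limits."""
    wl = start_nm
    while wl <= stop_nm:
        yield wl
        wl += step_nm

def capture_band(wavelength_nm):
    """Stand-in for triggering the CMOS detector at one LCTF setting."""
    return {"wavelength_nm": wavelength_nm, "frame": None}

frames = [capture_band(wl) for wl in sweep_wavelengths()]
print(len(frames))  # (700 - 450) / 5 + 1 = 51 bands
```

Each captured band would then be stored in sequence, as described, before the second processing unit assembles the per-band images for the data fusion unit.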
The data fusion unit comprises a third storage unit and a third processing unit. The third storage unit is electrically connected to both the multispectral camera unit and the digital imaging radar unit, and is bidirectionally electrically connected to the third processing unit, which passes signals to the execution mechanism. The third storage unit stores the point cloud image data generated by the imaging radar unit and the image data in different visible-light bands produced by the multispectral camera unit; the third processing unit combines them into a single image that is evaluated by the algorithm, and the evaluation result is passed to the execution mechanism, which adjusts the spraying dosage in real time according to the terrain, the sparseness of plant distribution, and crop growth.
The present invention also provides a data fusion agricultural condition monitoring method based on a multi-rotor aircraft, which fuses point cloud images with multispectral images, evaluates the density of the crops, and adjusts the pesticide spraying dosage, comprising:

determining and maintaining the flight height of the UAV above the ground, performing a fan-shaped scan of the area below the UAV, and sending a point cloud image to the data fusion unit;

photographing and analyzing the area below the UAV, and sending visible-light images to the data fusion unit; and

fusing the point cloud image with the multispectral image, evaluating the density distribution of the crops, and sending the result to the execution unit to adjust the pesticide spraying dosage in real time, thereby achieving real-time variable-rate spraying.
The image processing procedure is described below. Image fusion can provide more effective information for image segmentation, target detection and recognition, image understanding, and so on. Given the characteristics of the multispectral and point cloud images being processed, pixel-level fusion is applied to the collected raw image data, and the fused image data are then analyzed and processed. The fused image contains the distribution of plant branches (from the point cloud image) together with the density information of plant leaves.

Image fusion based on multi-scale analysis is a research focus in the image fusion field; it mimics the way human vision perceives the objective world from "coarse" to "fine" and yields good fusion results. Multi-scale decomposition obtains the decomposition components of an image in different scale spaces, thereby separating the high- and low-frequency information, similar to how the human eye processes visual signals. Multi-scale decomposition methods are widely used in image processing, where separating edge detail from global approximation information brings computational convenience and reliability. To exploit the information at different levels of the image and thus evaluate, select, and fuse image information flexibly and quickly, this work uses a fusion method based on multi-scale decomposition, whose procedure is as follows:
As shown in FIG. 7, the point cloud image is taken as image A and the visible-light image as image B.

1) To obtain their respective multi-scale decomposition coefficients, input images A and B are each decomposed into L scale levels, yielding the decomposition coefficients of image A and of image B at each scale i.

2) The two sets of decomposition coefficients are fused according to the fusion strategy to obtain the fusion coefficients.

3) The multi-scale inverse transform is then applied to the fused decomposition coefficients to reconstruct the fused image, where i = 1, 2, ..., L, F denotes the coefficients obtained after the transform, and L is the number of decomposition scales.
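Steps 1)-3) can be sketched on one-dimensional signals. The three-tap smoothing kernel and the max-magnitude detail rule are illustrative assumptions, since the patent leaves the decomposition basis and fusion strategy unspecified; the structure (L-level decomposition, coefficient fusion, inverse reconstruction) follows the steps above.

```python
# A minimal sketch of the L-level decompose -> fuse -> inverse-reconstruct
# flow, shown on 1-D signals for brevity.

def smooth(x):
    """3-tap moving average (coarse approximation at the next scale)."""
    n = len(x)
    return [(x[max(i - 1, 0)] + x[i] + x[min(i + 1, n - 1)]) / 3.0
            for i in range(n)]

def decompose(x, levels):
    """Return [detail_1, ..., detail_L, base], detail_i = x_i - smooth(x_i)."""
    coeffs = []
    for _ in range(levels):
        base = smooth(x)
        coeffs.append([a - b for a, b in zip(x, base)])
        x = base
    coeffs.append(x)
    return coeffs

def fuse_coeffs(ca, cb):
    """Fusion strategy (assumed): keep the larger-magnitude detail
    coefficient at each scale; average the coarse base."""
    fused = [[a if abs(a) >= abs(b) else b for a, b in zip(da, db)]
             for da, db in zip(ca[:-1], cb[:-1])]
    fused.append([(a + b) / 2.0 for a, b in zip(ca[-1], cb[-1])])
    return fused

def reconstruct(coeffs):
    """Multi-scale inverse transform: add details back onto the base."""
    x = coeffs[-1]
    for detail in reversed(coeffs[:-1]):
        x = [a + d for a, d in zip(x, detail)]
    return x

A = [0.0, 0.0, 8.0, 0.0, 0.0]  # sharp feature (e.g. a branch in the point cloud)
B = [1.0, 1.0, 1.0, 1.0, 1.0]  # flat background (e.g. a uniform spectral band)
F = reconstruct(fuse_coeffs(decompose(A, 2), decompose(B, 2)))
print([round(v, 2) for v in F])
```

Note that decompose followed by reconstruct is exact for a single image (each detail layer is precisely what was removed by smoothing), which is the perfect-reconstruction property the inverse transform in step 3) relies on.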
Image enhancement methods can be divided into spatial-domain and frequency-domain methods. Spatial-domain methods mainly include histogram specification, histogram equalization, gray-level transformation, and unsharp masking. Histogram equalization enhances all pixels in the image, but its drawback is that it does not readily highlight the target. Histogram specification can, in theory, selectively enhance specific information in the image, yet choosing an optimal target histogram is difficult in practice. Histogram stretching can enhance image contrast to a certain extent, but its transformation function must be determined by experience or experiment, which greatly constrains its application. Frequency-domain methods are mainly used for noise removal and for enhancing details such as edges.

An adaptive image enhancement algorithm based on nonlinear stretching is adopted here, which avoids merging distinct gray levels. Compared with other linear and nonlinear stretching algorithms, its transformation function and adjustment parameters need not be determined by experience or experiment; the algorithm has a small computational load and strong adaptivity, making it suitable for real-time processing systems.
Specifically, the method comprises:

Step 1: collect statistics on the pixel gray levels of the original image histogram to obtain the number of idle gray levels in [0, 255] whose pixel count is zero, where [0, 255] is the gray-level range of an RGB color channel from full white to full black. In theory, a gray level whose frequency is exactly zero is an idle gray level; however, imaging devices are not ideal and a certain proportion of noise points exists in the image, so every gray level whose frequency is below a threshold is classified as idle. This reduces the interference of low-frequency noise gray levels in the idle-gray-level statistics, yielding the highest signal-to-noise ratio and improving contrast and visual quality. The image histogram is accordingly modified to

his'[r] = his[r], if his[r] ≥ Δ; his'[r] = 0, otherwise;

and the number of gray levels with his'[r] = 0 is counted to obtain the idle gray-level count Lr,

where his[r] is the data to be processed, Δ is the threshold, and his'[r] is the processed data.
Step 2: compute the number of effective gray levels Le within the full gray range [0, 255] as Le = 255 − Lr.

Step 3: allocate the idle gray levels to the effective gray levels. Gray levels that occur less frequently (fewer pixels) are allocated more idle gray levels, while gray values that occur more frequently are allocated fewer. This amounts to a nonlinear stretch of the histogram along the gray axis: the lower the frequency, the greater the stretching distance and the wider the spacing between gray levels, and conversely the narrower. In this way the gray-level intervals of low-frequency target details are stretched and the details are enhanced, avoiding the degeneracy in histogram equalization where target segments with low-frequency gray levels are merged into background segments with high-frequency gray levels.
Step 4: Apply a non-linear stretching transformation to the non-zero effective gray levels over the entire gray range. In the transformation function, S_k is the enhanced gray level, T is the gray-level coefficient, r_k is the pixel region to be processed, and L'_i is the pixel gray level of the image to be processed;
The gray level after processing equals the region gray level multiplied by the gray-level coefficient, i.e. the result of operating on each pixel in the region with the gray-level coefficient. By allocating idle gray levels to effective gray levels, less frequent levels (fewer pixels) receive more idle levels and more frequent gray values receive fewer. This is a non-linear stretch of the histogram along the gray axis: the lower the frequency, the greater the stretch distance and the wider the gray-level spacing. The gray-level intervals of low-frequency target details are thus stretched and the details enhanced.
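One way to realize the redistribution described in Steps 3 and 4 is to give each active gray level a spacing proportional to the reciprocal of its frequency; the patent's exact transformation function T is not reproduced in this text, so the inverse-frequency weighting below is an illustrative assumption:

```python
import numpy as np

def stretch_map(his_prime):
    """Illustrative non-linear stretch: map each effective (non-zero)
    gray level to a new level, giving rarer levels wider spacing."""
    active = np.flatnonzero(his_prime)        # effective gray levels
    w = 1.0 / his_prime[active]               # rarer level -> larger weight
    spacing = w / w.sum() * 255.0             # spread spacings over [0, 255]
    new_levels = np.clip(np.round(np.cumsum(spacing)), 0, 255).astype(np.uint8)
    mapping = np.zeros(256, dtype=np.uint8)   # lookup table: old -> new level
    mapping[active] = new_levels
    return mapping

# Two active levels: a dominant background (1000 px) and a rare detail (10 px)
demo = np.zeros(256, dtype=int)
demo[100], demo[101] = 1000, 10
mapping = stretch_map(demo)
```

The rare level 101 ends up far from level 100, widening the interval carrying the detail — the opposite of histogram equalization's tendency to merge it into the background.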
This yields the enhanced image, with the branch and leaf regions highlighted. The liquid flow L is then adjusted in real time from the computed area of the highlighted region, according to the empirical flow-adjustment formula given below.
As shown in Figures 9-11, the design structure of the neural-network machine-learning PID control algorithm for the liquid pump is shown in Figure 8. The model employs two neural-network modules with different functions: an NNI online identifier and an NNC adaptive PID controller. The working principle of the flow controller for the liquid pump in agricultural plant protection is that the algorithm model adjusts, in real time, the weight coefficients of the NNC controlled sub-item according to the identification results, giving the controlled plant adaptability and stability. The stability and performance of this neural-network PID control system were later verified with Matlab through extensive data simulation. The experimental results in Figures 9-10 show that the neural-network-based PID control system has better control characteristics than a conventional control system.
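The NNI/NNC structure itself is only shown in the figures, so as a loose stand-in, here is a classic single-neuron adaptive PID sketch in which the three PID terms are the neuron's inputs and their weights are adapted online from the error; all gains, the learning rate, and the first-order toy plant are assumptions, not the patent's design:

```python
import numpy as np

def single_neuron_pid(setpoint, plant_gain=0.1, K=0.4, eta=0.005, steps=300):
    """Single-neuron adaptive PID (incremental form) on a toy
    first-order plant y[k+1] = y[k] + plant_gain * (u[k] - y[k])."""
    w = np.array([0.3, 0.3, 0.3])      # adaptive weights for P, I, D terms
    y = u = e_prev = e_prev2 = 0.0
    ys = []
    for _ in range(steps):
        e = setpoint - y
        x = np.array([e - e_prev,                  # proportional increment
                      e,                           # integral term
                      e - 2 * e_prev + e_prev2])   # derivative increment
        w += eta * e * u * x                       # Hebbian-style adaptation
        u += K * (w / np.abs(w).sum()).dot(x)      # normalized control law
        y += plant_gain * (u - y)                  # plant response
        ys.append(y)
        e_prev2, e_prev = e_prev, e
    return ys

ys = single_neuron_pid(setpoint=1.0)
```

Normalizing the weights bounds the total controller gain at K while the relative P/I/D mix adapts online, loosely mirroring the adaptability the text attributes to the NNC module.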
As shown in Figure 11, hardware is only one part of developing a complete control system; software quality directly affects the realization of overall system functionality. The control system uses modular programming, implemented entirely in C. FreeRTOS was selected as the core of the software system, and the Keil integrated development environment was used to complete the software modularization.
The liquid flow is adjusted in real time by computing the area of the high-brightness region:
L = S_k × P;
where P is the base operating flow.
The flow rate equals the enhanced gray level of the current region multiplied by the base flow for the current crop type. The base operating flow P is set manually according to the crop type in the field being treated; the system derives the enhanced gray level, via the image-enhancement algorithm, from images captured over the working area. The denser the crop, the higher the gray level S_k and hence the larger the required flow, realizing real-time variable-rate spraying.
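The empirical relation L = S_k × P can be sketched directly; units and the scaling of S_k are left as in the source, so treat this as schematic:

```python
def spray_flow(enhanced_gray_level, base_flow):
    """Empirical flow adjustment L = S_k * P: S_k is the enhanced gray
    level of the current region, P the manually set base flow for the
    crop type being treated."""
    return enhanced_gray_level * base_flow

# Denser canopy -> higher enhanced gray level -> larger commanded flow
sparse = spray_flow(enhanced_gray_level=50, base_flow=0.5)
dense = spray_flow(enhanced_gray_level=200, base_flow=0.5)
```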
Although embodiments of the present invention have been disclosed above, they are not limited to the applications listed in the description and embodiments; the invention can be applied to various fields suited to it, and those skilled in the art can readily make further modifications. Therefore, without departing from the general concept defined by the claims and their equivalents, the invention is not limited to the specific details or to the examples shown and described herein.
Claims (5)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010836910.9A CN111967523B (en) | 2020-08-19 | 2020-08-19 | Data fusion agricultural condition detection system and method based on multi-rotor aircraft |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111967523A CN111967523A (en) | 2020-11-20 |
CN111967523B true CN111967523B (en) | 2022-11-15 |
Family
ID=73388977
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010836910.9A Active CN111967523B (en) | 2020-08-19 | 2020-08-19 | Data fusion agricultural condition detection system and method based on multi-rotor aircraft |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111967523B (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104835130A (en) * | 2015-04-17 | 2015-08-12 | 北京联合大学 | Multi-exposure image fusion method |
CA3024580A1 (en) * | 2015-05-15 | 2016-11-24 | Airfusion, Inc. | Portable apparatus and method for decision support for real time automated multisensor data fusion and analysis |
CN108519775A (en) * | 2017-10-30 | 2018-09-11 | 北京博鹰通航科技有限公司 | A kind of UAV system and its control method precisely sprayed |
Non-Patent Citations (2)
Title |
---|
Design of aircraft navigation system based on multispectral fusion images; Ma Hanfei; Electronic Design Engineering; 2019-12-31; pp. 161-166 *
Data fusion and its application and prospects in agricultural remote-sensing monitoring; Qian Yonglan et al.; Transactions of the Chinese Society of Agricultural Engineering; 2004-07-31; pp. 286-290 *
Also Published As
Publication number | Publication date |
---|---|
CN111967523A (en) | 2020-11-20 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
KR102623403B1 (en) | Methods for aerial image acquisition and analysis | |
US20170228595A1 (en) | Image filter based on row identification | |
CN109977924A (en) | For real time image processing and system on the unmanned plane machine of crops | |
Rasti et al. | Crop growth stage estimation prior to canopy closure using deep learning algorithms | |
RU2018143340A (en) | RECOGNITION OF WEED IN THE NATURAL ENVIRONMENT | |
US20200250427A1 (en) | Shadow and cloud masking for agriculture applications using convolutional neural networks | |
CN108346143A (en) | A kind of crop disease monitoring method and system based on the fusion of unmanned plane multi-source image | |
CA3138812C (en) | Automatic crop classification system and method | |
Guo et al. | Target recognition method of small UAV remote sensing image based on fuzzy clustering | |
Shankar et al. | Application of UAV for pest, weeds and disease detection using open computer vision | |
Pi et al. | Desertification glassland classification and three-dimensional convolution neural network model for identifying desert grassland landforms with unmanned aerial vehicle hyperspectral remote sensing images | |
Zhang et al. | Hawk‐eye‐inspired perception algorithm of stereo vision for obtaining orchard 3D point cloud navigation map | |
Wang et al. | Research on image capture technology of intelligent terminal and multi exposure fusion to improve the resilience of agriculture production systems | |
Lyu et al. | Development of phenotyping system using low altitude UAV imagery and deep learning | |
CN111967523B (en) | Data fusion agricultural condition detection system and method based on multi-rotor aircraft | |
Heinze et al. | Image exploitation algorithms for reconnaissance and surveillance with UAV | |
US20230274403A1 (en) | Depth-based see-through prevention in image fusion | |
RU2610283C1 (en) | Image decoding method | |
Harder et al. | NightVision: generating nighttime satellite imagery from infra-Red observations | |
Hong et al. | Multispectral and panchromatic image fusion based on genetic algorithm and data assimilation | |
Toh et al. | Classification of oil palm growth status with L band microwave satellite imagery | |
Khidher et al. | Automatic trees density classification using deep learning of unmanned aerial vehicles images | |
Zhang et al. | Intelligent Psyllid Monitoring Based on DiTs-YOLOv10-SOD | |
Kataev et al. | Farm fields UAV images clusterization | |
Vedavyas et al. | An FPGA-Based Adaptive Real-Time Quality Enhancement System for Drone Imagery |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
TR01 | Transfer of patent right | | Effective date of registration: 20241029. Address after: No. 479 Zhongshan Road, Qianjin District, Jiamusi City, Heilongjiang Province, 154000 (Qianjin District Committee 71) (managed enterprise). Patentee after: Jiamusi Jiamusi University Innovation and Creative Technology Business Incubator Co.,Ltd.; Qiu Xinwei. Country or region after: China. Address before: 154007 No. 148, Xuefu street, Xiangyang District, Heilongjiang, Jiamusi. Patentee before: JIAMUSI University. Country or region before: China |