CN117456092A - Three-dimensional live-action modeling system and method based on unmanned aerial vehicle aerial survey - Google Patents


Info

Publication number
CN117456092A
CN117456092A
Authority
CN
China
Prior art keywords: image, drone, aerial survey, UAV, image control
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202311377081.2A
Other languages
Chinese (zh)
Inventor
于祥波
朱毅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xuzhou Shuoxiang Information Technology Co ltd
Original Assignee
Xuzhou Shuoxiang Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xuzhou Shuoxiang Information Technology Co ltd
Priority to CN202311377081.2A
Publication of CN117456092A
Withdrawn legal status: Current

Classifications

    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05 Geographic models
    • G06F16/29 Geographical information databases
    • G06N3/0464 Convolutional networks [CNN, ConvNet]
    • G06N3/08 Learning methods
    • G06N7/02 Computing arrangements based on specific mathematical models using fuzzy logic
    • G06V10/26 Segmentation of patterns in the image field
    • G06V10/765 Classification using rules for partitioning the feature space
    • G06V10/82 Recognition using neural networks
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]


Abstract

The invention discloses a three-dimensional real-scene modeling method based on UAV aerial survey, comprising: setting up a processing station and base points on the ground and reasonably arranging multiple groups of image control points; generating a strip-shaped aerial survey area; generating a coordinate map from the image control points and obtaining the navigation routes; controlling the UAV to fly to the image control points for oblique photography, obtaining image data containing the control points; preprocessing the acquired image data with an image-control-point recognition and classification model to discard unusable frames; inputting each acquired image into an image quality model for accuracy evaluation, and re-acquiring the image if the accuracy does not meet the preset value, until the real scene at every image control point has been updated; and forming a real-scene model map after modeling. The invention belongs to the technical field of UAV aerial survey and provides a three-dimensional real-scene modeling system and method based on UAV aerial survey, solving the prior-art problems of insufficient aerial survey quality and accuracy, poor measurement precision, and heavy workload.

Description

Three-dimensional real-scene modeling system and method based on UAV aerial survey

Technical Field

The invention belongs to the technical field of UAV aerial survey, and specifically relates to a three-dimensional real-scene modeling system and method based on UAV aerial survey.

Background Art

In the field of construction surveying, with the development of UAV technology, the advantages of UAV aerial survey have grown steadily and now dominate the market. In existing surveying and modeling workflows, surveyors first remotely pilot a UAV to photograph the real scene; after the flight the UAV returns with the data, a survey model map is obtained, and modeling is then performed. When conducting an aerial survey with a UAV, a rectangular or irregular polygonal area is usually generated first, and the flight route is then planned by adjusting control points on that area. In conventional operation, however, the control points must be adjusted many times to generate a strip-shaped survey area, which is extremely inconvenient; moreover, the drawn area may cover non-survey regions, adding redundant data and data-processing workload.

The existing Chinese invention patent with application number 2019110557828 discloses a three-dimensional real-scene modeling method based on UAV aerial survey, comprising the following steps: acquiring aerial survey data by UAV; converting a series of two-dimensional aerial images into a dense three-dimensional point cloud of the construction project under survey through aerial triangulation; post-processing the data to obtain a digital line map and a digital surface model of the project, and thus a real-scene three-dimensional model; performing a real-scene inspection of the project based on that model and the true surface point cloud to obtain construction execution data; and, based on a comparison between the project's three-dimensional planning design and the construction execution data, studying and issuing construction scheduling instructions and checking and correcting their execution. This method is efficient and low-cost.

In the above scheme, aerial triangulation can resolve the accuracy problems introduced by manual measurement, but the flight quality and scanning quality obtained with aerial triangulation are problematic: if the film base itself has a certain systematic deformation, or strain forces act on it during aerial photography, photographic processing, or scanning, dynamic geometric deformation may result, so the measurement accuracy is poor and the workload grows.

Summary of the Invention

In view of the above situation, and to overcome the defects of the prior art, the invention provides a three-dimensional real-scene modeling system and method based on UAV aerial survey, which solves the prior-art problems of insufficient aerial survey quality and accuracy, poor measurement precision, and heavy workload.

The technical solution adopted by the invention is as follows:

The solution discloses a three-dimensional real-scene modeling method based on UAV aerial survey, comprising the following steps:

S1: Set up a processing station and base points on the ground and reasonably arrange multiple groups of image control points. The processing station presets the UAV's aerial photography parameters and generates a strip-shaped aerial survey area; the position information of the image control points is input to the processing station to form a coordinate map, and the navigation route from the current position to each image control point is obtained.

S2: The processing station directs the UAV from the reference coordinate control point to the corresponding image control point and performs oblique photography to obtain image data containing the control points; the acquired image data are preprocessed and unusable frames are discarded.

S3: During photography, each acquired image is input into the image quality model for accuracy evaluation. If the accuracy meets the preset value, the image data of the next image control point are acquired; if not, the image is re-acquired.

S4: The re-acquired images that meet the accuracy requirement replace the original blurred scenes, updating the real scene; the final real-scene model map is formed by modeling from the collected images.
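The S2/S3 acquisition loop above can be sketched as follows. Every interface here (the `capture` and `evaluate` callables, the threshold, the retry limit) is a hypothetical stand-in, since the patent does not define concrete APIs:

```python
# Hedged sketch of the S2/S3 loop: fly to each image control point, capture,
# score the image with a quality model, and re-capture until the score meets
# the preset threshold. All names are illustrative assumptions.

def survey(control_points, capture, evaluate, threshold, max_retries=3):
    """Return one accepted image per control point (None if retries exhausted)."""
    accepted = {}
    for point in control_points:
        accepted[point] = None
        for _ in range(max_retries):
            image = capture(point)            # oblique photograph at the point
            if evaluate(image) >= threshold:  # image quality model score
                accepted[point] = image
                break                          # move on to the next point (S3)
        # S4: accepted images later replace blurred scenes in the model
    return accepted
```

A quality model that always accepts yields one image per point; one that always rejects leaves every point marked for re-flight.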

In a further scheme, in step S1 the image control points are laid out using the survey method of map root points and an axially symmetric arrangement, and the layout density of the control points is increased along complex boundaries of the survey area.

Further, in step S1 the aerial photography parameters include the forward (course) overlap, side overlap, survey altitude, and reference point height; the strip-shaped aerial survey area is generated from these parameters and the UAV's lens parameters.
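One way a strip area and exposure spacing could be derived from these parameters is sketched below; the lens values and overlap percentages are illustrative assumptions, not figures from the patent:

```python
# Hedged sketch: photo footprint and waypoint spacing from the parameters the
# patent names (forward overlap, side overlap, survey altitude) plus assumed
# lens parameters. All numbers and helper names are illustrative.

def footprint(altitude_m, focal_mm, sensor_w_mm, sensor_h_mm):
    """Ground footprint (width, height) of one photo, by similar triangles."""
    scale = altitude_m / focal_mm  # mm on sensor -> m on ground
    return sensor_w_mm * scale, sensor_h_mm * scale

def waypoint_spacing(altitude_m, focal_mm, sensor_w_mm, sensor_h_mm,
                     forward_overlap, side_overlap):
    """Distance between exposures along-track and between adjacent strips."""
    w, h = footprint(altitude_m, focal_mm, sensor_w_mm, sensor_h_mm)
    along_track = h * (1.0 - forward_overlap)   # spacing along a flight line
    across_track = w * (1.0 - side_overlap)     # spacing between strips
    return along_track, across_track

# Assumed example: 100 m altitude, 8.8 mm focal length, 13.2 x 8.8 mm sensor,
# 80 % forward / 70 % side overlap.
along, across = waypoint_spacing(100.0, 8.8, 13.2, 8.8, 0.8, 0.7)
```

Higher overlap shrinks the spacing, so the strip contains more exposures per unit length; this is the trade-off the patent's parameter settings control.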

In a further scheme, the image quality model operates as follows:

Step 1: Select a target point in the captured images for case analysis.

Step 2: Select different layouts of forward overlap, side overlap, survey altitude, and reference point height, and record the elevation error and horizontal error of each case.

Step 3: Compute the accuracy of each configuration with a fuzzy comprehensive evaluation model and compare it against the elevation and horizontal errors of the cases.
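The fuzzy comprehensive evaluation named in step 3 can be sketched as the usual weighted composition of a membership matrix. The factor weights, grades, and membership values below are illustrative assumptions; the patent gives no concrete numbers:

```python
# Hedged sketch of a fuzzy comprehensive evaluation: B = A . R, i.e. factor
# weights composed with a factor-by-grade membership matrix. All numbers are
# illustrative assumptions.

def fuzzy_evaluate(weights, membership):
    """weights: importance of each factor (sums to 1).
    membership: one row per factor, degree of membership in each quality
    grade (e.g. excellent / good / poor).
    Returns the composite membership in each grade."""
    grades = len(membership[0])
    return [sum(w * row[g] for w, row in zip(weights, membership))
            for g in range(grades)]

# Assumed factors: forward overlap, side overlap, survey altitude,
# reference point height. Grades: excellent, good, poor.
weights = [0.3, 0.3, 0.25, 0.15]
membership = [
    [0.7, 0.2, 0.1],   # forward overlap
    [0.6, 0.3, 0.1],   # side overlap
    [0.5, 0.3, 0.2],   # survey altitude
    [0.4, 0.4, 0.2],   # reference point height
]
score = fuzzy_evaluate(weights, membership)
best = score.index(max(score))  # index of the dominant grade
```

The grade with the largest composite membership is taken as the evaluated accuracy class, which can then be compared against the measured elevation and horizontal errors as the step describes.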

Further, in step S2 an image-control-point recognition and classification model is used to identify the image control point markers in the image data and to preprocess the images.

Further, the construction and training of the image-control-point recognition and classification model comprises the following steps:

Step 1: Set up a number of image control point markers, photograph original images of the markers, and preprocess the images, including cropping, resizing, grayscale conversion, and normalization, to ensure the consistency and usability of the image data.

Step 2: Use a convolutional neural network from a deep learning model to learn features automatically, converting the images into feature representations that a machine learning algorithm can process; an encoder-decoder neural network based on the VGG11 network is adopted and trained.

Step 3: Annotate the preprocessed image data set to form annotation files, assigning the correct classification label to each sample; select a suitable classification model according to the characteristics of the task; the annotation files and the original images together form the sample set.

Step 4: During model training, first classify each original image to judge whether it contains an image control point; if it does, segment the image so that the classification is accurate down to each pixel.

Step 5: Divide the sample set into a training set and a test set, with the training set holding 70% of the samples and the test set 30%; evaluate the trained model on the test set, compute its accuracy and precision, and tune the model according to the evaluation results.

Step 6: Finally, deploy the trained model to the actual application environment to perform image-control-point recognition and classification on the images.
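The 70/30 split and accuracy computation of step 5 can be sketched as follows; the sample data and the trivial classifier are illustrative stand-ins for the patent's VGG11-based model:

```python
# Hedged sketch of step 5: a seeded 70/30 train/test split and an accuracy
# metric. The data and the dummy classifier are illustrative assumptions.
import random

def split_samples(samples, train_ratio=0.7, seed=42):
    """Shuffle and split a sample list into train/test subsets."""
    items = list(samples)
    random.Random(seed).shuffle(items)
    cut = int(len(items) * train_ratio)
    return items[:cut], items[cut:]

def accuracy(predictions, labels):
    """Fraction of predictions matching the ground-truth labels."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

samples = [(i, i % 2) for i in range(100)]   # (image id, has-control-point label)
train, test = split_samples(samples)
labels = [y for _, y in test]
preds = [1] * len(test)                      # dummy always-positive classifier
acc = accuracy(preds, labels)
```

Seeding the shuffle keeps the split reproducible between evaluation runs, which matters when tuning the model against the same test set as the step prescribes.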

The solution also discloses a three-dimensional real-scene modeling system based on UAV aerial survey, comprising a UAV, a processing station, and a communication base station; the UAV establishes a connection with the communication base station to communicate with the ground processing station, through which the processing station controls the UAV.

The UAV comprises a UAV control unit, a data acquisition unit, and a processing station control unit.

The UAV control unit comprises a UAV processor and an inertial measurement system, positioning system, power supply system, storage system, and wireless communication system, all connected to the UAV processor, which receives and processes their signals.

The inertial measurement system consists of an accelerometer and a gyroscope and senses the UAV's acceleration, from which the UAV's velocity and attitude are obtained by integration. The accelerometer measures the UAV's acceleration relative to inertial space and indicates the local vertical; the gyroscope measures the angular displacement of the UAV relative to the rotation of the carrying platform and indicates the direction of the Earth's rotation axis. The inertial measurement system thus provides the UAV's attitude, which facilitates control of angled shooting.

The positioning system uses GPS or BeiDou navigation positioning to locate the UAV. The power supply system uses a lithium-ion power battery, and the storage system stores data on a memory card combined with an external hard disk. The wireless communication system uses a long-range WiFi module.

The data acquisition unit comprises a camera, an infrared rangefinder, and a battery information sensor. The camera takes photographs and video; the infrared rangefinder measures the vertical distance between the UAV body and the base points and image control points; the battery information sensor collects battery charge information. During the aerial survey, when the real-time charge of the UAV battery falls below the set threshold, the UAV processor sends a signal to the processing station through the wireless communication system, reminding the operator to fly the UAV back and avoid losses. In use, the power supply system powers the UAV, the battery information sensor monitors the battery charge so the UAV can return and recharge in time, and the UAV's speed, attitude, and captured footage are obtained in real time during flight.

The processing station control unit is communicatively connected to the processing station, which contains a data receiving unit and a monitoring screen; the monitoring screen and data receiving unit are connected to the UAV processor and the data acquisition unit through the wireless communication system and provide monitoring, control, and information processing.

The solution discloses a three-dimensional real-scene modeling method for UAV aerial survey; the beneficial effects obtained by adopting the above scheme are as follows:

1. Pre-survey preparation is made according to the characteristics of the area to be surveyed. The image control points are laid out using the survey method of map root points and an axially symmetric arrangement, with increased layout density along complex boundaries of the survey area, which reduces the redundant workload contained in the strip-shaped survey area and the overlap with non-survey regions.

2. The UAV flies to each image control point and performs oblique photography to obtain image data containing the control points. In constructing and training the image-control-point recognition and classification model, the images are preprocessed and classified by the trained neural network; by judging whether an image contains a control point and then segmenting it, classification and recognition of the control points is achieved. Unusable frames are discarded, later workload is reduced, and image classification and subsequent real-scene modeling are facilitated.

3. The image data acquired at each image control point are input into the image quality model for accuracy evaluation. By varying the forward overlap, side overlap, survey altitude, and reference point height, the model computes the accuracy of each configuration under a fuzzy comprehensive evaluation and compares it against the elevation and horizontal errors of the cases, judging whether the image quality meets the requirement standard; redundant images are deleted. This solves the heavy-workload problem of the prior art as well as the problems of insufficient aerial survey quality and accuracy and poor measurement precision.

4. In addition, the system provided by the solution includes real-time monitoring: the processing station is equipped with a monitoring screen for real-time monitoring, providing monitoring, control, and information processing, and also monitors the UAV battery charge to ensure normal use of the UAV.

Brief Description of the Drawings

The drawings provide a further understanding of the invention and form part of the specification; together with the embodiments they explain the invention and do not limit it.

Figure 1 is the flow chart of the modeling method;

Figure 2 is the composition diagram of the modeling system;

Figure 3 is the flow chart of the construction and training process of the image-control-point recognition and classification model in the embodiment;

Figure 4 is the flow chart of the image quality model evaluating the accuracy of acquired images in the embodiment.

Detailed Description of the Embodiments

The technical solutions in the embodiments of the invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some of the embodiments of the invention, not all of them; all other embodiments obtained by those of ordinary skill in the art based on the embodiments of the invention without creative effort fall within the scope of protection of the invention.

Embodiment: as shown in Figure 2, the invention provides a three-dimensional real-scene modeling system based on UAV aerial survey, comprising a UAV, a processing station, and a communication base station; the UAV establishes a connection with the communication base station to communicate with the ground processing station, through which the processing station controls the UAV.

Referring to Figure 2, the UAV comprises a UAV control unit, a data acquisition unit, and a processing station control unit.

The UAV control unit monitors and acquires the various states of the UAV to ensure that the aerial survey runs normally. It comprises a UAV processor and an inertial measurement system, positioning system, power supply system, storage system, and wireless communication system, all connected to the UAV processor, which receives and processes their signals. In a preferred embodiment, the inertial measurement system consists of an accelerometer and a gyroscope and senses the UAV's acceleration, from which the UAV's velocity and attitude are obtained by integration. The accelerometer measures the UAV's acceleration relative to inertial space and indicates the local vertical; the gyroscope measures the angular displacement of the UAV relative to the rotation of the carrying platform and indicates the direction of the Earth's rotation axis; the inertial measurement system thus provides the UAV's attitude, which facilitates control of angled shooting. The positioning system uses GPS or BeiDou navigation positioning to locate the UAV. The power supply system uses a lithium-ion power battery, and the storage system stores data on a memory card combined with an external hard disk. The wireless communication system uses a long-range WiFi module.

The data acquisition unit comprises a camera, an infrared rangefinder, and a battery information sensor. The camera takes photographs and video; the infrared rangefinder measures the vertical distance between the UAV body and the base points and image control points; the battery information sensor collects battery charge information. During the aerial survey, when the real-time charge of the UAV battery falls below the set threshold, the UAV processor sends a signal to the processing station through the wireless communication system, reminding the operator to fly the UAV back and avoid losses. In use, the power supply system powers the UAV, the battery information sensor monitors the battery charge so the UAV can return and recharge in time, and the UAV's speed, attitude, and captured footage are obtained in real time during flight.

The processing station control unit is communicatively connected to the processing station, which contains a data receiving unit and a monitoring screen; the monitoring screen and data receiving unit are connected to the UAV processor and the data acquisition unit through the wireless communication system and provide monitoring, control, and information processing.

参考图1、图3和图4所示,在上述系统的基础上,本方案还公开了一种基于无人机航测的三维实景建模方法,包括以下步骤:Referring to Figures 1, 3 and 4, on the basis of the above system, this solution also discloses a three-dimensional reality modeling method based on UAV aerial survey, which includes the following steps:

S1:地面设置处理站、基点并且合理布设多组像控点,像控点的布设采用图根点的测量方法以及轴对称的布设方式,在航测区域的复杂边界增加像控点的布设密度,处理站预先设置无人机的航拍参数,生成带状航测区域,将所述像控点的位置信息输入给处理站形成坐标图,获取当前位置到所述像控点之间的导航路线;S1: Set up processing stations and base points on the ground and arrange multiple groups of image control points reasonably. The image control points are laid out using the measurement method of the root point and the axially symmetric layout method. The density of the image control points is increased at the complex boundary of the aerial survey area. The processing station pre-sets the aerial photography parameters of the UAV, generates a strip aerial survey area, inputs the position information of the image control point to the processing station to form a coordinate map, and obtains the navigation route between the current position and the image control point;

其中，航拍参数设置包括：航向重叠度、旁向重叠度、航测高度和基准点高度的设置，根据所述航拍参数和无人机镜头参数生成带状航测区域。The aerial photography parameter settings include heading overlap, side overlap, aerial survey height and reference point height; a strip-shaped aerial survey area is generated from these aerial photography parameters and the UAV lens parameters.
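The relationship between these parameters and the flight plan can be sketched with a pinhole-camera footprint model: the ground footprint of one frame follows from the survey height and the lens, and the chosen overlaps fix the exposure and strip spacing. The focal length and sensor dimensions below are illustrative assumptions, not values from the patent.

```python
def photo_spacing(height_m, focal_mm, sensor_w_mm, sensor_h_mm,
                  forward_overlap, side_overlap):
    """Ground footprint of a single frame and the exposure/strip spacing
    implied by the chosen overlaps (pinhole-camera approximation over
    flat terrain; all camera figures are illustrative assumptions)."""
    foot_w = height_m * sensor_w_mm / focal_mm   # across-track footprint (m)
    foot_h = height_m * sensor_h_mm / focal_mm   # along-track footprint (m)
    along = foot_h * (1.0 - forward_overlap)     # distance between exposures (m)
    across = foot_w * (1.0 - side_overlap)       # distance between strips (m)
    return foot_w, foot_h, along, across

# 100 m survey height, 24 mm lens on a 36x24 mm sensor,
# 80% forward and 60% side overlap.
print(photo_spacing(100.0, 24.0, 36.0, 24.0, 0.8, 0.6))
```

With these numbers each frame covers 150 m by 100 m on the ground, shots are 20 m apart along track, and adjacent strips are 60 m apart.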

S2:处理站控制无人机从基准坐标控制点前往对应的像控点，通过无人机进行倾斜摄影，获得带有像控点的图像数据，采用像控点识别分类模型对所述图像数据中的像控点标识进行识别与图像的预处理工作，筛除废片；S2: The processing station directs the drone from the reference coordinate control point to the corresponding image control point, where oblique photography is performed to obtain image data containing image control points; an image control point recognition and classification model is applied to identify the image control point markers in the image data and to pre-process the images, screening out unusable frames;

其中,所述像控点识别分类模型的构建与训练过程,包括以下步骤:Wherein, the construction and training process of the image control point recognition and classification model includes the following steps:

S2.1:搭建若干像控点标识，拍摄像控点标识的原始图像，对图像进行预处理，包括图像的裁剪、尺寸调整、灰度转换、归一化，确保图像数据的一致性与可用性；S2.1: Set up a number of image control point markers, capture original images of the markers, and preprocess the images, including cropping, resizing, grayscale conversion and normalization, to ensure the consistency and usability of the image data;

S2.2:使用深度学习模型中的卷积神经网络自动学习特征，将图像转化为机器学习算法可以处理的特征表达形式；采用基于VGG11网络的编码器-解码器的神经网络并训练；S2.2: Use a convolutional neural network within a deep learning model to learn features automatically, converting the images into feature representations that machine learning algorithms can process; an encoder-decoder neural network based on the VGG11 backbone is adopted and trained;

S2.3:对预处理后的图像数据集进行标注，形成标注文件，为每个样本分配正确的分类标签，根据任务的特点选择合适的分类模型，标注文件与原始图像相对应形成样本集；S2.3: Annotate the preprocessed image data set to produce annotation files, assigning the correct classification label to each sample and selecting a classification model suited to the task; the annotation files, paired with the original images, form the sample set;

S2.4:模型训练时，先对原始图像做图像分类，判断图像中是否有像控点，若为是的情况下对图像做图像分割，从而精确到每一个像素的分类；S2.4: During model training, first perform image classification on the original image to judge whether it contains an image control point; if it does, perform image segmentation on the image to obtain a classification accurate to each pixel;

S2.5:将样本集划分为训练集和测试集，其中训练集占样本总数的70%，测试集占样本总数的30%；使用测试集对训练好的模型进行评估，计算模型的准确率、精确度，根据评估结果对模型进行调优；S2.5: Divide the sample set into a training set and a test set, with the training set accounting for 70% of the samples and the test set for 30%; evaluate the trained model on the test set, compute its accuracy and precision, and tune the model according to the evaluation results;

S2.6:最后将训练好的模型部署到实际应用环境中，用于实现对图像的像控点识别分类功能。S2.6: Finally, deploy the trained model to the actual application environment to provide image control point recognition and classification for the images.
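The 70/30 split and test-set evaluation of step S2.5 can be sketched as follows. The shuffle seed and the toy label lists are assumptions added for reproducibility; the actual model under evaluation is the VGG11-based network described above.

```python
import random

def split_samples(samples, train_frac=0.7, seed=0):
    """Shuffle and split the labelled sample set, 70% training and
    30% testing as in step S2.5 (seed is only for reproducibility)."""
    rng = random.Random(seed)
    shuffled = samples[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_frac)
    return shuffled[:cut], shuffled[cut:]

def accuracy(y_true, y_pred):
    """Fraction of correct predictions, one of the metrics
    computed on the test set in step S2.5."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

train_set, test_set = split_samples(list(range(10)))
print(len(train_set), len(test_set))          # 7 3
print(accuracy([1, 0, 1, 1], [1, 0, 0, 1]))  # 0.75
```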

前往相应的像控点并进行倾斜摄影，从而获得带有像控点的图像数据，在像控点识别分类模型的构建与训练过程中，经过神经网络学习，对图像做预处理，并进行图像分类，通过判断图像中是否存在像控点，从而对图像进行分割，用于实现对像控点的分类识别，筛除废片的同时，减少后期的工作量，便于图像的分类与后期的实景建模。The drone travels to the corresponding image control point and performs oblique photography to obtain image data containing image control points. In the construction and training of the image control point recognition and classification model, the images are preprocessed and classified through neural network learning: by judging whether image control points are present in an image, the image is then segmented, realizing classification and recognition of the image control points. This screens out unusable frames while reducing later workload, facilitating image classification and the subsequent live-action modeling.

S3:在摄影过程中，将每次获取的图像数据输入图像质量模型进行精度评价；如果精度符合预设值，则继续获取下一个像控点的图像数据，直至采集完成所有的像控点图像数据；如果精度不符合预设值，则重新获取图像；S3: During photography, the image data acquired each time is input into the image quality model for accuracy evaluation. If the accuracy meets the preset value, image data for the next image control point is acquired, until image data for all image control points has been collected; if the accuracy does not meet the preset value, the image is reacquired;
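The acquire-evaluate-reacquire loop of S3 can be sketched as follows. The retry cap and the stand-in `capture`/`quality_ok` functions are assumptions added so the sketch terminates and runs on its own; in the real system they would be the camera and the image quality model.

```python
def survey_points(points, capture, quality_ok, max_retries=3):
    """Acquire one accepted image per control point, re-shooting while
    the quality model rejects the frame (step S3). The retry cap is an
    assumption to keep the sketch terminating."""
    accepted = {}
    for p in points:
        for _ in range(max_retries):
            img = capture(p)
            if quality_ok(img):
                accepted[p] = img
                break
    return accepted

# Toy stand-ins: the second attempt at point "B" passes the check.
attempts = {"A": 0, "B": 0}
def capture(p):
    attempts[p] += 1
    return (p, attempts[p])
def quality_ok(img):
    point, n = img
    return point != "B" or n >= 2

print(survey_points(["A", "B"], capture, quality_ok))
# → {'A': ('A', 1), 'B': ('B', 2)}
```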

其中,图像质量模型的评价流程,包括如下步骤:Among them, the evaluation process of the image quality model includes the following steps:

S3.1:选择拍摄图像的某一目标点进行实例分析;S3.1: Select a target point of the captured image for instance analysis;

S3.2:选择不同的航向重叠率、旁向重叠度、航测高度和基准点高度的布设，统计各实例的高程误差和水平误差；S3.2: Select different layouts of heading overlap, side overlap, aerial survey height and reference point height, and compile statistics on the elevation error and horizontal error of each instance;

S3.3:通过模糊综合评价模型计算出各种情况的精度和各实例的高程误差和水平误差进行对比分析。S3.3: Use the fuzzy comprehensive evaluation model to calculate the accuracy of each configuration, and comparatively analyze the elevation error and horizontal error of each instance.
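A fuzzy comprehensive evaluation combines a factor weight vector W with a membership matrix R to score each quality grade. A minimal sketch follows, assuming the common weighted-average operator and illustrative weights and memberships; the patent does not specify the operator or the numeric values.

```python
def fuzzy_evaluate(weights, membership):
    """Fuzzy comprehensive evaluation, B = W . R: combine the factor
    weight vector W with the membership matrix R to obtain a score for
    each grade (weighted-average operator; an assumption here)."""
    grades = len(membership[0])
    return [sum(w * row[g] for w, row in zip(weights, membership))
            for g in range(grades)]

# Factors: heading overlap, side overlap, survey height, base-point height.
W = [0.3, 0.3, 0.2, 0.2]
# Rows: each factor's membership in the grades (good, fair, poor).
R = [[0.7, 0.2, 0.1],
     [0.6, 0.3, 0.1],
     [0.5, 0.3, 0.2],
     [0.4, 0.4, 0.2]]
print(fuzzy_evaluate(W, R))  # → [0.57, 0.29, 0.14]
```

The resulting grade vector can then be compared against the preset accuracy threshold to accept or reject a configuration.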

S4:利用重新获取的符合精度的图像覆盖原有的模糊景象，实现对实景的更新，根据采集的图像建模后形成最终的实景模型图。S4: Cover the original blurred scene with the reacquired images that meet the accuracy requirement, thereby updating the live-action scene; the final live-action model is formed by modeling from the collected images.

根据每个像控点获取的图像数据输入图像质量模型进行精度评价，该图像质量模型通过不同的航向重叠率、旁向重叠度、航测高度和基准点高度的布设，在模糊综合评价模型的计算下得出各种情况的精度和实施例的高程误差和水平误差进行对比分析，从而判断图像质量的精度，判断是否符合图像需求标准，从而删除冗余图像，从而解决现有技术中工作量大的问题，同时解决了现有技术中航测质量精度不够的问题，以及测量的精度欠佳等问题。The image data obtained at each image control point is input into the image quality model for accuracy evaluation. Using different layouts of heading overlap, side overlap, aerial survey height and reference point height, the fuzzy comprehensive evaluation model computes the accuracy of each configuration, and the elevation and horizontal errors of the instances are comparatively analyzed. This judges the accuracy of the image quality and whether an image meets the requirement standard, so that redundant images can be deleted, addressing the heavy workload of the prior art as well as its insufficient aerial survey quality and poor measurement accuracy.

需要说明的是，在本文中，诸如第一和第二等之类的关系术语仅仅用来将一个实体或者操作与另一个实体或操作区分开来，而不一定要求或者暗示这些实体或操作之间存在任何这种实际的关系或者顺序。而且，术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含，从而使得包括一系列要素的过程、方法、物品或者设备不仅包括那些要素，而且还包括没有明确列出的其他要素，或者是还包括为这种过程、方法、物品或者设备所固有的要素。It should be noted that, in this document, relational terms such as first and second are used only to distinguish one entity or operation from another, and do not necessarily require or imply that any such actual relationship or order exists between these entities or operations. Furthermore, the terms "comprises", "includes", or any variations thereof are intended to cover a non-exclusive inclusion, such that a process, method, article or apparatus that includes a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article or apparatus.

Claims (7)

1.一种基于无人机航测的三维实景建模方法，其特征在于，包括以下步骤：S1:地面设置处理站、基点并且合理布设多组像控点，处理站预先设置无人机的航拍参数，生成带状航测区域，将所述像控点的位置信息输入给处理站形成坐标图，获取当前位置到所述像控点之间的导航路线；S2:处理站控制无人机从基准坐标控制点前往对应的像控点，通过无人机进行倾斜摄影，获得带有像控点的图像数据，对获取的图像数据进行预处理，筛除废片；S3:在摄影过程中，将每次获取的图像数据输入图像质量模型进行精度评价；如果精度符合预设值，则继续获取下一个像控点的图像数据，直至采集完成所有的像控点图像数据；如果精度不符合预设值，则重新获取图像；S4:利用重新获取的符合精度的图像覆盖原有的模糊景象，实现对实景的更新，根据采集的图像建模后形成最终的实景模型图。1. A three-dimensional live-action modeling method based on UAV aerial survey, characterized by including the following steps: S1: Set up a processing station and base points on the ground and reasonably lay out multiple groups of image control points; the processing station pre-sets the aerial photography parameters of the UAV, generates a strip-shaped aerial survey area, the position information of the image control points is input to the processing station to form a coordinate map, and the navigation route from the current position to each image control point is obtained; S2: The processing station directs the drone from the reference coordinate control point to the corresponding image control point, oblique photography is performed with the drone to obtain image data containing image control points, and the acquired image data is preprocessed to screen out unusable frames; S3: During photography, the image data acquired each time is input into the image quality model for accuracy evaluation; if the accuracy meets the preset value, image data for the next image control point is acquired, until image data for all image control points has been collected; if the accuracy does not meet the preset value, the image is reacquired; S4: The original blurred scene is covered with the reacquired images that meet the accuracy requirement, thereby updating the live-action scene, and the final live-action model is formed by modeling from the collected images.

2.根据权利要求1所述的一种基于无人机航测的三维实景建模方法，其特征在于：所述步骤S1中航拍参数设置包括：航向重叠度、旁向重叠度、航测高度和基准点高度的设置，根据所述航拍参数和无人机镜头参数生成带状航测区域。2. The three-dimensional live-action modeling method based on UAV aerial survey according to claim 1, characterized in that the aerial photography parameter settings in step S1 include heading overlap, side overlap, aerial survey height and reference point height, and a strip-shaped aerial survey area is generated from the aerial photography parameters and the UAV lens parameters.

3.根据权利要求2所述的一种基于无人机航测的三维实景建模方法，其特征在于：所述步骤S2中采用像控点识别分类模型对所述图像数据中的像控点标识进行识别与图像的预处理工作。3. The method according to claim 2, characterized in that in step S2 an image control point recognition and classification model is used to identify the image control point markers in the image data and to preprocess the images.

4.根据权利要求3所述的一种基于无人机航测的三维实景建模方法，其特征在于：所述像控点识别分类模型的构建与训练过程，包括以下步骤：步骤1:搭建若干像控点标识，拍摄像控点标识的原始图像，对图像进行预处理，包括图像的裁剪、尺寸调整、灰度转换、归一化，确保图像数据的一致性与可用性；步骤2:使用深度学习模型中的卷积神经网络自动学习特征，将图像转化为机器学习算法可以处理的特征表达形式；采用基于VGG11网络的编码器-解码器的神经网络并训练；步骤3:对预处理后的图像数据集进行标注，形成标注文件，为每个样本分配正确的分类标签，根据任务的特点选择合适的分类模型，标注文件与原始图像相对应形成样本集；步骤4:模型训练时，先对原始图像做图像分类，判断图像中是否有像控点，若为是的情况下对图像做图像分割，从而精确到每一个像素的分类；步骤5:将样本集划分为训练集和测试集，其中训练集占样本总数的70%，测试集占样本总数的30%；使用测试集对训练好的模型进行评估，计算模型的准确率、精确度，根据评估结果对模型进行调优；步骤6:最后将训练好的模型部署到实际应用环境中，用于实现对图像的像控点识别分类功能。4. The method according to claim 3, characterized in that the construction and training of the image control point recognition and classification model includes the following steps: Step 1: Set up a number of image control point markers, capture original images of the markers, and preprocess the images, including cropping, resizing, grayscale conversion and normalization, to ensure the consistency and usability of the image data; Step 2: Use a convolutional neural network within a deep learning model to learn features automatically, converting the images into feature representations that machine learning algorithms can process; adopt and train an encoder-decoder neural network based on the VGG11 backbone; Step 3: Annotate the preprocessed image data set to produce annotation files, assign the correct classification label to each sample, and select a classification model suited to the task; the annotation files, paired with the original images, form the sample set; Step 4: During model training, first perform image classification on the original image to judge whether it contains an image control point; if it does, perform image segmentation on the image to obtain a classification accurate to each pixel; Step 5: Divide the sample set into a training set and a test set, with the training set accounting for 70% of the samples and the test set for 30%; evaluate the trained model on the test set, compute its accuracy and precision, and tune the model according to the evaluation results; Step 6: Finally, deploy the trained model to the actual application environment to provide image control point recognition and classification for the images.

5.根据权利要求4所述的一种基于无人机航测的三维实景建模方法，其特征在于：所述步骤S3中图像质量模型的评价流程，包括如下步骤：步骤一、选择拍摄图像的某一目标点进行实例分析；步骤二、选择不同的航向重叠率、旁向重叠度、航测高度和基准点高度的布设，统计各实例的高程误差和水平误差；步骤三、通过模糊综合评价模型计算出各种情况的精度和各实例的高程误差和水平误差进行对比分析。5. The method according to claim 4, characterized in that the evaluation process of the image quality model in step S3 includes the following steps: Step 1: Select a target point in the captured image for instance analysis; Step 2: Select different layouts of heading overlap, side overlap, aerial survey height and reference point height, and compile statistics on the elevation error and horizontal error of each instance; Step 3: Use the fuzzy comprehensive evaluation model to calculate the accuracy of each configuration, and comparatively analyze the elevation error and horizontal error of each instance.

6.根据权利要求1所述的一种基于无人机航测的三维实景建模方法，其特征在于：所述步骤S1中像控点的布设采用图根点的测量方法以及轴对称的布设方式，在航测区域的复杂边界增加像控点的布设密度。6. The method according to claim 1, characterized in that in step S1 the image control points are laid out using the map-root point survey method and an axially symmetric arrangement, with a higher density of image control points along complex boundaries of the aerial survey area.

7.根据权利要求1-6任一所述的一种基于无人机航测的三维实景建模方法的建模系统，其特征在于：包括无人机、处理站和通讯基站，所述无人机通过与通讯基站之间建立连接，从而实现与地面处理站的通信，所述处理站从而对无人机进行控制；所述无人机包含有无人机控制单元、数据采集单元和处理站控制单元；所述无人机控制单元包括：无人机处理器、惯性测量系统、定位系统、供电系统、存储系统、无线通信系统，上述系统均与无人机处理器相连接；无人机处理器用于接收并处理信号使用；所述惯性测量系统由加速度计和陀螺仪组成，用于感测无人机的加速度并通过积分运算获得无人机速度和姿态的相关数据，其中，加速度计用来测量无人机运动过程中相对于惯性空间的加速度，指示当地垂线方向；陀螺仪则用来测量无人机相对搭载平台转动运动方向的角位移，指示地球自转轴的方向，通过惯性测量系统的设置从而获取无人机姿态，便于角度拍摄的控制；所述定位系统采用GPS定位或者北斗导航定位，用于定位无人机的位置；所述供电系统采用锂离子动力电池；所述存储系统采用存储卡与外接硬盘结合的方式存储数据；所述无线通信系统采用远距离WiFi模块实现通信；所述数据采集单元包括：摄像机、红外测距仪和电池信息传感器，所述摄像机用于照片以及视频的拍摄，所述红外测距仪用于测量无人机机身与基点、像控点之间的高度距离，所述电池信息传感器用于电池电量信息的采集，在航测的过程中无人机电池的实时电量小于电量设置阈值时，无人机处理器通过无线通信系统向处理站发送信号，提醒工作人员；所述处理站控制单元与处理站通讯连接，所述处理站包含有数据接收单元以及监控屏，所述监控屏以及数据接收单元通过无线通信系统与无人机处理器以及数据采集单元相连接，起到监控、控制与信息处理的作用。7. A modeling system for the three-dimensional live-action modeling method based on UAV aerial survey according to any one of claims 1 to 6, characterized by including a UAV, a processing station and a communication base station; the UAV establishes a connection with the communication base station to communicate with the ground processing station, through which the processing station controls the UAV; the UAV includes a UAV control unit, a data collection unit and a processing station control unit; the UAV control unit includes a UAV processor, an inertial measurement system, a positioning system, a power supply system, a storage system and a wireless communication system, all of which are connected to the UAV processor; the UAV processor is used to receive and process signals; the inertial measurement system consists of an accelerometer and a gyroscope and is used to sense the acceleration of the UAV and obtain the UAV's speed and attitude data through integration, wherein the accelerometer measures the acceleration of the UAV relative to inertial space during its motion, indicating the local vertical direction, and the gyroscope measures the angular displacement of the UAV relative to the rotational direction of the mounting platform, indicating the direction of the earth's rotation axis; through the inertial measurement system the UAV's attitude is obtained, facilitating control of the shooting angle; the positioning system uses GPS or BeiDou navigation positioning to locate the UAV; the power supply system uses lithium-ion power batteries; the storage system stores data using a memory card combined with an external hard disk; the wireless communication system uses a long-range WiFi module; the data collection unit includes a camera, an infrared rangefinder and a battery information sensor, the camera is used to capture photos and video, the infrared rangefinder is used to measure the height and distance between the UAV body and the base point and image control points, the battery information sensor is used to collect battery charge information, and when the real-time charge of the UAV battery falls below the set threshold during the aerial survey, the UAV processor sends a signal to the processing station through the wireless communication system to alert the staff; the processing station control unit is communicatively connected to the processing station, the processing station includes a data receiving unit and a monitoring screen, and the monitoring screen and data receiving unit are connected to the UAV processor and the data collection unit through the wireless communication system, providing monitoring, control and information processing.
CN202311377081.2A 2023-10-24 2023-10-24 Three-dimensional live-action modeling system and method based on unmanned aerial vehicle aerial survey Withdrawn CN117456092A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311377081.2A CN117456092A (en) 2023-10-24 2023-10-24 Three-dimensional live-action modeling system and method based on unmanned aerial vehicle aerial survey

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311377081.2A CN117456092A (en) 2023-10-24 2023-10-24 Three-dimensional live-action modeling system and method based on unmanned aerial vehicle aerial survey

Publications (1)

Publication Number Publication Date
CN117456092A true CN117456092A (en) 2024-01-26

Family

ID=89588451

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311377081.2A Withdrawn CN117456092A (en) 2023-10-24 2023-10-24 Three-dimensional live-action modeling system and method based on unmanned aerial vehicle aerial survey

Country Status (1)

Country Link
CN (1) CN117456092A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117950355A (en) * 2024-03-27 2024-04-30 西安爱生无人机技术有限公司 Reconnaissance unmanned aerial vehicle supervision control system and reconnaissance unmanned aerial vehicle supervision control method
CN119359937A (en) * 2024-12-26 2025-01-24 四川省地质调查研究院测绘地理信息中心 Building boundary recognition method based on real-scene 3D technology and multi-source fusion data

Similar Documents

Publication Publication Date Title
CN112567201B (en) Distance measuring method and device
JP7326720B2 (en) Mobile position estimation system and mobile position estimation method
CN103941746B (en) Image processing system and method is patrolled and examined without man-machine
WO2022078240A1 (en) Camera precise positioning method applied to electronic map, and processing terminal
CN109520500B (en) A precise positioning and street view library collection method based on terminal shooting image matching
CN117456092A (en) Three-dimensional live-action modeling system and method based on unmanned aerial vehicle aerial survey
CN107504957A (en) The method that three-dimensional terrain model structure is quickly carried out using unmanned plane multi-visual angle filming
CN107194989A (en) The scene of a traffic accident three-dimensional reconstruction system and method taken photo by plane based on unmanned plane aircraft
CN110033489A (en) A kind of appraisal procedure, device and the equipment of vehicle location accuracy
CN116719339A (en) Unmanned aerial vehicle-based power line inspection control method and system
CN113359782B (en) A method for autonomous location and landing of unmanned aerial vehicles integrating LIDAR point cloud and image data
WO2020181508A1 (en) Digital surface model construction method, and processing device and system
JP2020153956A (en) Moving body position estimation system and moving body position estimation method
WO2021017211A1 (en) Vehicle positioning method and device employing visual sensing, and vehicle-mounted terminal
CN110998241A (en) System and method for calibrating an optical system of a movable object
CN112815923B (en) Visual positioning method and device
CN114004977A (en) Aerial photography data target positioning method and system based on deep learning
CN106292126A Intelligent aerial survey flight exposure control method, unmanned aerial vehicle (UAV) control method and terminal
TWI444593B (en) Ground target geolocation system and method
CN112991487A (en) System for multithreading real-time construction of orthoimage semantic map
CN107272037A (en) A kind of road equipment position, image information collecting device and the method for gathering information
CN112947526A (en) Unmanned aerial vehicle autonomous landing method and system
CN115164769A (en) Three-dimensional real estate measuring and calculating method based on oblique photography technology
CN113781639A (en) A rapid construction method of digital model of large-scale road infrastructure
CN111612829B (en) High-precision map construction method, system, terminal and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
WW01 Invention patent application withdrawn after publication
WW01 Invention patent application withdrawn after publication

Application publication date: 20240126