CN118135011A - Meteorological station surrounding obstacle elevation angle measuring method based on visual recognition technology
- Publication number: CN118135011A
- Application number: CN202311833347.XA
- Authority: CN (China)
- Prior art keywords
- scale
- obstacle
- elevation angle
- information
- picture
- Prior art date: 2023-12-27
- Legal status: Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/46—Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/07—Target detection
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02A—TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
- Y02A90/00—Technologies having an indirect contribution to adaptation to climate change
- Y02A90/10—Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Computation (AREA)
- Databases & Information Systems (AREA)
- Medical Informatics (AREA)
- Software Systems (AREA)
- General Health & Medical Sciences (AREA)
- Computing Systems (AREA)
- Artificial Intelligence (AREA)
- Health & Medical Sciences (AREA)
- Image Analysis (AREA)
Abstract
Description
Technical Field

The present invention relates to the technical field of image processing and machine vision, and in particular to a method for measuring the elevation angles of obstacles around a meteorological station based on visual recognition technology.

Background Art

When selecting a site for a ground-based meteorological observation station, the influence of obstruction by surrounding obstacles on the observations must be considered; the corresponding evaluation index is called the "horizon obstruction elevation angle". Traditionally, this angle is measured by setting up a theodolite horizontally at the center point of the observation field, with the lens 1.5 m above the ground and the 0° mark of the azimuth dial aligned with true north. The maximum obstruction elevation angle of the terrain within the visible range is measured clockwise from due north at azimuth intervals of 2°, and the cumulative value of the obstruction view-angle factor is then calculated from the combined measurement results.

However, the traditional measurement method is time-consuming and labor-intensive; it cannot be performed routinely and therefore cannot promptly reflect changes in the obstacles around the observation field.
Summary of the Invention

Purpose of the invention: In view of the problems in the prior art, the present invention provides a method for measuring the elevation angles of obstacles around a meteorological station based on visual recognition technology. Video image data are collected with a camera, the contours of the target obstacles are extracted, and the obstacle elevation-angle information is obtained by proportional calculation based on the perspective imaging principle and the principle of similar triangles. The method obtains the elevation-angle information of the surrounding obstacles through a rapid 360° scan.

Technical solution: The present invention provides a method for measuring the elevation angles of obstacles around a meteorological station based on visual recognition technology, comprising the following steps:

Step 1: Build a target detection model for common obstacles and the scale; the model is trained with the YOLO deep learning framework to detect the scale and obstacles;

Step 2: Capture the scale in an image taken at the camera reference position, extract the scale contour, and obtain the position and height information of the scale;

Step 3: Rotate the camera in the horizontal plane starting from the reference position to obtain 360° panoramic information, recording the azimuth angle corresponding to each picture;

Step 4: Input the captured pictures one by one into the target detection model trained in Step 1, extract the categories and contours of the target obstacles, and calculate the obstacle elevation angles by combining the scale information from Step 2 with the known scale parameters.
Further, Step 1 comprises the following steps (a sketch of the annotation conversion and dataset split is given after this list):

Step 1.1: Collect pictures and build a dataset of scales and obstacles; the obstacles include mountains, structures, and trees; set up a scale whose height is equal to the height of the camera center point;

Step 1.2: Prepare the training annotations: use the labelme software to annotate the contours of the target obstacles and the scale; the annotations are saved as JSON files, which are then converted into YOLO-format files with a script;

Step 1.3: A Python script assembles the collected pictures into a training dataset file "train.txt" and a test dataset file "test.txt";

Step 1.4: Input the resulting dataset files into the YOLO deep learning model and train the obstacle detection model.
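As an illustration of Steps 1.2 and 1.3, the following is a minimal Python sketch of the conversion from labelme JSON annotations to YOLO-style segmentation labels and of the train/test split. The class list, directory names, and split ratio are assumptions for illustration; the patent does not specify them.

```python
import json, glob, random
from pathlib import Path

# Illustrative class list; the actual categories are defined by the dataset.
CLASSES = ["scale", "mountain", "structure", "tree"]

def labelme_to_yolo(json_path, out_dir):
    """Convert one labelme JSON file to a YOLO segmentation label file."""
    data = json.loads(Path(json_path).read_text(encoding="utf-8"))
    w, h = data["imageWidth"], data["imageHeight"]
    lines = []
    for shape in data["shapes"]:
        cls_id = CLASSES.index(shape["label"])
        # Normalize polygon points to [0, 1], as YOLO segmentation labels expect.
        coords = [f"{x / w:.6f} {y / h:.6f}" for x, y in shape["points"]]
        lines.append(f"{cls_id} " + " ".join(coords))
    out_file = Path(out_dir) / (Path(json_path).stem + ".txt")
    out_file.write_text("\n".join(lines), encoding="utf-8")

def split_dataset(image_dir, train_ratio=0.9):
    """Write train.txt / test.txt listing image paths, as described in Step 1.3."""
    images = sorted(glob.glob(str(Path(image_dir) / "*.jpg")))
    random.shuffle(images)
    k = int(len(images) * train_ratio)
    Path("train.txt").write_text("\n".join(images[:k]), encoding="utf-8")
    Path("test.txt").write_text("\n".join(images[k:]), encoding="utf-8")

if __name__ == "__main__":
    for jp in glob.glob("annotations/*.json"):
        labelme_to_yolo(jp, "labels")
    split_dataset("images")
```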
Further, Step 2 specifically comprises:

Step 2.1: Scale positioning: select a characteristic scale whose height is equal to the height of the camera center, and place it directly in front of the camera at a distance such that the complete scale appears within the camera's field of view;

Step 2.2: Picture preprocessing: after a picture containing the scale has been captured by the camera, keep one fifth of the picture width on each side of the picture center, so that the scale remains centered;

Step 2.3: Input the image into the target detection model trained in Step 1 to recognize the scale, extract the scale contour, and calculate the height c' of the scale in the image from its vertical contour.

Further, in Step 2.2, after the central picture has been obtained, the image is resized without distortion to 640×640 (a preprocessing sketch follows).
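A minimal sketch of this preprocessing, assuming OpenCV is used (the patent does not name a library); the crop follows Step 2.2 (one fifth of the width kept on each side of the center), and the gray padding value is an assumption:

```python
import cv2
import numpy as np

def center_crop(img, keep_ratio=0.4):
    """Keep one fifth of the width on each side of the image center (2/5 of the width in total)."""
    h, w = img.shape[:2]
    half = int(w * keep_ratio / 2)
    cx = w // 2
    return img[:, cx - half:cx + half]

def letterbox(img, size=640, pad_value=114):
    """Resize to size x size without distortion by scaling and padding (letterboxing)."""
    h, w = img.shape[:2]
    scale = size / max(h, w)
    resized = cv2.resize(img, (int(round(w * scale)), int(round(h * scale))))
    canvas = np.full((size, size, 3), pad_value, dtype=np.uint8)
    top = (size - resized.shape[0]) // 2
    left = (size - resized.shape[1]) // 2
    canvas[top:top + resized.shape[0], left:left + resized.shape[1]] = resized
    return canvas, scale, (top, left)

# Usage: crop the central strip containing the scale, then letterbox to 640x640.
# frame = cv2.imread("frame_000.jpg")
# square, scale_factor, offsets = letterbox(center_crop(frame))
```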
Further, in Step 3 the picture acquisition interval is no greater than 2°, i.e., image information is captured at least once every 2° of rotation.

Further, the specific steps for calculating the obstacle elevation angle in Step 4 are:

1) Obtain the central picture using the image preprocessing method of Step 2.2 and resize it to 640×640;

2) Input the resized image into the target detection model trained in Step 1, recognize the categories and contours of the target obstacles, and extract the relevant information;

3) Extract the characteristic size: extract the coordinates of the highest point of the obstacle contour at the vertical center line of each picture and calculate the difference between the vertical coordinate of that point and the vertical coordinate of the image center; the absolute value of this difference is the size b' by which the obstacle exceeds the scale in the image;
4) Elevation angle calculation: combining the actual height c of the scale and the actual distance a between the scale and the camera, the elevation angle is calculated by the principle of similar triangles using the following formulas:

b = b' × c / c'  (1)

tan α = b / a  (2)

α = arctan(b / a)  (3)

where b' is the size by which the obstacle exceeds the scale in the image; c' is the height of the scale in the image; c is the actual height of the scale; b is the height of the obstacle above the camera's horizontal plane at the position of the scale; a is the actual distance between the scale and the camera; and α is the obstacle elevation angle.
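For illustration, formulas (1)-(3) can be written as a short Python function. The numerical values in the usage comment are purely illustrative and are not taken from the patent:

```python
import math

def elevation_angle_deg(b_img, c_img, c_actual, a_distance):
    """
    Elevation angle from the image measurements and the known scale geometry.

    b_img      -- b': pixels by which the obstacle top exceeds the image center line
    c_img      -- c': height of the scale in the image, in pixels
    c_actual   -- c : actual height of the scale (metres)
    a_distance -- a : actual distance between scale and camera (metres)
    """
    b = b_img * c_actual / c_img                     # formula (1): convert pixels to metres
    return math.degrees(math.atan2(b, a_distance))   # formulas (2)-(3)

# Example with illustrative numbers: a 1.5 m scale imaged 200 px tall,
# an obstacle 80 px above the center line, scale placed 10 m from the camera:
# elevation_angle_deg(80, 200, 1.5, 10.0)  ->  about 3.4 degrees
```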
Beneficial effects:

The present invention collects video image data with a camera, extracts the bounding box of each target obstacle, and obtains the obstacle elevation-angle information by proportional calculation based on the perspective imaging principle and the principle of similar triangles. The method obtains the elevation-angle information of the obstacles around a meteorological station through a rapid 360° scan. Compared with the traditional manual measurement method, it provides higher detection accuracy, is efficient and fast, can be performed routinely, and can promptly reflect changes in the obstacles around the observation field.

Brief Description of the Drawings
FIG. 1 shows the obstacle dataset collected in an embodiment of the present invention;

FIG. 2 shows the relationship between the scale and the camera selected in an embodiment of the present invention;

FIG. 3 is a schematic diagram of the perspective imaging of the camera, scale, and obstacle and of the principle of similar triangles in an embodiment of the present invention;

FIG. 4 shows the category and contour recognition of the target obstacles and the scale produced by the target detection model in an embodiment of the present invention;

FIG. 5 is a framework diagram of the YOLO deep learning model used in the present invention.
Detailed Description of the Embodiments

The present invention is further described below with reference to the accompanying drawings. The following embodiments are intended only to illustrate the technical solution of the present invention more clearly and do not limit its scope of protection.

The present invention discloses a method for measuring the elevation angles of obstacles around a meteorological station based on visual recognition technology. Video image data are collected with a camera, the contours of the target obstacles are extracted, and the obstacle elevation-angle information is obtained by proportional calculation based on the perspective imaging principle and the principle of similar triangles; the elevation-angle information of the surrounding obstacles is obtained through a rapid 360° scan. The method specifically comprises the following steps:
1. Obstacle target detection based on YOLO

Collect images of common obstacles (mountains, structures, trees, etc.), build an obstacle dataset, label the obstacle images by category, and build the obstacle target detection model. FIG. 1 shows the collected obstacle dataset.

Calibrate the camera to a horizontal viewing angle: select two scales whose height is equal to the height of the camera center point and adjust the camera angle so that the two points at camera height coincide in the camera view; the camera is then level (see FIG. 2).
When training the target detection model, the specific steps are as follows (an illustrative training sketch is given after this list):

1) Train the target detection model: the model uses the YOLO deep learning framework to learn the scale and obstacle classes (see FIG. 5). Use the labelme software to annotate the contours of the target obstacles and the scale; the annotations are saved as JSON files and then converted into YOLO-format files with a script.

2) A Python script assembles the collected pictures into a training dataset file "train.txt" and a test dataset file "test.txt".
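The patent does not name a specific YOLO implementation or version. As one possible realization only, a segmentation model from the Ultralytics package could be trained roughly as follows; the model file, the data configuration file, and the hyperparameters are assumptions for illustration:

```python
# Assumes the Ultralytics YOLO package (pip install ultralytics) and a data.yaml
# that points at train.txt / test.txt and lists the scale/obstacle class names.
from ultralytics import YOLO

model = YOLO("yolov8n-seg.pt")   # a small segmentation model as a starting point
model.train(
    data="data.yaml",            # dataset configuration (image lists and class names)
    imgsz=640,                   # matches the 640x640 preprocessing in Step 2.2
    epochs=100,                  # illustrative value
)
```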
Capture the scale in the image taken at the camera reference position, extract the scale contour, and obtain the position and height information of the scale, as follows:

Step 2.1: Scale positioning: select a characteristic scale whose height is equal to the height of the camera center, and place it directly in front of the camera at a distance such that the complete scale appears within the camera's field of view.

Step 2.2: Picture preprocessing: after a picture containing the scale has been captured by the camera, keep one fifth of the picture width on each side of the picture center so that the scale remains centered; then resize the central picture without distortion to 640×640.

Step 2.3: Input the image into the target detection model trained in Step 1 to recognize the scale, extract the scale contour, and calculate the height c' of the scale in the image from its vertical contour.
The camera rotates in the horizontal plane starting from the reference position to obtain 360° panoramic information, and the azimuth angle corresponding to each picture is recorded. The acquisition interval is no greater than 2°, i.e., image information is captured at least once every 2°, as sketched below.
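The patent does not describe a particular pan-tilt or camera interface. Purely as a sketch, the acquisition loop could associate each frame with its azimuth as follows; rotate_to and capture_frame are hypothetical, hardware-dependent functions:

```python
import cv2

AZIMUTH_STEP_DEG = 2  # capture at least once every 2 degrees of rotation

def scan_panorama(rotate_to, capture_frame):
    """Rotate through 360 degrees and record (azimuth, frame) pairs.

    rotate_to(azimuth_deg) and capture_frame() stand in for the actual
    pan-tilt and camera APIs, which the patent does not specify.
    """
    frames = []
    for azimuth in range(0, 360, AZIMUTH_STEP_DEG):
        rotate_to(azimuth)                            # point the camera at this azimuth
        img = capture_frame()                         # grab one frame at this position
        cv2.imwrite(f"frame_{azimuth:03d}.jpg", img)  # keep the frame with its angle in the name
        frames.append((azimuth, img))
    return frames
```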
The captured pictures are input one by one into the target detection model trained in Step 1 for detection, the categories and contours of the target obstacles are extracted, and the obstacle elevation angles are calculated by combining the scale information from Step 2 with the known scale parameters.
2. Obstacle elevation angle calculation based on target detection

Basic idea: according to the principle of similar triangles, the elevation angle of an obstacle can be calculated from b and a in FIG. 3, where b is the characteristic size of the obstacle at the position of the scale and a is the distance between the camera and the scale. Through target detection and image feature extraction, the proportional relationship between the size b' by which the obstacle exceeds the scale in the image and the height c' of the scale in the image is obtained; b then follows from formula (1), and the elevation angle is obtained from formulas (2) and (3):

b = b' × c / c'  (1)

tan α = b / a  (2)

α = arctan(b / a)  (3)

where b' is the size by which the obstacle exceeds the scale in the image; c' is the height of the scale in the image; c is the actual height of the scale; b is the height of the obstacle above the camera's horizontal plane at the position of the scale; a is the actual distance between the scale and the camera; and α is the obstacle elevation angle.
1) Object detection:

As shown in FIG. 4, the target detection model detects the bounding boxes of the scale and the obstacles in the picture, and feature extraction yields the coordinates, width, and height of each bounding box.

2) Characteristic size extraction

The vertical coordinate of the image center corresponds to the camera's horizontal plane. The difference between the vertical coordinate of the highest point of the obstacle bounding box and the vertical coordinate of the image center is the value b' in formula (1), and the height of the scale bounding box in the image is c' in formula (1), as sketched below.
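A minimal sketch of this extraction step; the bounding-box format (x1, y1, x2, y2) in pixel coordinates with the origin at the top-left is an assumption, as is the 640×640 image size carried over from the preprocessing step:

```python
def extract_b_and_c(obstacle_box, scale_box, image_size=640):
    """Return (b', c') in pixels from two detected bounding boxes.

    obstacle_box, scale_box -- (x1, y1, x2, y2) corners, origin at the top-left.
    The image center line (y = image_size / 2) corresponds to the camera's
    horizontal plane, because the scale top is at camera height.
    """
    center_y = image_size / 2
    obstacle_top_y = obstacle_box[1]            # highest point of the box = smallest y
    b_img = abs(center_y - obstacle_top_y)      # b': obstacle height above the center line
    c_img = scale_box[3] - scale_box[1]         # c': scale height in the image
    return b_img, c_img

# Example with illustrative boxes (not values from the patent):
# extract_b_and_c((100, 240, 300, 320), (310, 320, 330, 520))  ->  (80.0, 200)
```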
3) Elevation angle calculation

Formula (1) gives the perspective height b of the obstacle at the position of the scale; formula (2) then gives the tangent of the elevation angle, and formula (3) gives the elevation angle itself.

The above embodiments are intended only to illustrate the technical concept and features of the present invention, so that those skilled in the art can understand and implement it; they do not limit the scope of protection of the present invention. Any equivalent transformation or modification made in accordance with the spirit of the present invention shall fall within the scope of protection of the present invention.
Claims (6)
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
| --- | --- | --- | --- |
| CN202311833347.XA | 2023-12-27 | 2023-12-27 | Meteorological station surrounding obstacle elevation angle measuring method based on visual recognition technology |
Applications Claiming Priority (1)

| Application Number | Priority Date | Filing Date | Title |
| --- | --- | --- | --- |
| CN202311833347.XA | 2023-12-27 | 2023-12-27 | Meteorological station surrounding obstacle elevation angle measuring method based on visual recognition technology |
Publications (1)

| Publication Number | Publication Date |
| --- | --- |
| CN118135011A | 2024-06-04 |
Family

Family ID: 91230829

Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
| --- | --- | --- | --- |
| CN202311833347.XA (pending, published as CN118135011A) | Meteorological station surrounding obstacle elevation angle measuring method based on visual recognition technology | 2023-12-27 | 2023-12-27 |
Country Status (1)

| Country | Link |
| --- | --- |
| CN | CN118135011A (en) |
Legal Events

| Date | Code | Title | Description |
| --- | --- | --- | --- |
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |