CN117495918A - River water surface optical flow estimation method based on illumination self-adaptive ORB operator - Google Patents
River water surface optical flow estimation method based on illumination self-adaptive ORB operator
- Publication number
- CN117495918A (application number CN202311520874.5A)
- Authority
- CN
- China
- Prior art keywords
- optical flow
- adaptive
- pixel
- feature points
- orb
- Prior art date
- Legal status: Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/269—Analysis of motion using gradient-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/46—Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
- G06V10/462—Salient features, e.g. scale invariant feature transforms [SIFT]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/62—Extraction of image or video features relating to a temporal dimension, e.g. time-based feature extraction; Pattern tracking
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/46—Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a river water surface optical flow estimation method based on an illumination-adaptive ORB operator. Feature points are extracted with an adaptive-threshold ORB operator, and the illumination-adaptive ORB threshold is obtained as a weighted sum of a global adaptive threshold based on KSW entropy and a local adaptive threshold based on image contrast. The method first takes two adjacent RGB video frames as input, converts them to grayscale, crops the region of interest (ROI), and builds an image pyramid. The illumination-adaptive ORB threshold is then computed layer by layer and pixel by pixel on the first-frame pyramid, and ORB feature points are judged and extracted with this threshold using the FAST quick-test scheme. Finally, the Harris scoring algorithm ranks the feature points by quality, a fixed number of high-quality feature points are selected, the sparse LK optical flow method computes the coordinates to which each feature point moves in the second frame, and the optical flow motion vector of each feature point between the two adjacent frames is obtained. The invention is suitable for observing river surface flow fields under complex and changeable illumination conditions.
Description
Technical Field
The invention relates to the technical field of image-based flow field measurement, and in particular to a river water surface optical flow estimation method based on an illumination-adaptive ORB operator.
Background Art
Prototype observation of river flow fields provides essential baseline data for flood and drought disaster monitoring and early warning, safe operation of reservoirs and dams, prevention and control of embankment and slope engineering defects, and verification and calibration of river dynamics models; it bears directly on the effectiveness of science-based flood control decisions and on preventing hydraulic engineering failures. Flow field measurement based on the principle of particle image velocimetry (PIV) offers non-contact, instantaneous, whole-field velocity measurement and has been applied in laboratory flumes, river engineering models, and river prototype observation. By analyzing and computing over particle image sequences, it recovers the magnitude, direction, characteristics, and distribution of local fluid displacement and velocity, greatly improving the ability to measure complex flows in laboratory environments. However, estimating fluid motion vectors from particle images is the core difficulty of PIV, and the methods depend not only on the hardware system but also on the characteristics of the fluid being measured. Flow field measurement based on large-scale particle image velocimetry (LSPIV) does not rely on artificial tracer particles; it locates natural floating matter on the river to compute the global surface flow field. It estimates tracer motion in the flow field on the principle of grayscale correlation matching and builds a time-averaged flow field model by vector averaging. However, its spatial cross-correlation is computationally expensive and the resolution of the velocity field is low. Flow field measurement based on large-scale particle tracking velocimetry (LSPTV) performs well with sparse particles: it locates low-density, relatively large tracer particles to obtain their spatial distribution in the image, further extracts the sub-pixel centroid of each particle, and estimates the flow field by matching particle centroids across time frames. However, it cannot handle high particle concentrations in the flow field and is sensitive to several parameters that must be set manually. Flow field measurement based on space-time image velocimetry (STIV) offers high spatial resolution and strong real-time performance: it synthesizes space-time images and computes a one-dimensional time-averaged flow field from the principal texture direction of the space-time image detected via the Fourier transform. However, its temporal resolution is low and it is sensitive to complex illumination.
The optical flow method was originally proposed in computer vision, mainly to estimate apparent rigid motion from image sequences; because it can recover dense velocity vector fields from image pairs, it has also been used to estimate river motion scenes. In fluid motion estimation, Horn and Schunck proposed the classical variational optical flow model based on the brightness constancy assumption. This constant-brightness constraint, however, is too idealized to hold in practice, so the traditional H-S optical flow method is very sensitive to illumination changes and its optical flow is poorly robust.
To improve robustness to illumination changes, the computer vision community has made many improvements on the classical variational optical flow method, for example preprocessing with structure-texture decomposition, which splits the image into a structural part and a texture part containing fine-scale detail and then uses the texture part instead of the original grayscale image for the subsequent optical flow computation. Following the same structure-texture idea, the feature optical flow method decomposes image information into target feature points and background, estimates the motion of a feature point from the change in its position across adjacent frames of the image sequence, and thereby estimates river motion. The feature optical flow pipeline consists of feature extraction and tracking. Commonly used feature extraction algorithms include SIFT (Scale-Invariant Feature Transform), SURF (Speeded-Up Robust Features), FAST (Features from Accelerated Segment Test), and ORB (Oriented FAST and Rotated BRIEF, i.e., Binary Robust Independent Elementary Features). SIFT and SURF achieve scale and rotation invariance by constructing a scale space and computing a dominant feature orientation, but they are sensitive to illumination changes and slow to compute. FAST judges feature points from the grayscale structure of a candidate pixel and its surrounding pixels; the algorithm is simple and fast. ORB improves on FAST by adding an image pyramid, giving the algorithm scale invariance and further increasing its speed. Both FAST and ORB detect feature points with a single fixed threshold set from manual experience; when lighting conditions on the river surface change and the surface shows non-uniform illumination or local glare, the threshold must be adjusted by hand to guarantee that enough feature points are extracted for the optical flow computation.
Summary of the Invention
Purpose of the invention: the present invention aims to provide a river water surface optical flow estimation method based on an illumination-adaptive ORB operator, which uses global KSW entropy and local grayscale contrast to obtain a global adaptive threshold and a local adaptive threshold, respectively, thereby achieving illumination adaptation and enabling stable extraction of river surface feature points for surface optical flow estimation under complex outdoor lighting conditions.
Technical solution: the river water surface optical flow estimation method based on an illumination-adaptive ORB operator according to the present invention comprises the following steps:
(1) Convert two adjacent RGB river surface video frames to grayscale and crop an ROI image from the river surface region;
(2) Downsample the two ROI images by a scale factor s to build an image pyramid, giving the method scale invariance;
(3) For the first-frame image pyramid, compute the illumination-adaptive ORB threshold T layer by layer and pixel by pixel, and extract M feature points from the first frame with the ORB feature extraction method;
(4) Rank the M feature points by quality with the Harris scoring algorithm, remove feature points whose Harris corner response is below 0, and obtain N high-quality feature points;
(5) Estimate the optical flow motion vectors with the sparse LK optical flow method (an end-to-end sketch of the five steps follows this list).
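For illustration only (not part of the patented method), the five steps can be strung together as the following minimal Python sketch; OpenCV is assumed for image handling and LK tracking, and `build_pyramid`, `adaptive_orb_keypoints`, and `harris_top_n` are placeholder names for the procedures sketched in the detailed description below:

```python
import cv2

def estimate_surface_flow(frame1_bgr, frame2_bgr, roi, s=1.2, levels=8, M=3000, N=500):
    """Steps (1)-(5): grayscale + ROI crop, pyramid, adaptive ORB, Harris ranking, sparse LK."""
    x, y, w, h = roi                                                  # ROI as (x, y, width, height)
    g1 = cv2.cvtColor(frame1_bgr, cv2.COLOR_BGR2GRAY)[y:y+h, x:x+w]   # step (1)
    g2 = cv2.cvtColor(frame2_bgr, cv2.COLOR_BGR2GRAY)[y:y+h, x:x+w]
    pyr1 = build_pyramid(g1, s, levels)                               # step (2)
    pts1 = adaptive_orb_keypoints(pyr1, max_points=M)                 # step (3)
    pts1 = harris_top_n(g1, pts1, n=N)                                # step (4)
    pts2, status, _ = cv2.calcOpticalFlowPyrLK(g1, g2, pts1, None)    # step (5)
    ok = status.ravel() == 1
    return pts1[ok], pts2[ok]                                         # matched point pairs
```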
Further, in step (3), the illumination-adaptive ORB threshold T is
T = α·T1 + β·T2
where α and β are weight coefficients, T1 is the global adaptive threshold computed layer by layer over the first-frame image pyramid, and T2 is the local adaptive threshold computed pixel by pixel.
Further, the global adaptive threshold T1 is computed with the KSW entropy method, as follows:
Let the gray-level range of the pyramid image's gray-level histogram be [0, L]. Normalize the frequencies of all gray levels, and split the range at a candidate gray value t into L1 = [0, t] and L2 = [t+1, L]. The sum S of the entropies of L1 and L2 is
S = H(L1) + H(L2)
Then the global adaptive threshold T1 of the current pyramid layer is
T1 = |tmax − tmin|
where tmax is the gray value at which the entropy sum S is maximal, and tmin is the gray value at which the entropy sum is minimal.
Further, the entropies of L1 and L2 are H(L1) and H(L2), respectively:
H(L1) = −Σi=0..t (pi/Pt)·ln(pi/Pt),  H(L2) = −Σi=t+1..L (pi/(1−Pt))·ln(pi/(1−Pt))
where pi is the normalized frequency of gray level i, Pt = Σi=0..t pi is the sum of the frequencies of all gray levels in L1, and 1 − Pt is the sum of the frequencies of all gray levels in L2.
Further, the local adaptive threshold T2 is computed as follows:
After removing an odd-numbered-pixel-wide border of edge pixels from each pyramid layer, draw a circle of radius m pixels centered on the current pixel P, take the gray values of the n pixels on the circle, and discard the maximum and minimum to reject outliers; averaging the remaining pixels gives the local adaptive threshold T2 of the current pixel:
T2 = (Σi=1..n Ii − Imax − Imin) / (n − 2)
where Ii is the gray value of the pixel numbered i on the circle, Imax and Imin are the maximum and minimum gray values among the n pixels, and m and n are positive integers.
Preferably, m is 3 and n is 16.
Preferably, in step (3), the FAST quick-test scheme is used during feature point judgment to speed up detection.
Further, the FAST quick-test scheme is as follows:
Compute the absolute differences between the gray value of the current pixel P and the gray values of the 4 circle pixels where the horizontal and vertical lines through P cross the circle. If at least 3 of these absolute differences exceed the illumination-adaptive ORB threshold T, continue testing the other pixels on the circle; otherwise discard the pixel directly. The mark value Pl of the current pixel P is
Pl = 1, if the number of circle pixels i with |Ii − I0| > T reaches the count required by FAST (typically 9 or 12 of the 16); Pl = 0, otherwise
If Pl equals 1, the pixel is kept, recorded as a feature point, and marked 1; otherwise the pixel is discarded and marked 0. I0 is the gray value of the current pixel P and Ii is the gray value of the i-th pixel on the circle.
Further, in step (5), the sparse LK optical flow method estimates the optical flow motion vectors as follows:
Track the high-quality feature points of the first frame with the sparse LK optical flow method, determine the coordinates to which they move in the second frame, and compute the optical flow motion vectors of the high-quality feature points across the two frames.
Further, the optical flow motion vector of a high-quality feature point is:
vx = (x2 − x1)/Δt,  vy = (y2 − y1)/Δt
where vx is the horizontal pixel optical flow component of the high-quality feature point, vy is its vertical pixel optical flow component, Δt is the time interval between the two adjacent frames, x1 and y1 are the horizontal and vertical pixel coordinates of the high-quality feature point in the first frame, and x2 and y2 are its horizontal and vertical pixel coordinates in the second frame.
Beneficial effects: compared with the prior art, the present invention has the following notable advantages. The river water surface optical flow estimation method with the illumination-adaptive ORB algorithm needs no manual parameter tuning; it computes an adaptive threshold for every pixel of the ROI image from the global KSW entropy and the local grayscale contrast, so that when illumination on the river surface varies non-uniformly it can still stably detect and extract a sufficient number of river surface feature points for subsequent sparse LK optical flow estimation, with the optical flow estimation error kept within a bounded range. It solves the problem that, under complex outdoor river lighting, illumination changes make the river surface unevenly lit and a single-fixed-threshold ORB feature extraction algorithm requires manual parameter adjustment to obtain feature points from the ROI image.
Description of the Drawings
Figure 1 is the flow chart of the present invention;
Figure 2 shows the current pixel and its neighborhood-circle pixel information;
Figure 3 is the first of two adjacent video frames;
Figure 4 is the second of two adjacent video frames;
Figure 5 is a schematic diagram of the ROI region of the first video frame;
Figure 6 shows the feature points detected in the first-frame ROI image by the illumination-adaptive ORB algorithm;
Figure 7 shows the feature points detected in the first-frame ROI image by the fixed-threshold ORB algorithm;
Figure 8 shows the feature points detected in the first-frame ROI image by the FAST algorithm;
Figure 9 is the optical flow field computed with the illumination-adaptive ORB algorithm;
Figure 10 is the optical flow field computed with the fixed-threshold ORB algorithm;
Figure 11 is the optical flow field computed with the FAST algorithm;
Figure 12 shows the coordinates of the manually marked feature points in the first frame;
Figure 13 marks the second-frame positions of the feature points computed by the LK optical flow method;
Figure 14 shows the coordinates to which the manually marked feature points move in the second frame.
Detailed Description
The present invention is further described below with reference to the accompanying drawings.
As shown in Figure 1, the invention is implemented as follows:
Step 1: Take two adjacent RGB frames from a surveillance video of an outdoor river scene, as shown in Figures 3 and 4. Convert the two RGB frames to grayscale and crop the ROI image from the river surface region. The lighting in the ROI is complex: as shown in Figure 5, Region 1 is the river-surface ROI used for computation and lies entirely in the shadow cast on the water by trees on the bank, while Region 2, contained within Region 1, shows glare on the water caused by direct sunlight.
Step 2: Downsample the two ROI images proportionally. Each original high-resolution ROI frame is used as the first layer; to minimize information loss, downsampling with scale factor s = 1.2 builds an 8-layer pyramid, making the method scale-invariant.
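A minimal sketch of this pyramid construction; the patent fixes only s = 1.2 and 8 layers, so the use of OpenCV bilinear resizing here is an assumption:

```python
import cv2

def build_pyramid(img, s=1.2, levels=8):
    """Layer 0 is the original ROI; each further layer is downsampled by factor s."""
    pyr = [img]
    for _ in range(levels - 1):
        h, w = pyr[-1].shape[:2]
        pyr.append(cv2.resize(pyr[-1], (max(1, int(w / s)), max(1, int(h / s))),
                              interpolation=cv2.INTER_LINEAR))
    return pyr
```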
Step 3: Traverse every layer of the pyramid and every pixel, compute the ORB adaptive threshold T for each pixel, and extract feature points with the ORB feature extraction method using this threshold; the maximum number of feature points is set to M = 3000. To speed up detection, the FAST quick-test scheme is used when judging feature points.
The ORB adaptive threshold T is obtained by multiplying the global adaptive threshold T1 and the local adaptive threshold T2 by their respective weight coefficients and summing, as in Equation (1):
T = α·T1 + β·T2 (1)
where α and β are weight coefficients; extensive experiments show that feature point extraction works best with α = 0.2 and β = 0.2.
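For illustration, the per-layer, per-pixel traversal of Step 3 might look like the following sketch; `global_threshold_T1`, `local_threshold_T2`, and `is_feature_point` are the helpers sketched under Steps 3.1-3.3 below, and the early return on reaching `max_points` is an assumed implementation choice:

```python
import numpy as np

def adaptive_orb_keypoints(pyramid, alpha=0.2, beta=0.2, max_points=3000, border=7, s=1.2):
    """Step 3: per-layer global threshold T1, per-pixel local threshold T2, FAST-style test."""
    pts = []
    for level, img in enumerate(pyramid):
        t1 = global_threshold_T1(img)                # Step 3.1 (one value per layer)
        h, w = img.shape
        for y in range(border, h - border):          # skip the 7-pixel border
            for x in range(border, w - border):
                t = alpha * t1 + beta * local_threshold_T2(img, x, y)   # Eq. (1)
                if is_feature_point(img, x, y, t):   # Step 3.3 quick test
                    pts.append((x * s ** level, y * s ** level))        # back to base scale
                    if len(pts) >= max_points:
                        return np.float32(pts).reshape(-1, 1, 2)
    return np.float32(pts).reshape(-1, 1, 2)
```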
Step 3.1: Compute one global adaptive threshold T1 for each layer of the first-frame ROI image pyramid. T1 is computed from the KSW entropy of the current pyramid layer, as follows:
First compute the one-dimensional gray-level histogram of the current pyramid layer, with L + 1 gray levels from 0 to L, and normalize the frequencies of all gray levels. Then preset a gray value t, which splits the gray range into two parts, L1 = [0, t] and L2 = [t+1, L]. The frequencies of the gray levels in L1 are {p1, p2, p3, …, pt} and those in L2 are {pt+1, pt+2, pt+3, …, pL}. Sum the frequencies of all gray levels in L1 and denote the sum Pt. The entropies of the two gray ranges L1 and L2 are then H(L1) and H(L2), computed as in Equations (2) and (3):
H(L1) = −Σi=0..t (pi/Pt)·ln(pi/Pt) (2)
H(L2) = −Σi=t+1..L (pi/(1−Pt))·ln(pi/(1−Pt)) (3)
Sum the two entropies and denote the sum S, computed as in Equation (4):
S = H(L1) + H(L2) (4)
Then traverse every gray value t in the gray-level range 0 to L, compute the corresponding entropy sum S from the formulas above, and record the gray value at which the entropy sum is maximal as tmax and the gray value at which it is minimal as tmin.
The final global adaptive threshold T1 then equals the absolute value of the difference between the two, as in Equation (5):
T1 = |tmax − tmin| (5)
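A minimal sketch of Step 3.1 and Equations (2)-(5), assuming 8-bit grayscale input and the natural logarithm; skipping empty histogram bins is an implementation choice, not something the patent specifies:

```python
import numpy as np

def global_threshold_T1(img):
    """KSW-entropy global threshold: T1 = |t_max - t_min| over the entropy sum S(t)."""
    hist = np.bincount(img.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()                      # normalized gray-level frequencies
    eps = 1e-12                                # guard against log(0)
    s_values = np.full(256, -np.inf)
    for t in range(1, 255):
        pt = p[:t + 1].sum()                   # P_t, the mass of L1 = [0, t]
        if pt < eps or pt > 1 - eps:
            continue
        q1 = p[:t + 1] / pt                    # distribution within L1
        q2 = p[t + 1:] / (1 - pt)              # distribution within L2 = [t+1, L]
        h1 = -np.sum(q1[q1 > eps] * np.log(q1[q1 > eps]))   # Eq. (2)
        h2 = -np.sum(q2[q2 > eps] * np.log(q2[q2 > eps]))   # Eq. (3)
        s_values[t] = h1 + h2                  # Eq. (4)
    valid = np.flatnonzero(np.isfinite(s_values))
    t_max = int(valid[np.argmax(s_values[valid])])
    t_min = int(valid[np.argmin(s_values[valid])])
    return abs(t_max - t_min)                  # Eq. (5)
```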
Step 3.2: After computing the global adaptive threshold T1 of each layer of the first-frame pyramid, remove a 7-pixel-wide border from each layer and traverse all remaining pixels, computing a local adaptive threshold T2 for each pixel. Each time a T2 is obtained, the improved ORB adaptive threshold T is used to judge whether the current pixel is a feature point. T2 is computed as follows:
As shown in Figure 2, with the current pixel as the center and 3 pixels as the radius, take the gray values of the 16 pixels on the circle and discard the maximum and minimum to reject outliers; averaging the remaining 14 pixels gives the local adaptive threshold T2 of the current pixel, as in Equation (6):
T2 = (Σi=1..n Ii − Imax − Imin) / (n − 2) (6)
where Ii is the gray value of the i-th pixel on the circle around the current pixel, Imax and Imin are the maximum and minimum gray values among the 16 circle pixels, and n is fixed at 16.
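A sketch of Equation (6) using the standard radius-3 FAST/ORB sampling circle; the 16 offsets below are the usual Bresenham circle positions, which the patent implies through Figure 2 but does not list explicitly:

```python
import numpy as np

# The 16 (dx, dy) offsets of the standard radius-3 FAST/ORB sampling circle.
CIRCLE = [( 0,  3), ( 1,  3), ( 2,  2), ( 3,  1), ( 3,  0), ( 3, -1), ( 2, -2), ( 1, -3),
          ( 0, -3), (-1, -3), (-2, -2), (-3, -1), (-3,  0), (-3,  1), (-2,  2), (-1,  3)]

def local_threshold_T2(img, x, y):
    """T2 at pixel (x, y): mean of the 16 circle grays after dropping the max and min."""
    ring = np.array([int(img[y + dy, x + dx]) for dx, dy in CIRCLE])
    return (ring.sum() - ring.max() - ring.min()) / (len(ring) - 2)
```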
Step 3.3: Since ORB is an improvement on FAST and keeps FAST's feature point decision rule, the FAST quick-test scheme is used when judging whether the current pixel is a feature point, to speed up computation. The rule is implemented as follows: first test whether the absolute differences between the gray values of the circle pixels at positions 1, 5, 9, and 13 (see Figure 2) and the gray value of the current pixel exceed the previously computed adaptive threshold T. If at least 3 of the 4 pixels satisfy the condition, the remaining pixels on the circle are tested further; otherwise the pixel is discarded. The rule is given in Equation (7):
Pl = 1, if the number of circle pixels i with |Ii − I0| > T reaches the count required by FAST; Pl = 0, otherwise (7)
where P is the current image pixel and Pl is its mark value: if Pl equals 1, the pixel is kept and recorded as a feature point; otherwise it is discarded. I0 is the gray value of the current pixel, Ii is the gray value of the i-th pixel on the circle, and T is the improved ORB adaptive threshold computed above.
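A sketch of the quick test and Equation (7), reusing `CIRCLE` from the previous sketch; positions 1, 5, 9, and 13 are taken to be the four compass points of the circle, and the full-circle count `n_th = 9` is an assumed FAST-9-style value, since the exact count is not recoverable from the source:

```python
def is_feature_point(img, x, y, T, n_th=9):
    """FAST quick test: check circle positions 1, 5, 9, 13 first, then the full circle."""
    i0 = int(img[y, x])
    # Pretest on the 4 compass points (indices 0, 4, 8, 12 of CIRCLE).
    compass = [CIRCLE[k] for k in (0, 4, 8, 12)]
    hits = sum(abs(int(img[y + dy, x + dx]) - i0) > T for dx, dy in compass)
    if hits < 3:
        return False                     # discard without testing the full circle
    # Full test: count qualifying pixels on the whole 16-pixel circle.
    full = sum(abs(int(img[y + dy, x + dx]) - i0) > T for dx, dy in CIRCLE)
    return full >= n_th
```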
Step 4: After detecting M = 3000 feature points, use the Harris scoring algorithm to compute the Harris corner response of each feature point and rank all feature points by the magnitude of their corner response. Corners give a large positive response, edges a large negative response, and flat regions a small response, so feature points with larger Harris responses are treated as high-quality points. Feature points with a Harris response below 0 are first marked as erroneous and removed; the N = 500 high-quality feature points with the largest Harris responses are then retained. Their distribution is shown in Figure 6, where circles mark the positions of the feature points.
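A sketch of Step 4; the Harris block size, Sobel aperture, and k below are assumed OpenCV-style defaults, as the patent specifies only the sign test and N = 500:

```python
import cv2
import numpy as np

def harris_top_n(gray, pts, n=500, block=3, ksize=3, k=0.04):
    """Rank candidate points by Harris response, drop responses below 0, keep the best n."""
    resp_map = cv2.cornerHarris(np.float32(gray), block, ksize, k)
    resp = np.array([resp_map[int(y), int(x)] for x, y in pts.reshape(-1, 2)])
    keep = resp >= 0                          # responses below 0 indicate edges
    order = np.argsort(resp[keep])[::-1][:n]  # strongest responses first
    return pts[keep][order]
```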
Comparison algorithm 1: the distribution of feature points extracted by the fixed-threshold ORB feature point detection algorithm is shown in Figure 7, where circles mark feature point positions and boxes mark points that this method missed but the method of the present invention successfully extracted.
Comparison algorithm 2: the distribution of feature points extracted by the FAST feature point detection algorithm is shown in Figure 8, where circles mark feature point positions and boxes mark points that this method missed but the method of the present invention successfully extracted.
As Figure 5 shows, the algorithm of the present invention completely extracts the continuously moving floating objects clearly observable in the ROI, while the comparison algorithms miss some: the box-marked feature points are obvious floating objects on the water surface that should have been extracted as feature points for optical flow estimation, but both comparison algorithms failed to extract them.
To verify the stability of the algorithm, the three methods were used to detect feature points in the 750 frames of a 30 s video containing the frame of Figure 3, followed by LK optical flow tracking. The numbers of extracted feature points, erroneous feature points, and successfully tracked feature points were averaged over the 750 frames; the results are shown in Table 1.
Table 1. Performance comparison of the three detection algorithms
As Table 1 shows, the illumination-adaptive ORB feature point detection method of the present invention extracts a sufficient number of feature points for the computation. The FAST method extracts 7 erroneous feature points. The feature points of all three methods show good tracking success rates.
Step 5: After obtaining the N = 500 high-quality feature points of the first frame, input the two pyramid frames and the first-frame high-quality feature points, and use the sparse LK optical flow method to track the first-frame points and estimate their positions in the second image. For each tracked feature point, compute its optical flow motion vector between the two adjacent images from its pixel coordinates Pi(x1, y1) in the first frame and the pixel coordinates Pi(x2, y2) to which it moves in the second frame, where i is the index of the feature point, x1 and y1 are its horizontal and vertical pixel coordinates in the first frame, and x2 and y2 are its horizontal and vertical pixel coordinates in the second frame. The optical flow motion vector of a feature point is computed from its corresponding image coordinates in the two frames by Equations (8) and (9):
vx = (x2 − x1)/Δt (8)
vy = (y2 − y1)/Δt (9)
In Equations (8) and (9), vx is the horizontal pixel optical flow component of the feature point, vy is the vertical pixel optical flow component, and Δt is the time interval between the two adjacent frames, taken by the invention as a fixed value determined by the video frame rate.
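A sketch of Step 5 using OpenCV's pyramidal LK tracker; the window size and pyramid depth are assumed defaults, and `dt` is the frame interval (for example 1/25 s for 25 fps video):

```python
import cv2

def lk_flow_vectors(gray1, gray2, pts1, dt):
    """Track pts1 (N x 1 x 2 float32) from gray1 to gray2 and return (vx, vy) per point."""
    pts2, status, _ = cv2.calcOpticalFlowPyrLK(gray1, gray2, pts1, None,
                                               winSize=(21, 21), maxLevel=3)
    ok = status.ravel() == 1
    d = (pts2[ok] - pts1[ok]).reshape(-1, 2)   # pixel displacements (x2-x1, y2-y1)
    return d[:, 0] / dt, d[:, 1] / dt          # Eq. (8) and Eq. (9), pixels per second
```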
Figures 9-11 show the optical flow computed by LK tracking on the feature points extracted by the adaptive ORB, fixed-threshold ORB, and FAST algorithms: the result of the method of the present invention is shown in Figure 9, the fixed-threshold ORB result in Figure 10, and the FAST result in Figure 11. Dots mark the end points of the optical flow vectors and lines mark the motion trajectories of the feature points across the two adjacent frames. For visualization, the optical flow magnitudes in the images are scaled up 11 times. As Figures 9, 10, and 11 show, all three algorithms compute a certain number of optical flow vectors for the feature points extracted in the detection stage; by the principle that optical flow magnitudes at the same y pixel coordinate should be essentially consistent, the optical flow computed by the method of the present invention agrees comparatively well with the actual situation.
Five feature points were selected from the 500 to compute optical flow vectors for performance evaluation. Feature points a, b, c, and d are 4 of the 500 high-quality feature points; feature point e was successfully extracted by the method of the present invention but missed by both comparison algorithms. Since there is no ground truth for the motion of feature points in river surface images, the coordinates of the feature points in the two adjacent frames were marked manually, following the principle that the neighborhood of a feature point remains essentially consistent across adjacent frames, and the manually marked values were used as ground truth for the evaluation. The manually marked positions of the 5 feature points in the first frame are shown in Figure 12; the ground-truth coordinates to which the manually marked feature points move in the second frame, marked under the same consistency-of-motion principle, are shown in Figure 13. The second-frame positions of the feature points computed by the LK optical flow method are marked in Figure 14. In Region 3 of Figures 12, 13, and 14, the clearly black-marked pixel on the left is the coordinate position of feature point a and the one on the right is that of feature point b; the clearly black-marked pixel in Region 4 is the position of feature point c; the clearly black-marked pixel in Region 5 is the position of feature point d; and the clearly white-marked pixel in Region 6 is the position of feature point e. Table 2 lists the first-frame ROI coordinates of the 5 feature points as computed by the three algorithms, the manually estimated first-frame ROI coordinates, the second-frame ROI coordinates to which the feature points move as computed by the LK optical flow method from the first-frame coordinates, and the manually estimated second-frame ROI coordinates.
Table 2. Coordinates of the experimental feature points in the two frames for the outdoor river scene
As Table 2 shows, the 5 feature points extracted by the adaptive ORB algorithm are all effective for optical flow motion estimation; the one-directional estimation errors of the position coordinates of the 5 experimental feature points are all within 1 pixel, with a maximum of 0.56 pixels.
The optical flow motion vectors of the 5 experimental feature points extracted by the illumination-adaptive ORB algorithm, and the manually estimated optical flow motion vectors, were computed with Equations (8) and (9), and the errors between the two were obtained; the results are shown in Table 3.
Table 3. Optical flow of the experimental feature points across the two adjacent frames for the outdoor river scene
As Table 3 shows, the vx optical flow components of the 5 experimental feature points computed by the adaptive ORB algorithm differ little from one another; their errors relative to the manually estimated ground-truth optical flow vectors are all within 6 pixels per second, and all exceed the manually marked values, consistent with the rule that similar floating objects at the same cross-sectional offset maintain essentially the same velocity. The experimental data fully demonstrate the feasibility of the river water surface optical flow estimation method based on the illumination-adaptive ORB operator of the present invention.
Claims (10)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311520874.5A CN117495918A (en) | 2023-11-15 | 2023-11-15 | River water surface optical flow estimation method based on illumination self-adaptive ORB operator |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311520874.5A CN117495918A (en) | 2023-11-15 | 2023-11-15 | River water surface optical flow estimation method based on illumination self-adaptive ORB operator |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117495918A true CN117495918A (en) | 2024-02-02 |
Family
ID=89670548
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311520874.5A Pending CN117495918A (en) | 2023-11-15 | 2023-11-15 | River water surface optical flow estimation method based on illumination self-adaptive ORB operator |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117495918A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117714691A (en) * | 2024-02-05 | 2024-03-15 | 佳木斯大学 | An adaptive transmission system for AR augmented reality piano teaching |
CN117714691B (en) * | 2024-02-05 | 2024-04-12 | 佳木斯大学 | An adaptive transmission system for AR augmented reality piano teaching |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109784333B (en) | Three-dimensional target detection method and system based on point cloud weighted channel characteristics | |
CN101739551B (en) | moving object identification method and system | |
CN107145874B (en) | Ship target detection and identification method in complex background SAR image | |
CN106204572B (en) | Depth estimation method of road target based on scene depth mapping | |
CN109325935B (en) | A transmission line detection method based on UAV images | |
CN108805904B (en) | A moving ship detection and tracking method based on satellite image sequence | |
CN105182350B (en) | A kind of multibeam sonar object detection method of application signature tracking | |
CN111369597B (en) | A Particle Filter Target Tracking Method Based on Multi-feature Fusion | |
CN110728697A (en) | Infrared dim target detection tracking method based on convolutional neural network | |
CN103996049B (en) | Ship overlength and overwidth detection method based on video image | |
CN110298216A (en) | Vehicle deviation warning method based on lane line gradient image adaptive threshold fuzziness | |
CN107301649B (en) | An Algorithm for Shoreline Detection in Region Merged SAR Images Based on Superpixels | |
CN107742306B (en) | Moving target tracking algorithm in intelligent vision | |
CN104463877A (en) | Shoreline registration method based on information of radar image and electronic sea chart | |
Shen et al. | Adaptive pedestrian tracking via patch-based features and spatial–temporal similarity measurement | |
CN104036526A (en) | Gray target tracking method based on self-adaptive window | |
CN103632376A (en) | Method for suppressing partial occlusion of vehicles by aid of double-level frames | |
CN117218546A (en) | River channel surface water flow speed detection method based on video stream image processing | |
CN117495918A (en) | River water surface optical flow estimation method based on illumination self-adaptive ORB operator | |
CN116934808A (en) | River surface flow velocity measurement method based on water surface floater target tracking | |
CN109284663A (en) | A method for detection of obstacles on the sea surface based on normal and uniform mixed distribution models | |
CN105809673A (en) | SURF (Speeded-Up Robust Features) algorithm and maximal similarity region merging based video foreground segmentation method | |
CN116466104A (en) | Video current measurement method and device based on LK tracking and gray statistics feature method | |
CN103914840B (en) | A kind of human body contour outline extraction method for non-simple background | |
CN103578112B (en) | A kind of aerator working state detecting method based on video image characteristic |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||