WO2020253618A1 - Method and device for detecting video jitter - Google Patents
Method and device for detecting video jitter
- Publication number
- WO2020253618A1 (PCT/CN2020/095667, CN2020095667W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- frame
- motion vector
- feature point
- video
- sequence
- Prior art date
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/14—Picture signal circuitry for video frequency region
- H04N5/144—Movement detection
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/14—Picture signal circuitry for video frequency region
- H04N5/147—Scene change detection
Description
- The present invention relates to the technical field of computer vision, and in particular to a method and device for detecting video jitter.
- Video jitter detection is the basis of video post-adjustment and processing.
- Researchers have conducted extensive research based on video analysis in the fields of video processing, video image stabilization, and computer vision.
- Among existing video shake detection methods, accuracy is limited: some are not sensitive to videos shot under large lens displacement and strong shaking within a short time, some cannot detect rotational motion, and some are not suitable for scenes where the camera moves slowly.
- the following commonly used video jitter detection methods have more or less defects:
- Block matching is currently the most commonly used algorithm in video image stabilization systems. This method divides the current frame into blocks, assumes every pixel in a block shares the same motion vector, and then searches a specific range of the reference frame for the best match of each block, thereby estimating the global motion vector of the video sequence.
- Because the block matching method divides the frame into blocks and estimates the global motion vector from the per-block motion vectors, it performs poorly when detecting video jitter in certain scenes, for example, a picture divided into 4 cells where the objects in 3 cells are stationary and the object in 1 cell is moving. In addition, the block matching method usually requires Kalman filtering to process the calculated motion vectors, which incurs a large computational cost, has poor real-time performance, and cannot adapt to scenes with large lens displacement and strong jitter in a short time.
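The block-matching idea above can be sketched in a few lines of NumPy as an exhaustive SAD (sum of absolute differences) search; the function name, block size, and search radius below are illustrative assumptions, not values from the patent:

```python
import numpy as np

def block_match(ref, cur, top, left, size=16, search=8):
    """Estimate the motion vector of one block by exhaustive SAD search.

    ref/cur: 2-D grayscale frames; (top, left): block position in `cur`;
    `search`: half-width of the search window in `ref`.
    Returns the (dy, dx) offset of the best-matching block in `ref`."""
    block = cur[top:top + size, left:left + size].astype(np.int32)
    best_sad, best_vec = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + size > ref.shape[0] or x + size > ref.shape[1]:
                continue
            cand = ref[y:y + size, x:x + size].astype(np.int32)
            sad = int(np.abs(block - cand).sum())  # sum of absolute differences
            if best_sad is None or sad < best_sad:
                best_sad, best_vec = sad, (dy, dx)
    return best_vec
```

A full system would repeat this for every block and fuse the per-block vectors into a global motion vector, which is where the per-block independence problem described above appears.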
- The gray projection method is based on the principle that the gray-level distribution is consistent in overlapping similar areas of an image; it uses the local gray information of adjacent video frames to obtain the motion vector relationship.
- The algorithm mainly consists of correlation calculations on the gray-level projections in two directions over different areas.
- The gray projection method is effective only for scenes with purely translational jitter and cannot estimate a rotational motion vector.
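The gray projection idea can be sketched as follows: project each frame's gray levels onto rows and columns, then find the 1-D offsets that best align the two frames' projections. The function names and the squared-error matching criterion are illustrative assumptions, not the patent's formulation:

```python
import numpy as np

def projection_shift(prev, cur, max_shift=10):
    """Estimate the translational jitter (dy, dx) between two grayscale
    frames by aligning their row and column gray-level projections."""
    def shift_1d(a, b):
        # find the offset s with b[i - s] ~= a[i] that minimizes squared error
        best_err, best_s = None, 0
        for s in range(-max_shift, max_shift + 1):
            lo, hi = max(0, s), min(len(a), len(a) + s)
            d = a[lo:hi] - b[lo - s:hi - s]
            err = float(np.mean(d * d))
            if best_err is None or err < best_err:
                best_err, best_s = err, s
        return best_s

    rows_p, cols_p = prev.mean(axis=1), prev.mean(axis=0)  # projections of prev
    rows_c, cols_c = cur.mean(axis=1), cur.mean(axis=0)    # projections of cur
    return shift_1d(rows_p, rows_c), shift_1d(cols_p, cols_c)
```

Because only two 1-D projections are compared, rotation leaves no clean signature in them, which is exactly the limitation noted above.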
- The embodiments of the present invention provide a method and device for detecting video jitter to overcome problems of the prior art such as the low accuracy of existing detection algorithms and their insensitivity to videos shot under conditions of large lens displacement and strong jitter in a short time.
- the technical solution adopted by the present invention is:
- A method for detecting video jitter includes the following steps: performing framing processing on the video to be detected to obtain a frame sequence; performing feature point detection on the frame sequence frame by frame, acquiring the feature points of each frame, and generating a frame feature point sequence matrix; operating on the frame feature point sequence matrix based on an optical flow tracking algorithm to obtain the motion vector of each frame; obtaining the feature value of the video to be detected according to the motion vector of each frame; and
- using the feature value of the video to be detected as the input signal of a detection model to obtain an output signal by calculation, and determining whether the video to be detected shakes according to the output signal.
- the method further includes the step of preprocessing the frame sequence:
- the performing feature point detection frame by frame on the frame sequence is performing feature point detection frame by frame on the preprocessed frame sequence.
- The performing feature point detection on the frame sequence frame by frame and acquiring the feature points of each frame includes:
- using a feature point detection algorithm based on the fusion of FAST features and SURF features, feature point detection is performed on the frame sequence frame by frame, and the feature points of each frame are obtained.
- The operating on the frame feature point sequence matrix based on the optical flow tracking algorithm to obtain the motion vector of each frame includes: performing optical flow tracking calculation on the frame feature point sequence matrix of each frame to obtain the initial motion vector of each frame; obtaining the corresponding cumulative motion vector according to the initial motion vector; smoothing the cumulative motion vector to obtain a smoothed motion vector; and
- adjusting the initial motion vector of each frame using the cumulative motion vector and the smoothed motion vector to obtain the motion vector of each frame.
- The obtaining the feature value of the video to be detected according to the motion vector of each frame includes: merging the motion vectors of all frames into a matrix, calculating the unbiased standard deviation of each element in the matrix, and performing weighted fusion on the unbiased standard deviations to obtain a weighted value;
- the unbiased standard deviation of each element and the weighted value are used as the feature value of the video to be detected.
- a device for detecting video jitter includes:
- the framing processing module is used to perform framing processing on the video to be detected to obtain a frame sequence
- the feature point detection module is used to perform feature point detection on the frame sequence frame by frame, obtain feature points of each frame, and generate a frame feature point sequence matrix;
- a vector calculation module configured to perform an operation on the frame feature point sequence matrix based on an optical flow tracking algorithm to obtain a motion vector of each frame;
- the feature value extraction module is configured to obtain the feature value of the video to be detected according to the motion vector of each frame;
- the jitter detection module is configured to use the feature value of the video to be detected as the input signal of the detection model to obtain an output signal through calculation, and determine whether the video to be detected jitters according to the output signal.
- the device further includes:
- the data preprocessing module includes:
- a grayscale processing unit configured to perform grayscale processing on the sub-frame sequence to obtain a grayscale frame sequence
- a denoising processing unit configured to perform denoising processing on the grayscale frame sequence
- the feature point detection module is used to perform feature point detection on the preprocessed frame sequence frame by frame.
- the feature point detection module is also used for:
- feature point detection is performed on the frame sequence frame by frame, and feature points of each frame are obtained.
- the vector calculation module includes:
- the optical flow tracking unit is configured to perform optical flow tracking calculation on the frame feature point sequence matrix of each frame to obtain the initial motion vector of each frame;
- a cumulative calculation unit configured to obtain a corresponding cumulative motion vector according to the initial motion vector
- a smoothing processing unit configured to perform smoothing processing on the cumulative motion vector to obtain a smoothed motion vector
- the vector adjustment unit is configured to use the accumulated motion vector and the smoothed motion vector to adjust the initial motion vector of each frame to obtain the motion vector of each frame.
- the feature value extraction module includes:
- a matrix conversion unit configured to merge and convert the motion vectors of all frames into a matrix
- a standard deviation calculation unit for calculating the unbiased standard deviation of each element in the matrix
- the weighted fusion unit is used to perform weighted fusion processing on the unbiased standard deviation of each element to obtain a weighted value.
- The video jitter detection method and device obtain the motion vector of each frame from the frame feature point sequence matrix based on the optical flow tracking algorithm, which effectively solves the problem of tracking failure caused by excessive changes between two adjacent frames;
- when camera shake detection is performed under the condition of slow lens movement, the method has good tolerance and adaptability;
- when camera shake detection is performed under conditions such as sudden large displacement, strong shaking, and large rotation, it has good sensitivity and robustness;
- the video jitter detection method and device use a feature point detection algorithm based on the fusion of FAST features and SURF features, that is, the feature point extraction algorithm is optimized, which takes into account the global features of the image while fully retaining its local features, has low computational overhead, and is robust to blurred images and poor lighting conditions, further improving the real-time performance and accuracy of detection;
- the video jitter detection method and device provided by the embodiments of the present invention extract features of at least four dimensions from the video to be detected and use an SVM model as the detection model, so that the method has better generalization, which further improves the accuracy of detection.
- Fig. 1 is a flow chart showing a method for detecting video jitter according to an exemplary embodiment
- Fig. 2 is a flow chart showing preprocessing the frame sequence according to an exemplary embodiment
- Fig. 3 is a flow chart of obtaining a motion vector of each frame by calculating a frame feature point sequence matrix based on an optical flow tracking algorithm according to an exemplary embodiment
- Fig. 4 is a flow chart of obtaining the feature value of the video to be detected according to the motion vector of each frame according to an exemplary embodiment
- Fig. 5 is a schematic structural diagram of a device for detecting video jitter according to an exemplary embodiment.
- Fig. 1 is a flowchart showing a method for detecting video jitter according to an exemplary embodiment. Referring to Fig. 1, the method includes the following steps:
- S1 Perform framing processing on the video to be detected to obtain a frame sequence.
- S2 Perform feature point detection on the frame sequence frame by frame, acquire feature points of each frame, and generate a frame feature point sequence matrix.
- The optical flow tracking algorithm is used to perform optical flow tracking calculation on the frame feature point sequence matrix, that is, the transformation of the feature points from the current frame to the next frame is tracked.
- Tracking the transformation of the frame feature point sequence matrix Z_i of the i-th frame to the (i+1)-th frame yields the motion vector (dx_i, dy_i, dr_i), where:
- dx_i represents the Euclidean column offset from the i-th frame to the (i+1)-th frame;
- dy_i represents the Euclidean row offset from the i-th frame to the (i+1)-th frame;
- dr_i represents the Euclidean angular offset from the i-th frame to the (i+1)-th frame.
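The per-frame offsets (dx_i, dy_i, dr_i) can be recovered from tracked feature-point pairs by a least-squares rigid (Euclidean) fit; the following NumPy sketch assumes such a fit, since the patent does not spell out this computation, and the function name is illustrative:

```python
import numpy as np

def euclidean_motion(pts_prev, pts_next):
    """Least-squares rigid (Euclidean) transform between two tracked point
    sets, yielding the per-frame offsets (dx, dy, dr).
    pts_prev/pts_next: (n, 2) arrays of matched (x, y) feature points."""
    p_mean = pts_prev.mean(axis=0)
    q_mean = pts_next.mean(axis=0)
    p, q = pts_prev - p_mean, pts_next - q_mean
    # cross-covariance terms give the optimal rotation angle
    h = p.T @ q
    dr = np.arctan2(h[0, 1] - h[1, 0], h[0, 0] + h[1, 1])
    rot = np.array([[np.cos(dr), -np.sin(dr)],
                    [np.sin(dr),  np.cos(dr)]])
    dx, dy = q_mean - rot @ p_mean  # translation remaining after rotation
    return dx, dy, dr
```

The fit is robust to the loss of individual feature points because it averages over all matched pairs.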
- the extracted feature values include at least feature values of four dimensions.
- An additional feature value dimension is included, which makes the method for detecting video jitter provided by the embodiment of the present invention more generalizable and further improves the accuracy of detection.
- S5 Use the feature value of the video to be detected as an input signal of the detection model to obtain an output signal through calculation, and determine whether the video to be detected shakes according to the output signal.
- the feature value of the video to be detected obtained in the above steps is input as an input signal into the detection model to perform calculations, to obtain an output signal, and to determine whether the video to be detected shakes according to the output signal.
- the detection model in the embodiment of the present invention is pre-trained.
- The method in the embodiment of the present invention can be used to process the sample video data in a selected training data set to obtain the feature values of the sample video data, with which
- the detection model is trained until training is completed, yielding the final detection model.
- For example, the m-th video sample in an annotated jittered-video data set is processed through the above steps to extract its feature value: first perform framing processing on the m-th video sample to obtain the frame sequence; then perform feature point detection on the frame sequence frame by frame, obtain the feature points of each frame, and generate the frame feature point sequence matrix; next, based on the optical flow tracking algorithm,
- operate on the frame feature point sequence matrix to obtain the motion vector of each frame; and finally obtain the feature value of the m-th video sample according to the motion vector of each frame.
- the detection model may be an SVM model, that is, the feature values of the video to be detected obtained through the above steps are input into the trained SVM model to obtain the output result. If the output result is 0, it means that the video to be detected does not shake. If the output result is 1, it means that the video to be detected shakes.
- the trainable SVM model is used as the video jitter judger, which can perform jitter detection for videos in different scenes, and after adopting this model, the generalization is better and the detection accuracy is higher.
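The trainable judger can be sketched with scikit-learn's `SVC`. The synthetic training features below (the cluster values, the `with_weighted` helper, and the reuse of the example weights 3, 3, 10) are illustrative assumptions, not the patent's data set:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(42)

def with_weighted(f):
    """Append the weighted fusion value (weights 3, 3, 10) as a 4th feature."""
    w = f @ np.array([3.0, 3.0, 10.0])
    return np.hstack([f, w[:, None]])

# Synthetic stand-in rows of (std_dx, std_dy, std_dr): stable clips have a
# small motion spread, jittery clips a large one.
steady = with_weighted(rng.normal(0.5, 0.2, size=(50, 3)))
shaky = with_weighted(rng.normal(5.0, 1.0, size=(50, 3)))
X = np.vstack([steady, shaky])
y = np.array([0] * 50 + [1] * 50)          # 0 = no jitter, 1 = jitter

judger = SVC(kernel="rbf").fit(X, y)       # the trainable jitter judger
preds = judger.predict(with_weighted(np.array([[0.4, 0.6, 0.5],
                                               [6.0, 4.5, 5.5]])))
```

In practice the training rows would come from the annotated video samples processed by the preceding steps rather than from synthetic clusters.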
- Fig. 2 is a flow chart showing the preprocessing of the frame sequence according to an exemplary embodiment.
- Before feature point detection, the method further includes the step of preprocessing the frame sequence:
- S101 Perform grayscale processing on the frame sequence to obtain a grayscale frame sequence.
- S102 Perform denoising processing on the grayscale frame sequence.
- the denoising method can be arbitrarily selected, and there is no restriction on this here.
- the performing feature point detection on the frame sequence frame by frame is performing feature point detection frame by frame on the preprocessed frame sequence.
- the performing feature point detection on the frame sequence frame by frame, and acquiring the feature points of each frame includes:
- feature point detection is performed on the frame sequence frame by frame, and feature points of each frame are obtained.
- The embodiment of the present invention optimizes the feature point extraction algorithm:
- a feature point detection algorithm based on the fusion of FAST features and SURF features is adopted.
- The SURF algorithm is an improvement on the SIFT algorithm.
- SIFT is a feature description method with good robustness and scale invariance; the SURF algorithm retains these advantages while addressing SIFT's drawbacks of heavy computation, high time complexity, and long running time.
- FAST feature detection is a corner detection method.
- The most prominent advantages of this algorithm are its computational efficiency and its ability to describe the global features of the image well. Therefore, using the feature point detection algorithm based on the fusion of FAST features and SURF features for feature point extraction not only takes into account the global features of the image but also fully retains its local features, has low computational overhead, and is robust to blurred images and poor lighting conditions, further improving the real-time performance and accuracy of detection.
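The FAST corner criterion itself is simple enough to show directly. This is a didactic sketch of the segment test only (the threshold and arc length are the conventional FAST-9 defaults, not values from the patent, and the usual high-speed early-exit shortcuts are omitted):

```python
import numpy as np

# Offsets (dy, dx) of the 16-pixel Bresenham circle of radius 3 used by FAST.
CIRCLE = [(0, 3), (1, 3), (2, 2), (3, 1), (3, 0), (3, -1), (2, -2), (1, -3),
          (0, -3), (-1, -3), (-2, -2), (-3, -1), (-3, 0), (-3, 1), (-2, 2), (-1, 3)]

def is_fast_corner(img, y, x, thresh=20, arc=9):
    """FAST segment test: the pixel is a corner if at least `arc` contiguous
    circle pixels are all brighter or all darker than the centre by `thresh`."""
    c = int(img[y, x])
    vals = np.array([int(img[y + dy, x + dx]) for dy, dx in CIRCLE])
    for sign in (vals > c + thresh, vals < c - thresh):
        doubled = np.concatenate([sign, sign])  # wrap around the circle
        run = best = 0
        for v in doubled:
            run = run + 1 if v else 0
            best = max(best, min(run, 16))
        if best >= arc:
            return True
    return False
```

A fused detector would keep corners found this way and describe them with SURF descriptors for matching.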
- Fig. 3 is a flow chart showing the operation of the frame feature point sequence matrix based on the optical flow tracking algorithm to obtain the motion vector of each frame according to an exemplary embodiment.
- the calculation of the frame feature point sequence matrix based on the optical flow tracking algorithm to obtain the motion vector of each frame includes:
- S301 Perform optical flow tracking calculation on the frame feature point sequence matrix of each frame, and obtain an initial motion vector of each frame.
- the pyramid optical flow tracking Lucas-Kanade (LK) algorithm can be used.
- Tracking the transformation of the frame feature point sequence matrix Z_i of the i-th frame to the (i+1)-th frame with the LK (Lucas-Kanade) algorithm yields the initial motion vector (dx_i, dy_i, dr_i), where:
- dx_i represents the Euclidean column offset from the i-th frame to the (i+1)-th frame;
- dy_i represents the Euclidean row offset from the i-th frame to the (i+1)-th frame;
- dr_i represents the Euclidean angular offset from the i-th frame to the (i+1)-th frame.
- S302 The initial motion vectors (dx_i, dy_i, dr_i) of each frame obtained in step S301 are cumulatively summed to obtain the cumulative motion vector of each frame, whose components are:
- (sx_i, sy_i, sr_i) = (Σ_{k=1}^{i} dx_k, Σ_{k=1}^{i} dy_k, Σ_{k=1}^{i} dr_k).
- S303 A moving average window is used to smooth the cumulative motion vector obtained in step S302, giving the smoothed motion vector; for each component s of the cumulative motion vector, its expression is:
- s'_i = (1/(2r+1)) Σ_{j=i-r}^{i+r} s_j, for i = 1, ..., n,
- where n is the total number of frames of the video and r is the radius of the smoothing window.
- Using a moving average window with a very small computational cost to smooth the motion vector, instead of complex calculations such as Kalman filtering, further reduces the computational cost without loss of accuracy and improves real-time performance.
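A moving-average window of radius r can be sketched in a few lines of NumPy; the clipped boundary handling here is an assumption, since the text specifies only the window radius:

```python
import numpy as np

def smooth_cumulative(s, r=2):
    """Moving-average smoothing of a cumulative motion-vector sequence.

    `s` has shape (n, 3): per-frame cumulative (dx, dy, dr). Each output row
    averages up to 2r+1 neighbouring rows; the window is clipped at the
    sequence boundaries (an assumption, one of several reasonable choices)."""
    n = len(s)
    out = np.empty_like(s, dtype=float)
    for i in range(n):
        lo, hi = max(0, i - r), min(n, i + r + 1)
        out[i] = s[lo:hi].mean(axis=0)
    return out
```

Each output row costs O(r) additions, far cheaper per frame than a Kalman filter update.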
- S304 Use the cumulative motion vector and the smoothed motion vector to adjust the initial motion vector of each frame to obtain the motion vector of each frame.
- The adjusted motion vector obtained in this way participates in subsequent calculations as the motion vector of each frame, making the calculation result, and hence the detection result of video jitter, more accurate.
- Fig. 4 is a flow chart of obtaining the feature value of the video to be detected according to the motion vector of each frame according to an exemplary embodiment.
- the obtaining the feature value of the video to be detected according to the motion vector of each frame includes:
- S401 Combine the motion vectors of all frames into a matrix, and calculate the unbiased standard deviation of each element in the matrix.
- The motion vectors of all frames obtained through the above steps are merged and converted into a matrix: the motion vectors (dx_i, dy_i, dr_i) are stacked so that each of dx, dy, and dr forms one component series of the matrix.
- The unbiased standard deviation of each component series is calculated as σ = sqrt( (1/(n-1)) Σ_{k=1}^{n} (x_k - x̄)² ),
- which yields the unbiased standard deviations of each element, denoted σ[Δ(dx)], σ[Δ(dy)], and σ[Δ(dr)], where x̄ represents the sample mean.
- S402 Weights are set for the unbiased standard deviations of the above elements, and the unbiased standard deviations are weighted and fused according to the weights.
- The weights of the unbiased standard deviations of each element can be dynamically adjusted according to actual needs. For example, if the weight of σ[Δ(dx)] is set to 3, the weight of σ[Δ(dy)] to 3, and the weight of σ[Δ(dr)] to 10, the fusion formula is: w = 3·σ[Δ(dx)] + 3·σ[Δ(dy)] + 10·σ[Δ(dr)].
- The feature value of the video to be detected is then the unbiased standard deviation of each element together with the weighted value, recorded as S = (σ[Δ(dx)], σ[Δ(dy)], σ[Δ(dr)], w).
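The feature extraction of steps S401-S402 can be sketched as follows, using NumPy's `ddof=1` for the unbiased standard deviation and the example weights (3, 3, 10); the function name is illustrative:

```python
import numpy as np

def video_features(motion, weights=(3.0, 3.0, 10.0)):
    """Turn per-frame motion vectors into the 4-dimensional feature value:
    the unbiased standard deviations of dx, dy, dr plus their weighted fusion.

    `motion` is an (n, 3) array of per-frame (dx, dy, dr); `weights` follow
    the example in the text (3, 3, 10)."""
    stds = motion.std(axis=0, ddof=1)        # unbiased (n-1) standard deviations
    weighted = float(np.dot(weights, stds))  # weighted fusion value
    return np.append(stds, weighted)         # (σ_dx, σ_dy, σ_dr, w)
```

The resulting 4-vector is exactly what is fed to the detection model in step S5.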
- Fig. 5 is a schematic structural diagram of a device for detecting video jitter according to an exemplary embodiment. Referring to Fig. 5, the device includes:
- the framing processing module is used to perform framing processing on the video to be detected to obtain a frame sequence
- the feature point detection module is used to perform feature point detection on the frame sequence frame by frame, obtain feature points of each frame, and generate a frame feature point sequence matrix;
- a vector calculation module configured to perform an operation on the frame feature point sequence matrix based on an optical flow tracking algorithm to obtain a motion vector of each frame;
- the feature value extraction module is configured to obtain the feature value of the video to be detected according to the motion vector of each frame;
- the jitter detection module is configured to use the feature value of the video to be detected as the input signal of the detection model to obtain an output signal through calculation, and determine whether the video to be detected jitters according to the output signal.
- the device further includes:
- the data preprocessing module includes:
- a grayscale processing unit configured to perform grayscale processing on the sub-frame sequence to obtain a grayscale frame sequence
- a denoising processing unit configured to perform denoising processing on the grayscale frame sequence
- the feature point detection module is used to perform feature point detection on the preprocessed frame sequence frame by frame.
- the feature point detection module is also used for:
- feature point detection is performed on the frame sequence frame by frame, and feature points of each frame are obtained.
- the vector calculation module includes:
- the optical flow tracking unit is configured to perform optical flow tracking calculation on the frame feature point sequence matrix of each frame to obtain the initial motion vector of each frame;
- a cumulative calculation unit configured to obtain a corresponding cumulative motion vector according to the initial motion vector
- a smoothing processing unit configured to perform smoothing processing on the cumulative motion vector to obtain a smoothed motion vector
- the vector adjustment unit is configured to use the accumulated motion vector and the smoothed motion vector to adjust the initial motion vector of each frame to obtain the motion vector of each frame.
- the feature value extraction module includes:
- a matrix conversion unit configured to merge and convert the motion vectors of all frames into a matrix
- a standard deviation calculation unit for calculating the unbiased standard deviation of each element in the matrix
- the weighted fusion unit is used to perform weighted fusion processing on the unbiased standard deviation of each element to obtain a weighted value.
- The video jitter detection method and device obtain the motion vector of each frame from the frame feature point sequence matrix based on the optical flow tracking algorithm, which effectively solves the problem of tracking failure caused by excessive changes between two adjacent frames;
- when camera shake detection is performed under the condition of slow lens movement, the method has good tolerance and adaptability;
- when camera shake detection is performed under conditions such as sudden large displacement, strong shaking, and large rotation, it has good sensitivity and robustness;
- the video jitter detection method and device use a feature point detection algorithm based on the fusion of FAST features and SURF features, that is, the feature point extraction algorithm is optimized, which takes into account the global features of the image while fully retaining its local features, has low computational overhead, and is robust to blurred images and poor lighting conditions, further improving the real-time performance and accuracy of detection;
- the video jitter detection method and device provided by the embodiments of the present invention extract features of at least four dimensions from the video to be detected and use an SVM model as the detection model, so that the method has better generalization, which further improves the accuracy of detection.
- any solution of the present application does not necessarily need to achieve all the advantages described above at the same time.
- It should be noted that when the video jitter detection device provided in the above embodiment triggers the detection service,
- the division into the above functional modules is used only as an example for illustration.
- In practical applications, the above functions can be allocated to different functional modules as needed, that is, the internal structure of the device can be divided into different functional modules to complete all or part of the functions described above.
- In addition, the video jitter detection device provided in the above embodiment and the video jitter detection method embodiment belong to the same concept, that is, the device is based on the video jitter detection method;
- for the specific implementation process, refer to the method embodiment, which will not be repeated here.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Image Analysis (AREA)
- Studio Devices (AREA)
Claims (10)
- 1. A method for detecting video jitter, characterized in that the method comprises the following steps: performing framing processing on a video to be detected to obtain a frame sequence; performing feature point detection on the frame sequence frame by frame, acquiring the feature points of each frame, and generating a frame feature point sequence matrix; operating on the frame feature point sequence matrix based on an optical flow tracking algorithm to obtain the motion vector of each frame; obtaining the feature value of the video to be detected according to the motion vector of each frame; and using the feature value of the video to be detected as the input signal of a detection model to obtain an output signal by calculation, and determining whether the video to be detected shakes according to the output signal.
- 2. The method for detecting video jitter according to claim 1, characterized in that, before performing feature point detection, the method further comprises the step of preprocessing the frame sequence: performing grayscale processing on the frame sequence to obtain a grayscale frame sequence; and performing denoising processing on the grayscale frame sequence; the performing feature point detection on the frame sequence frame by frame is performing feature point detection frame by frame on the preprocessed frame sequence.
- 3. The method for detecting video jitter according to claim 1 or 2, characterized in that the performing feature point detection on the frame sequence frame by frame and acquiring the feature points of each frame comprises: using a feature point detection algorithm based on the fusion of FAST features and SURF features, performing feature point detection on the frame sequence frame by frame, and acquiring the feature points of each frame.
- 4. The method for detecting video jitter according to claim 1 or 2, characterized in that the operating on the frame feature point sequence matrix based on the optical flow tracking algorithm to obtain the motion vector of each frame comprises: performing optical flow tracking calculation on the frame feature point sequence matrix of each frame to acquire the initial motion vector of each frame; acquiring the corresponding cumulative motion vector according to the initial motion vector; smoothing the cumulative motion vector to acquire a smoothed motion vector; and adjusting the initial motion vector of each frame using the cumulative motion vector and the smoothed motion vector to acquire the motion vector of each frame.
- 5. The method for detecting video jitter according to claim 1 or 2, characterized in that the obtaining the feature value of the video to be detected according to the motion vector of each frame comprises: merging the motion vectors of all frames into a matrix and calculating the unbiased standard deviation of each element in the matrix; performing weighted fusion processing on the unbiased standard deviations of the elements to acquire a weighted value; and using the unbiased standard deviation of each element and the weighted value as the feature value of the video to be detected.
- 6. A device for detecting video jitter, characterized in that the device comprises: a framing processing module, configured to perform framing processing on a video to be detected to obtain a frame sequence; a feature point detection module, configured to perform feature point detection on the frame sequence frame by frame, acquire the feature points of each frame, and generate a frame feature point sequence matrix; a vector calculation module, configured to operate on the frame feature point sequence matrix based on an optical flow tracking algorithm to obtain the motion vector of each frame; a feature value extraction module, configured to obtain the feature value of the video to be detected according to the motion vector of each frame; and a jitter detection module, configured to use the feature value of the video to be detected as the input signal of a detection model to obtain an output signal by calculation, and determine whether the video to be detected shakes according to the output signal.
- 7. The device for detecting video jitter according to claim 6, characterized in that the device further comprises a data preprocessing module configured to preprocess the frame sequence, the data preprocessing module comprising: a grayscale processing unit, configured to perform grayscale processing on the framed sequence to obtain a grayscale frame sequence; and a denoising processing unit, configured to perform denoising processing on the grayscale frame sequence; the feature point detection module is configured to perform feature point detection frame by frame on the preprocessed frame sequence.
- 8. The device for detecting video jitter according to claim 6 or 7, characterized in that the feature point detection module is further configured to: use a feature point detection algorithm based on the fusion of FAST features and SURF features to perform feature point detection on the frame sequence frame by frame and acquire the feature points of each frame.
- 9. The device for detecting video jitter according to claim 6 or 7, characterized in that the vector calculation module comprises: an optical flow tracking unit, configured to perform optical flow tracking calculation on the frame feature point sequence matrix of each frame to acquire the initial motion vector of each frame; a cumulative calculation unit, configured to acquire the corresponding cumulative motion vector according to the initial motion vector; a smoothing processing unit, configured to smooth the cumulative motion vector to acquire a smoothed motion vector; and a vector adjustment unit, configured to adjust the initial motion vector of each frame using the cumulative motion vector and the smoothed motion vector to acquire the motion vector of each frame.
- 10. The device for detecting video jitter according to claim 6 or 7, characterized in that the feature value extraction module comprises: a matrix conversion unit, configured to merge and convert the motion vectors of all frames into a matrix; a standard deviation calculation unit, configured to calculate the unbiased standard deviation of each element in the matrix; and a weighted fusion unit, configured to perform weighted fusion processing on the unbiased standard deviations of the elements to acquire a weighted value.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CA3172605A CA3172605C (en) | 2019-06-21 | 2020-06-11 | Video jitter detection method and device |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910546465.XA CN110248048B (zh) | 2019-06-21 | 2019-06-21 | Method and device for detecting video jitter |
CN201910546465.X | 2019-06-21 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020253618A1 (zh) | 2020-12-24 |
Family
ID=67888794
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/095667 WO2020253618A1 (zh) | 2019-06-21 | 2020-06-11 | 一种视频抖动的检测方法及装置 |
Country Status (3)
Country | Link |
---|---|
CN (1) | CN110248048B (zh) |
CA (1) | CA3172605C (zh) |
WO (1) | WO2020253618A1 (zh) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114155254A (zh) * | 2021-12-09 | 2022-03-08 | 成都智元汇信息技术股份有限公司 | Image-correction-based image cropping method, electronic device, and medium |
CN115103084A (zh) * | 2022-06-27 | 2022-09-23 | 北京华录新媒信息技术有限公司 | Optical-flow-based VR video stabilization method |
CN117152214A (zh) * | 2023-08-30 | 2023-12-01 | 哈尔滨工业大学 | Defect identification method based on improved optical flow detection |
CN117576692A (zh) * | 2024-01-17 | 2024-02-20 | 大连云智信科技发展有限公司 | Image-recognition-based method for detecting water source pollution in animal husbandry |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110248048B (zh) | 2019-06-21 | 2021-11-09 | 苏宁云计算有限公司 | Method and device for detecting video jitter |
CN110971895B (zh) | 2019-12-18 | 2022-07-08 | 北京百度网讯科技有限公司 | Video jitter detection method and device |
CN111614895B (zh) * | 2020-04-30 | 2021-10-29 | 惠州华阳通用电子有限公司 | Image imaging jitter compensation method, system, and device |
CN112887708A (zh) * | 2021-01-22 | 2021-06-01 | 北京锐马视讯科技有限公司 | Video jitter detection method and device, equipment, and storage medium |
CN113115109B (zh) * | 2021-04-16 | 2023-07-28 | 深圳市帧彩影视科技有限公司 | Video processing method and device, electronic device, and storage medium |
CN116193257B (zh) * | 2023-04-21 | 2023-09-22 | 成都华域天府数字科技有限公司 | Method for eliminating picture jitter in surgical video images |
CN117969554B (zh) * | 2024-01-25 | 2024-09-03 | 陕西科技大学 | Pipeline defect detection robot and detection method |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103826032A (zh) * | 2013-11-05 | 2014-05-28 | 四川长虹电器股份有限公司 | Depth map post-processing method |
CN104135597A (zh) * | 2014-07-04 | 2014-11-05 | 上海交通大学 | Automatic video jitter detection method |
CN104301712A (zh) * | 2014-08-25 | 2015-01-21 | 浙江工业大学 | Video-analysis-based surveillance camera jitter detection method |
CN108292362A (zh) * | 2016-01-05 | 2018-07-17 | 英特尔公司 | Gesture recognition for cursor control |
JP2019020839A (ja) * | 2017-07-12 | 2019-02-07 | キヤノン株式会社 | Image processing apparatus, image processing method, and program |
CN110248048A (zh) * | 2019-06-21 | 2019-09-17 | 苏宁云计算有限公司 | Method and device for detecting video jitter |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101511024A (zh) * | 2009-04-01 | 2009-08-19 | 北京航空航天大学 | Motion compensation method based on motion state recognition in real-time electronic image stabilization |
US9055223B2 (en) * | 2013-03-15 | 2015-06-09 | Samsung Electronics Co., Ltd. | Digital image stabilization method and imaging device using the same |
CN104144283B (zh) * | 2014-08-10 | 2017-07-21 | 大连理工大学 | Real-time digital video stabilization method based on improved Kalman filtering |
CN105681663B (zh) * | 2016-02-26 | 2018-06-22 | 北京理工大学 | Video jitter detection method based on geometric smoothness of inter-frame motion |
JP6823469B2 (ja) * | 2017-01-20 | 2021-02-03 | キヤノン株式会社 | Image blur correction device and control method therefor, imaging device, program, storage medium |
US10491832B2 (en) * | 2017-08-16 | 2019-11-26 | Qualcomm Incorporated | Image capture device with stabilized exposure or white balance |
CN108366201B (zh) * | 2018-02-12 | 2020-11-06 | 天津天地伟业信息系统集成有限公司 | Gyroscope-based electronic anti-shake method |
- 2019-06-21: CN application CN201910546465.XA, patent CN110248048B (zh), status Active
- 2020-06-11: WO application PCT/CN2020/095667, publication WO2020253618A1 (zh), Application Filing
- 2020-06-11: CA application CA3172605A, patent CA3172605C (en), status Active
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114155254A (zh) * | 2021-12-09 | 2022-03-08 | 成都智元汇信息技术股份有限公司 | Image-correction-based image cropping method, electronic device, and medium |
CN114155254B (zh) * | 2021-12-09 | 2022-11-08 | 成都智元汇信息技术股份有限公司 | Image-correction-based image cropping method, electronic device, and medium |
CN115103084A (zh) * | 2022-06-27 | 2022-09-23 | 北京华录新媒信息技术有限公司 | Optical-flow-based VR video stabilization method |
CN117152214A (zh) * | 2023-08-30 | 2023-12-01 | 哈尔滨工业大学 | Defect identification method based on improved optical flow detection |
CN117576692A (zh) * | 2024-01-17 | 2024-02-20 | 大连云智信科技发展有限公司 | Image-recognition-based method for detecting water source pollution in animal husbandry |
CN117576692B (zh) * | 2024-01-17 | 2024-03-29 | 大连云智信科技发展有限公司 | Image-recognition-based method for detecting water source pollution in animal husbandry |
Also Published As
Publication number | Publication date |
---|---|
CN110248048A (zh) | 2019-09-17 |
CN110248048B (zh) | 2021-11-09 |
CA3172605A1 (en) | 2020-12-24 |
CA3172605C (en) | 2024-01-02 |
Similar Documents
Publication | Title
---|---
WO2020253618A1 (zh) | Method and device for detecting video jitter
WO2020192483A1 (zh) | Image display method and device
US10121229B2 | Self-portrait enhancement techniques
US9615039B2 | Systems and methods for reducing noise in video streams
US20220222776A1 | Multi-Stage Multi-Reference Bootstrapping for Video Super-Resolution
CN113286194A (zh) | Video processing method and device, electronic device, and readable storage medium
WO2020199831A1 (zh) | Training method for image processing model, image processing method, network device, and storage medium
US10764496B2 | Fast scan-type panoramic image synthesis method and device
WO2020259474A1 (zh) | Focus tracking method and device, terminal device, and computer-readable storage medium
US9307148B1 | Video enhancement techniques
US11303793B2 | System and method for high-resolution, high-speed, and noise-robust imaging
WO2021057294A1 (en) | Method and apparatus for detecting subject, electronic device, and computer readable storage medium
WO2021082883A1 (zh) | Subject detection method and device, electronic device, computer-readable storage medium
CN111798485B (zh) | IMU-enhanced event camera optical flow estimation method and system
WO2020171379A1 (en) | Capturing a photo using a mobile device
WO2023169281A1 (zh) | Image registration method and device, storage medium, and electronic device
CN108111760B (zh) | Electronic image stabilization method and system
CN116486250A (zh) | Embedded multi-channel image acquisition and processing method and system
KR101202642B1 (ko) | Global motion estimation method and apparatus using background feature points
CN114429191A (zh) | Deep-learning-based electronic image stabilization method, system, and storage medium
TW201523516A (zh) | Picture stabilization method for moving cameras
JP2022099120A (ja) | Subject tracking device and control method therefor
WO2023185096A1 (zh) | Method for determining image blurriness and related device
CN117575966B (zh) | Video stabilization method for UAV high-altitude hovering shooting scenes
TW202338734A (zh) | Method for processing image data and image processor unit
Legal Events
Date | Code | Title | Description
---|---|---|---
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 20826293, Country of ref document: EP, Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 20826293, Country of ref document: EP, Kind code of ref document: A1 |
| ENP | Entry into the national phase | Ref document number: 3172605, Country of ref document: CA |