CN115336434B - Fast video interframe rotation motion estimation method - Google Patents


Info

Publication number
CN115336434B
CN115336434B (Application CN201218008124.6A)
Authority
CN
China
Prior art keywords
gradient
theta
projection
grad
motion estimation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201218008124.6A
Other languages
Chinese (zh)
Inventor
齐蕴光
邵新杰
任国全
毛向东
田广
张培林
王怀光
李冬伟
曹凤利
周景涛
刘金华
叶合明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Army Engineering University of PLA
Original Assignee
Army Engineering University of PLA
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Army Engineering University of PLA filed Critical Army Engineering University of PLA
Priority to CN201218008124.6A priority Critical patent/CN115336434B/en
Application granted granted Critical
Publication of CN115336434B publication Critical patent/CN115336434B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)
  • Studio Circuits (AREA)

Abstract

The invention discloses a fast video inter-frame rotation motion estimation method, namely a gradient direction projection method, which comprises the following steps: first, filtering and smoothing pre-processing is applied to the video sequence frames; then the gradient magnitude grad(i, j) and direction θ(i, j) of each pixel (i, j) in the filtered reference frame and current frame are calculated; [0°, 360°] is then divided into m intervals of equal step size, and if the gradient direction θ(i, j) falls in the interval [θ_n, θ_{n+1}], the gradient magnitude grad(i, j) is projected into [θ_n, θ_{n+1}]; the accumulated gradient value in each phase interval is calculated to obtain two one-dimensional gradient direction projection curves G_r and G_k for the current frame and the reference frame, with phase as the abscissa and gradient value as the ordinate; finally, a correlation operation and valley detection are performed on the two gradient direction projection curves to obtain the rotation offset Δθ of the image. The invention uses the gradient values of image pixels as the feature quantity and at the same time proposes a novel rotational projection mode that projects along the gradient direction of each pixel, thereby improving the precision and speed of rotational motion estimation for low-contrast, weak-texture images and for video frames that contain moving objects.

Description

Fast video interframe rotation motion estimation method
Technical Field
The invention relates to a method for fast estimation of video inter-frame rotational motion, and in particular to the realization of a gradient direction projection method.
Background
With the application of new and advanced technologies, modern warfare places higher demands on fire control systems, and digital upgrading is imperative if equipment is to fulfil the missions assigned to it by future warfare. In automatic target detection and tracking, random vibration and attitude changes of the camera carrier cause instability and even blurring of the video image sequence, which degrades the imaging quality of the system, causes visual fatigue in the observer, and makes detection and tracking of moving targets difficult. Electronic Image Stabilization (EIS) is a technique that removes, by image processing means, the image disturbance caused by random camera motion from an input video sequence and outputs a stabilized image sequence. Inter-frame global motion estimation is the key link in electronic image stabilization: it obtains the overall motion between two adjacent frames of a video sequence and is the most critical technical step of the stabilization process. The accuracy and precision of global motion estimation directly determine the stabilization quality of the system; at the same time, its computation generally accounts for more than 90% of the total computation of the system, so its speed determines the real-time processing capability of the stabilization system.
In multi-temporal, multi-pose imaging of the same scene or target, geometric transformations such as translation, rotation, scaling and affine mapping frequently arise. Estimating global rotational motion has always been difficult. The general approach of current rotational motion estimation is as follows: select several feature points (or blocks) in the image, compute the translation of each feature point (block) by feature matching, exclude feature points (blocks) that may lie in moving-target regions, and substitute the position coordinates and translational motion parameters of each feature point (block) into an affine model to solve for the rotation amount. From this process it can be seen that errors may be introduced at any stage (feature selection, feature matching, local motion elimination, least-squares solution), reducing the precision of the rotation estimate; at the same time, the large amount of computation makes real-time performance difficult to achieve.
The gray projection method makes full use of the overall distribution of image gray levels and can accurately estimate the translational motion vector of an image with high precision and a small amount of computation. However, the algorithm also has drawbacks: it handles images with uniform gray values poorly, rotational motion easily causes matching errors, and the motion of objects within the scene disturbs projection matching. Two approaches currently apply the projection method to rotational motion estimation:
(1) Block-based method. The document "An electronic image stabilization method based on the gray projection method [J]. Acta Electronica Sinica, 2005(08): 1266-1269" holds that "when the matching area is small, the sub-area can be approximated as containing only translational motion, ignoring rotational motion", and adopts image partitioning to recognize rotational motion: several small regions are selected at the image edges, the motion vector of each block is computed by the gray projection method, and the results are substituted into an affine model to compute the rotation amount. The calculation process is complex, its precision is easily disturbed by the partition size and by moving objects, and large rotation amounts cannot be estimated, so its practicality is limited.
(2) Traversal method. The document "Fast angle vector estimation algorithm based on row gray projection [J]. Opto-Electronic Engineering, 2008, 11(35): 91-96" proposes a traversal gray projection algorithm for estimating rotational motion: a region of the current frame is rotated through every angle in [-180°, 180°], gray projection correlation is computed against the corresponding region of the reference frame, the minimum correlation value is found, and the angular displacement vector is thereby estimated. The method ignores the possibility of a moving target in the image: if the projection matching region falls on a moving object, detection precision drops. Moreover, because every traversal angle must be matched, many projection operations are required, the algorithm is time-consuming, and real-time requirements are difficult to meet.
In summary, the prior art estimates the rotation amount based on the gray projection method. The research of the present invention is mainly aimed at improving the precision and speed of inter-frame global rotation estimation for low-quality images in which moving targets exist between frames. By improving both the projection feature quantity and the projection mode of the gray projection method, the invention provides a fast video inter-frame rotation motion estimation method.
Disclosure of Invention
The main object of the invention is to solve the problems of existing motion estimation methods when performing global rotational motion estimation, namely low estimation precision, large computation amount, and inability to estimate large rotation amounts, and to provide a fast video inter-frame rotation motion estimation method, the gradient direction projection method (GDPA), which eliminates the influence of translational motion in the image sequence on rotational motion estimation, improves the precision and speed of rotational motion estimation for low-quality video, and meets real-time requirements.
In order to achieve the above object, the present invention adopts the following technical solutions.
A fast video inter-frame rotation motion estimation method, the gradient direction projection method, is characterized by comprising the following steps:
(1) Perform filtering and smoothing pre-processing on the current frame and the reference frame respectively, according to the signal-to-noise ratio of the image;
(2) Calculate the gradient magnitude grad(i, j) and direction θ(i, j) of each pixel (i, j) in the current frame and reference frame filtered in step (1);
(3) Divide [0°, 360°] into m phase intervals of equal step size θ, where [θ_n, θ_{n+1}] denotes the nth phase interval; if the gradient direction θ(i, j) of pixel (i, j) falls in the phase interval [θ_n, θ_{n+1}], project the gradient magnitude grad(i, j) of the pixel into [θ_n, θ_{n+1}];
(4) Perform gradient direction projection as in step (3), calculate the accumulated gradient magnitude in each phase interval, and obtain one-dimensional gradient direction projection curves G_r and G_k for the current frame and the reference frame respectively, with phase as the abscissa and gradient magnitude as the ordinate;
(5) Perform a correlation operation on the gradient direction projection curves G_r and G_k obtained in step (4), and perform valley detection on the correlation result C(θ); the phase corresponding to the valley is the inter-frame global rotation offset Δθ.
The filtering method selected in step (1) is one of Gaussian filtering, mean filtering, median filtering and wavelet filtering, the aim being to reduce the influence of noise on the subsequent gradient calculation.
In the step (2), the gradient magnitude and direction of pixel (i, j) can be obtained by the following formulas:
grad(i, j) = (grad_x(i, j)² + grad_y(i, j)²)^(1/2)
θ(i, j) = arctan(grad_y(i, j) / grad_x(i, j))
where grad_x(i, j) and grad_y(i, j) are the gradient magnitudes in the horizontal and vertical directions at point (i, j), respectively.
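For illustration, the per-pixel gradient computation can be sketched in NumPy using the central differences given in formulas (4) and (5) of the description (the helper name, the border handling, and the use of arctan2 to obtain a full 0-360° direction are assumptions of this sketch, not part of the patent):

```python
import numpy as np

def gradient_magnitude_direction(f):
    """Per-pixel gradient magnitude and direction (degrees in [0, 360)).

    Central differences as in formulas (4)-(5); border pixels are left at
    zero for simplicity (border handling is not specified by the patent).
    """
    f = f.astype(np.float64)
    gx = np.zeros_like(f)
    gy = np.zeros_like(f)
    gx[1:-1, :] = f[2:, :] - f[:-2, :]   # grad_x(i,j) = f(i+1,j) - f(i-1,j)
    gy[:, 1:-1] = f[:, 2:] - f[:, :-2]   # grad_y(i,j) = f(i,j+1) - f(i,j-1)
    grad = np.hypot(gx, gy)              # (grad_x^2 + grad_y^2)^(1/2)
    # arctan2 instead of arctan so the direction covers the full [0, 360) range
    theta = np.degrees(np.arctan2(gy, gx)) % 360.0
    return grad, theta
```

A plain arctan only yields angles in (-90°, 90°); arctan2 is used here because the projection in step (3) requires directions over the full 0-360° range.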
In the step (3), the step size θ determines the calculation precision of the inter-frame global rotation offset; the step size θ can be set as required, and the number of intervals m equals 360° divided by the step size θ.
In the step (4), to improve the accuracy of the algorithm, a threshold judgment is applied to the pixels participating in the projection: pixels whose gradient magnitude is smaller than the threshold do not participate, which reduces the influence of the large number of low-texture-region pixels on the calculation result. The threshold is set to 3 to 5 times the average gradient magnitude.
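The projection of step (4), including the threshold judgment, amounts to a histogram over phase intervals weighted by gradient magnitude. The following NumPy sketch illustrates this (the function name and interface are assumptions of this sketch):

```python
import numpy as np

def gradient_direction_projection(grad, theta, step_deg=1.0, thresh_factor=5.0):
    """Accumulate gradient magnitudes into phase intervals along the gradient direction.

    Pixels whose gradient magnitude falls below thresh_factor times the mean
    gradient magnitude are excluded, following the 3-5x average-gradient
    threshold suggested in the description.
    """
    m = int(round(360.0 / step_deg))               # number of phase intervals
    mask = grad >= thresh_factor * grad.mean()     # threshold judgment
    bins = (theta[mask] / step_deg).astype(int) % m
    # G(theta_n): accumulated gradient magnitude in each interval [theta_n, theta_n+1]
    return np.bincount(bins, weights=grad[mask], minlength=m)
```

Applying this to the reference frame and the current frame yields the curves G_r and G_k of step (4).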
In the step (5), the correlation result C(θ) can be obtained by the following formula:
C(θ) = Σ_{n=1}^{m} [G_k(θ_n + θ) − G_r(θ_n)]²
in order to make the domain of the gradient direction projection curve G (θ) meaningful in the above formula, G (θ) needs to be periodically extended by taking 360 ° as a period, that is:
G(θ+360°)=G(θ)。
The search space [−θ_s, θ_s] of the inter-frame global rotation offset Δθ can be set according to actual requirements.
The sign of the valley detection result indicates the rotation direction of the current frame relative to the reference frame: a positive value indicates that the current frame is rotated clockwise by θ° relative to the reference frame, and a negative value indicates that the current frame is rotated counterclockwise by θ°.
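A minimal sketch of the correlation and valley detection of step (5) is given below. The sum-of-squared-differences form of C(θ), the function name, and the sign convention are assumptions of this sketch (the patent's published correlation formula is an equation image); the SSD form is chosen so that the valley, i.e. the minimum of C(θ), marks the rotation offset:

```python
import numpy as np

def estimate_rotation(curve_ref, curve_cur, step_deg=1.0, search_deg=15.0):
    """Correlate two gradient direction projection curves and detect the valley.

    The curves are treated as circularly extended (G(theta + 360) = G(theta)),
    which np.roll realizes implicitly; the search space is [-theta_s, theta_s].
    """
    s = int(round(search_deg / step_deg))          # theta_s in bins
    shifts = range(-s, s + 1)
    # C(theta): SSD between the reference curve and the shifted current curve
    c = [float(np.sum((np.roll(curve_cur, -k) - curve_ref) ** 2)) for k in shifts]
    k_min = shifts[int(np.argmin(c))]              # valley detection
    # Sign convention is illustrative: positive means the current frame's
    # gradient directions lead the reference by k_min bins.
    return k_min * step_deg
```

If the current curve is an exact circular shift of the reference curve, the valley of C(θ) is zero and the returned offset equals that shift times the step size.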
The invention uses the gradient values of image pixels as the feature quantity and innovatively proposes a rotational projection mode that projects along the gradient direction of each pixel, thereby improving the precision and speed of rotational motion estimation for low-contrast, weak-texture images and for video frames containing moving objects.
Compared with the prior art, the invention has the advantages that:
1. The technical scheme uses the image pixel gradient values, which well reflect the overall trend of image motion, as the feature quantity, and improves motion estimation precision for weak-texture and low-contrast images.
2. The method uses the directional characteristics of pixel gradients and projects along the gradient direction of each pixel, eliminating the influence of moving objects between frames. The rotation offset is obtained with only one matching operation, and the computation does not grow with the rotation amount to be estimated, so the algorithm is highly efficient and readily meets real-time requirements.
3. Applied to video stabilization and other fields that require the inter-frame global rotation offset, the method greatly improves the speed and precision of global rotational motion estimation for low-quality and complex video. Test results show that it performs well in global rotational motion estimation for various scene images in image stabilization.
Drawings
FIG. 1 is a schematic flow diagram of the method according to the invention;
FIG. 2 shows an original image (a1), a translated image (b1) and a rotated image (c1), together with the corresponding gradient direction projection curves (a2), (b2) and (c2);
FIG. 3 (a) and (b) are schematic diagrams of the correlation detection between the gradient direction projection curves of the translated image and the original image, and of the rotated image and the original image, respectively;
FIG. 4 shows two low-light test frames, in which (a) and (b) are the current frame and the reference frame, respectively;
FIG. 5 (a) and (b) are the gradient direction projection curves of the reference frame and the current frame, respectively;
FIG. 6 (a) and (b) are the unrolled gradient direction projection curves of the test images and the correlation detection result, respectively.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, the present invention is described in further detail with reference to the accompanying drawings and detailed description:
FIG. 1 depicts the steps, flow and principle of the invention: first, filtering and smoothing pre-processing is applied to the current frame and the reference frame to reduce the influence of noise on the gradient computation; then the gradient magnitude and direction of each pixel in the reference frame and the current frame are calculated; each pixel gradient is projected into the interval corresponding to its gradient direction, and the accumulated gradient value in each phase interval is calculated to obtain the two one-dimensional gradient direction projection curves of the current frame and the reference frame, with phase as the abscissa and gradient value as the ordinate; finally, a correlation operation and valley detection are performed on the two curves to obtain the rotation offset of the image.
As shown in FIG. 2, a specific experiment is first presented to illustrate the basic principle and advantages of the gradient direction projection method, the fast video inter-frame rotation motion estimation method of the invention.
In FIG. 2, (a1), (b1) and (c1) are images of the same approximate rectangle under different camera motions: (b1) has undergone a translational motion relative to (a1), and (c1) has undergone a 45° rotational motion relative to (a1).
First, with 1° as the step size, each pixel in the image is projected along its gradient direction, its gradient value being accumulated into the corresponding phase interval, yielding the gradient direction projection curves shown in (a2), (b2) and (c2) of FIG. 2. It can be observed that the projection curves (a2) and (b2) are almost identical, while the projection curve (c2) has changed. Thus the projection curve obtained along the gradient direction is sensitive only to rotational motion; translational motion has no influence on it.
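This behavior can be reproduced numerically on a toy field of gradient values. The following sketch (the synthetic data and the helper are illustrative only, not the patent's experiment) shows that a translation leaves the projection curve unchanged, while adding 45° to all gradient directions circularly shifts it by 45 bins:

```python
import numpy as np

# Synthetic per-pixel gradient magnitudes and directions (illustrative only).
rng = np.random.default_rng(1)
grad = rng.random((32, 32)) * 10.0
theta = rng.random((32, 32)) * 360.0

def curve(grad, theta, step=1.0):
    """Gradient direction projection curve: accumulate magnitudes per phase bin."""
    m = int(360.0 / step)
    bins = (theta / step).astype(int) % m
    return np.bincount(bins.ravel(), weights=grad.ravel(), minlength=m)

g0 = curve(grad, theta)
# Translation permutes pixel positions but not the (grad, theta) value pairs,
# so the projection curve is identical.
g_translated = curve(np.roll(grad, 5, axis=1), np.roll(theta, 5, axis=1))
# An in-plane rotation adds a constant to every gradient direction,
# circularly shifting the projection curve.
g_rotated = curve(grad, (theta + 45.0) % 360.0)

assert np.allclose(g0, g_translated)            # translation: curve unchanged
assert np.allclose(np.roll(g0, 45), g_rotated)  # rotation: curve shifted by 45 bins
```

This mirrors the comparison of curves (a2), (b2) and (c2) in FIG. 2.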
Then the projection curve (a2) is correlated with (b2) and with (c2) respectively (as shown in FIG. 3), and valley detection is performed. The result shows that (b1) is not rotated relative to (a1), while (c1) is rotated clockwise by 45° relative to (a1). The detection result is consistent with the preset rotation value, demonstrating the feasibility and correctness of this approach to solving for the inter-frame rotation amount.
The above analysis shows that the gradient direction projection method fundamentally solves the problem of low estimation precision that existing algorithms suffer when the image contains translational motion while the inter-frame rotation amount is being computed; at the same time, the rotation value is obtained with a single correlation operation, which greatly reduces the computation of the algorithm and meets real-time requirements.
To test the rotational motion estimation performance of the invention on images with uniform gray values, poor contrast and moving objects between frames, two consecutive frames of a low-light image sequence containing a moving object are selected as an example. As shown in FIG. 4, (a) and (b) are the reference frame and the current frame respectively; the current frame has undergone a 3° clockwise rotation relative to the reference frame, and a tank is travelling in the horizontal direction. The image size is 300 × 400 and the frame rate is 24 frames/s.
The first step: in order to reduce the influence of noise, filtering and smoothing pre-processing is applied to the images. The invention adopts Gaussian filtering for the reference frame and the current frame; the function expression is formula (1):
G(x, y) = (1 / (2πσ²)) · exp(−(x² + y²) / (2σ²))   (1)
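The 2-D Gaussian kernel of formula (1) is separable, so the pre-filtering can be applied as two 1-D passes. The following sketch illustrates this; σ, the kernel radius and the function names are free choices of this sketch, not fixed by the patent:

```python
import numpy as np

def gaussian_kernel_1d(sigma, radius=None):
    """Sampled 1-D Gaussian, normalized to sum to 1."""
    if radius is None:
        radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1, dtype=np.float64)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def gaussian_smooth(img, sigma=1.0):
    """Separable Gaussian pre-filtering: the 2-D kernel of formula (1)
    factors into a row pass followed by a column pass."""
    k = gaussian_kernel_1d(sigma)
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 1,
                              img.astype(np.float64))
    out = np.apply_along_axis(lambda c: np.convolve(c, k, mode='same'), 0, out)
    return out
```

In practice σ would be chosen according to the signal-to-noise ratio of the image, as step (1) of the method suggests.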
the second step: calculating the gradient size grad (i, j) and the direction theta (i, j) of each pixel point in the reference frame and the current frame, wherein the calculation formulas are (2) and (3) respectively;
grad(i,j)=(grad x (i,j) 2 +grad y (i,j) 2 ) 1/2 (2)
θ(i,j)=arc tan(grad y (i,j)/grad x (i,j)) (3)
grad x (i,j),grad y (i, j) is the gradient value in the x, y direction at point (i, j). In formulas (2) and (3):
grad x (i,j)=f(i+1,j)-f(i-1,j) (4)
grad y (i,j)=f(i,j+1)-f(i,j-1) (5)
the third step: the projection is performed along the direction of the pixel gradient. In order to improve the accuracy of the algorithm, when performing projection, a threshold judgment needs to be performed on the points participating in the projection, and in this example, the threshold is set to be 5 times of the average value of the image gradient intensity. Projecting each pixel point in the image along the gradient direction thereof, and calculating a gradient accumulated value in each phase interval (in this example, 3600 intervals are set according to a step length of 0.1 °) to obtain a one-dimensional gradient direction projection curve with the phases of the two frames of images as abscissa and the gradient values as ordinate, as shown in fig. 5;
the fourth step: and performing correlation operation and valley value detection on the gradient direction projection curves of the two frames of images to obtain the rotation offset delta theta of the images. The correlation operation formulas are shown as formulas (6) and (7):
Figure BBM2022072000800000052
C(θ+360)=C(θ) (7)
In this example, the search space of θ is set to [−15°, 15°]. As shown in FIG. 6(a), the gradient direction projection curves of FIG. 5 are unrolled in phase; correlation is performed on the two curves and the valley is detected, as shown in FIG. 6(b). The result is Δθ = 3°, which matches the preset rotation amount. This example shows that although a horizontally moving object (the tank) is present in the image sequence, an accurate rotational motion estimate is obtained, because the gradient direction projection method eliminates the effect of translational motion on rotational motion estimation.
Next, the algorithm was tested on a simulated image sequence with rotation amounts of 0 to 30° to evaluate the speed and accuracy of the invention for large rotation angles. The test results show that the gradient direction projection method achieves high estimation precision and computation speed even for large rotation amounts (see Table 1).
TABLE 1: Estimation results of the gradient direction projection method on the rotation amount of the simulated image sequence (published as an image; the numeric contents are not available in text form)
From the above implementation, it can be seen that:
the invention relates to a fast video interframe rotation motion estimation method which improves the prior gray level projection method from two aspects of projection characteristic quantity and projection mode:
(1) Characteristic aspects of the projection: the gradient direction projection method of the invention uses the gradient value of the pixel point as the characteristic quantity. The gradient value reflects the edge characteristics of the image, is an abstract feature set in the image and can well reflect the overall change trend of the image motion. The gradient is used as the characteristic quantity, so that the disadvantages of single gray value and weak texture of the low-contrast image can be eliminated, and the estimation precision is improved.
(2) Mode aspect of projection: the gradient direction projection method of the invention is to carry out projection along the gradient direction of the pixel points, and covers the range of 0-360 degrees. Macroscopically, the translation motion of the image does not influence the gradient direction of the pixel, but the gradient direction of the pixel is changed only by the rotation motion of the image, and the change is consistent with the rotation motion of the image, so the projection mode eliminates the influence of a moving object and can accurately estimate the rotation amount of the image. Meanwhile, the gradient direction projection method can obtain the rotation offset only by one-time matching, the calculated amount cannot be increased due to the increase of the rotation amount, and the algorithm has high efficiency. This is also a cause of low accuracy and large computation amount when the conventional motion estimation algorithm detects the rotation amount.
The above disclosure gives only specific examples of the present invention, but the invention is not limited thereto; variations that can be made by those skilled in the art according to the idea of the invention shall fall within the protection scope of the invention.

Claims (9)

1. A fast video interframe rotation motion estimation method is characterized by comprising the following steps:
(1) performing filtering and smoothing pre-processing on the current frame and the reference frame respectively, according to the signal-to-noise ratio of the image;
(2) calculating the gradient magnitude grad(i, j) and direction θ(i, j) of each pixel (i, j) in the current frame and reference frame filtered in step (1);
(3) dividing [0°, 360°] into m phase intervals of equal step size θ, where [θ_n, θ_{n+1}] denotes the nth phase interval; if the gradient direction θ(i, j) of pixel (i, j) falls in the phase interval [θ_n, θ_{n+1}], projecting the gradient magnitude grad(i, j) of the pixel into [θ_n, θ_{n+1}];
(4) performing gradient direction projection as in step (3), calculating the accumulated gradient magnitude in each phase interval, and obtaining one-dimensional gradient direction projection curves G_r and G_k for the current frame and the reference frame respectively, with phase as the abscissa and gradient magnitude as the ordinate;
(5) performing a correlation operation on the gradient direction projection curves G_r and G_k obtained in step (4), and performing valley detection on the correlation result C(θ), the phase corresponding to the detected valley being the inter-frame global rotation offset Δθ.
2. The method of fast video interframe rotational motion estimation according to claim 1, wherein:
the filtering method selected in the step (1) is selected from one of Gaussian filtering, mean filtering, median filtering and wavelet filtering, and aims to reduce the influence of noise on gradient calculation in the step (2).
3. The method of fast video interframe rotation motion estimation according to claim 1, characterized in that:
in the step (2), the gradient magnitude and direction of the pixel point (i, j) can be obtained by the following formula:
grad(i,j)=(grad x (i,j) 2 +grad y (i,j) 2 ) 1/2
θ(i,j)=arc tan(grad y (i,j)/grad x (i,j))
wherein, grad x (i,j),grad y (i, j) is the magnitude of the gradient in the horizontal and vertical directions at the point (i, j), respectively.
4. The method of fast video interframe rotational motion estimation according to claim 1, wherein:
in the step (3), the step length theta determines the calculation precision of the interframe global rotation offset, the step length theta can be set according to requirements, and the number m of the intervals is equal to 360 degrees and is divided by the step length theta.
5. The method of fast video interframe rotational motion estimation according to claim 1 or 4, wherein:
in the step (4), in order to improve the accuracy of the algorithm, threshold judgment needs to be performed on the pixel points participating in projection during projection, and the pixel points with the gradient smaller than the threshold do not participate in projection, so as to reduce the influence of the pixel points in the low texture region on the calculation result.
6. The method of fast video interframe rotational motion estimation according to claim 5, wherein:
the method of claim 5, wherein the threshold is set to 3 to 10 times the average gradient size of the image.
7. The method of fast video interframe rotational motion estimation according to claim 1, wherein:
in the step (5), the correlation result C (θ) can be obtained by the following formula:
C(θ) = Σ_{n=1}^{m} [G_k(θ_n + θ) − G_r(θ_n)]²
in order to make the domain of the gradient direction projection curve G (θ) meaningful in the above formula, G (θ) needs to be periodically extended by taking 360 ° as a period, that is:
G(θ+360°)=G(θ)
8. the method of fast video interframe rotational motion estimation according to claim 7, wherein:
the search space [−θ_s, θ_s] of the inter-frame global rotation offset Δθ can be set according to actual requirements.
9. The method of fast video interframe rotational motion estimation according to claim 1 or 7, wherein:
the positive and negative of the valley detection result indicate the rotation direction of the current frame relative to the reference frame: when the current frame is a positive value, the current frame is rotated clockwise by theta DEG relative to the reference frame; a negative value indicates that the current frame is rotated counterclockwise by θ ° with respect to the reference frame.
CN201218008124.6A 2012-12-31 2012-12-31 Fast video interframe rotation motion estimation method Active CN115336434B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201218008124.6A CN115336434B (en) 2012-12-31 2012-12-31 Fast video interframe rotation motion estimation method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201218008124.6A CN115336434B (en) 2012-12-31 2012-12-31 Fast video interframe rotation motion estimation method

Publications (1)

Publication Number Publication Date
CN115336434B true CN115336434B (en) 2015-04-01

Family

ID=83943820

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201218008124.6A Active CN115336434B (en) 2012-12-31 2012-12-31 Fast video interframe rotation motion estimation method

Country Status (1)

Country Link
CN (1) CN115336434B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112016568A (en) * 2019-05-31 2020-12-01 北京初速度科技有限公司 Method and device for tracking image feature points of target object
CN114264835A (en) * 2021-12-22 2022-04-01 上海集成电路研发中心有限公司 Method, device and chip for measuring rotating speed of fan
CN114264835B (en) * 2021-12-22 2023-11-17 上海集成电路研发中心有限公司 Method, device and chip for measuring rotation speed of fan

Similar Documents

Publication Publication Date Title
CN110796010B (en) Video image stabilizing method combining optical flow method and Kalman filtering
CN105447888B (en) A kind of UAV Maneuver object detection method judged based on effective target
CN106991650B (en) Image deblurring method and device
CN109816673B (en) Non-maximum value inhibition, dynamic threshold value calculation and image edge detection method
CN102202164B (en) Motion-estimation-based road video stabilization method
KR100985805B1 (en) Apparatus and method for image stabilization using adaptive Kalman filter
CN103402045A (en) Image de-spin and stabilization method based on subarea matching and affine model
CN107301657B (en) A kind of video target tracking method considering target movable information
CN107169972B (en) Non-cooperative target rapid contour tracking method
CN106709939B (en) Method for tracking target and target tracker
CN105374049B (en) Multi-corner point tracking method and device based on sparse optical flow method
CN103544710A (en) Image registration method
CN109308713A (en) A kind of improvement core correlation filtering Method for Underwater Target Tracking based on Forward-looking Sonar
CN107360377B (en) Vehicle-mounted video image stabilization method
CN106408600B (en) A method of for image registration in sun high-definition picture
CN102917217A (en) Movable background video object extraction method based on pentagonal search and three-frame background alignment
CN115336434B (en) Fast video interframe rotation motion estimation method
CN105869108B (en) A kind of method for registering images in the mobile target detecting of moving platform
Subramanyam Automatic image mosaic system using steerable Harris corner detector
CN102123234B (en) Unmanned airplane reconnaissance video grading motion compensation method
CN106357958A (en) Region-matching-based fast electronic image stabilization method
CN115336433B (en) Novel electronic image stabilization motion estimation method
Vignesh et al. Performance and Analysis of Edge detection using FPGA Implementation
KR102003671B1 (en) Method and apparatus for processing the image
JP2011133423A (en) Object deducing device

Legal Events

Date Code Title Description
GR03 Grant of secret patent right
GRSP Grant of secret patent right
DC01 Secret patent status has been lifted