CN111582270A - Identification tracking method based on high-precision bridge region visual target feature points - Google Patents
Identification tracking method based on high-precision bridge region visual target feature points
- Publication number: CN111582270A
- Application number: CN202010334904.3A
- Authority: CN (China)
- Prior art keywords: points, region, candidate points, tracking
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06V10/25 — Determination of region of interest [ROI] or a volume of interest [VOI] (G Physics; G06 Computing, calculating or counting; G06V Image or video recognition or understanding; G06V10/20 Image preprocessing)
- G06V10/44 — Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components (G06V10/40 Extraction of image or video features)
Abstract
An identification and tracking method based on high-precision visual target feature points in a bridge region, belonging to the field of bridge engineering health monitoring. The method comprises the following steps: step one, according to the target characteristics, extract a small region containing the target from a single-frame video image by background separation and take it as the region of interest; step two, perform corner detection on the extracted region of interest, determining corners by sliding a Gaussian window over the region of interest and detecting gray-level changes; step three, calculate the first-order partial derivatives of the pixels in the selected Gaussian window, construct a gray-level covariance matrix, and select points whose error ellipse is close to a circle as candidate points; step four, further determine the corners among the candidate points by singular value decomposition, matching and checking the candidates of two adjacent frames to finally determine the corners; step five, repeat steps two to four for each frame of the video, performing corner matching and tracking frame by frame until the video ends. The method is used for identifying and tracking feature points of a visual target in a bridge region.
Description
Technical Field
The invention belongs to the field of bridge engineering health monitoring, and particularly relates to a high-precision bridge region visual target feature point-based identification and tracking method.
Background
With the accelerated networking of traffic infrastructure, the bridge construction industry has also developed rapidly. However, bridge collapse accidents have occurred as well, so it is very important to measure related indexes such as bridge vibration displacement, strain and acceleration directly, rapidly and accurately.
At present, the displacement measuring tools and methods commonly used on bridges mainly comprise total stations, displacement sensors, acceleration sensors, laser interferometry, GPS methods and the like. These methods suffer from complex construction, long measurement times, high cost, and difficulty of measurement in places with complex spatial structures. With the development of photography technology, methods such as image processing and digital image correlation, which extract structural vibration displacement information directly from images, are receiving increasing attention.
With the development of computer vision, many scholars have tried to combine bridge vibration displacement measurement with computer vision, adopting various template-matching-based algorithms to track targets. These methods need to determine a template first and then perform template matching over the full window, which consumes a large amount of computing resources and involves complex processing. Therefore, providing an efficient, high-precision method for tracking and measuring a target or its feature points is an urgent problem to be solved.
Disclosure of Invention
The invention aims to overcome the defects of high computation cost and complex operation in existing bridge vibration displacement target tracking algorithms. It provides a high-precision identification and tracking method based on visual target feature points in a bridge region, realizes rapid extraction of the bridge-region target and precise, fast tracking of the target feature points, offers a feasible way to apply advanced computer vision technology to bridge vibration displacement measurement, and provides a solution for intelligent extraction of structural vibration displacement information of bridges.
The technical scheme of the invention is as follows:
the method for identifying and tracking the characteristic points of the visual target in the bridge area based on high precision is characterized by comprising the following steps of: the method comprises the following steps:
firstly, extracting a small region containing a target in a single-frame video image by adopting background separation according to the target characteristics as an interested region;
step two, carrying out corner detection on the extracted region of interest, and determining corners by detecting gray change through sliding Gaussian windows in the region of interest;
calculating a first-order partial derivative of a pixel in the selected Gaussian window, constructing a gray covariance matrix, and selecting a point with an error close to an ellipse as a candidate point;
step four, determining the angular points of the candidate points by further adopting singular value decomposition, matching and judging two adjacent frames and finally determining the angular points;
step five: and repeating the second step and the previous step, performing corner matching and tracking on each frame of the video, and detecting and tracking each frame of corner until the video is finished.
Compared with the prior art, the invention has the following beneficial effects. The invention can quickly and accurately extract the computer-vision target from the image background and identify and track its feature points, greatly reducing data processing time (compared with processing the whole image, since only part of the area is detected) and providing a technical basis for subsequent real-time detection. A coarse-to-fine strategy is adopted to identify the tracked feature points and eliminate erroneous tracking points, which guarantees precision: bridge vibration displacements of 1 mm can be measured, meeting engineering application requirements. The whole process is automated, significantly reducing manual participation in detection. The invention improves the automation, intelligence and accuracy of bridge vibration displacement measurement and provides a solution for automated bridge health monitoring.
Detailed Description
The first embodiment is as follows: this embodiment discloses a high-precision identification and tracking method based on visual target feature points in a bridge region, which comprises the following steps:
step one, according to the target characteristics, extracting a small region containing the target from a single-frame video image by background separation as the region of interest;
step two, carrying out corner detection on the extracted region of interest, determining corners by sliding a Gaussian window over the region of interest and detecting gray-level changes (the Gaussian window is prior art);
step three, calculating the first-order partial derivatives of the pixels in the selected Gaussian window, constructing a gray-level covariance matrix, and selecting points whose error ellipse is close to a circle as candidate points;
step four, further determining the corners among the candidate points by singular value decomposition (singular value decomposition is prior art), matching and checking the candidates of two adjacent frames to finally determine the corners;
step five, repeating steps two to four for each frame of the video, performing corner matching and tracking frame by frame until the video ends.
Further, in step one, based on the spatial distribution correlation between a pixel and its neighborhood in a single video frame, 20 pixels are selected from the 8-neighborhood of the current pixel as background samples, and the Euclidean distance between the pixel to be judged and the center point is used as the criterion for distinguishing foreground points from background points, thereby extracting the region of interest containing the target. Subsequent tracking and identification are carried out only within the region of interest, which avoids the heavy computation and complex processing of whole-image detection, eliminates background interference from other regions, and makes it easier to improve identification precision.
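The sample-based background separation described above resembles a ViBe-style scheme. The sketch below is illustrative only: the function names, the matching radius of 20 gray levels, the minimum match count, and the ROI margin are assumptions made for the example; the patent specifies only the 20 neighborhood samples and the Euclidean-distance criterion.

```python
import numpy as np

def classify_foreground(frame, samples, radius=20.0, min_matches=2):
    """Classify each pixel as foreground (True) or background (False).

    `samples` has shape (N, H, W): N background samples per pixel, drawn
    from the pixel's 8-neighborhood in earlier frames.  A pixel counts as
    background when its gray value lies within `radius` (Euclidean
    distance in gray levels) of at least `min_matches` samples.
    """
    dist = np.abs(samples.astype(np.float32) - frame.astype(np.float32))
    matches = (dist < radius).sum(axis=0)
    return matches < min_matches  # too few close samples -> foreground

def roi_from_mask(mask, margin=10):
    """Bounding box (y0, y1, x0, x1) of the foreground mask, padded by `margin`."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None
    h, w = mask.shape
    return (max(ys.min() - margin, 0), min(ys.max() + margin, h - 1),
            max(xs.min() - margin, 0), min(xs.max() + margin, w - 1))
```

Only the returned bounding box is processed in the later corner-detection steps, which is what keeps the per-frame cost low.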
Further, in step two, the gray-level change of the extracted region of interest is judged by moving a Gaussian window to determine the corners. The size of the Gaussian window is 6σ × 6σ, where σ is the standard deviation of the Gaussian function. The Gaussian window effectively suppresses the interference of noise in the image and ensures the accuracy of the subsequent corner identification.
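A 6σ × 6σ Gaussian weight window can be generated as follows. `gaussian_window` is a hypothetical helper name; the side length is rounded to 2·round(3σ)+1 so the window has a center pixel, a minor assumption since the patent only states the nominal size 6σ × 6σ (±3σ covers ~99.7% of the Gaussian mass).

```python
import numpy as np

def gaussian_window(sigma):
    """Normalized Gaussian weight mask with side length ~6*sigma."""
    half = int(round(3 * sigma))
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    w = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    return w / w.sum()  # weights sum to 1
```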
Further, in step three, the gradients f_x, f_y of the pixels in the Gaussian window and the gray-level covariance matrix N are calculated, and the interest measures w and q are computed:

f_x = f(x+1, y+1) − f(x, y)  (1)

f_y = f(x+1, y) − f(x, y+1)  (2)

where f_x and f_y respectively represent the partial differentials of a pixel along the horizontal and vertical directions, x and y denote the horizontal and vertical directions, DetN is the determinant of the covariance matrix N, and trN is the trace of N. Given thresholds T_q and T_w, taking T_q = 0.5~0.8 and T_w = 0.5~1.8, the feature points satisfying T_q < q and T_w < w are selected as candidates, and the maximum within each Gaussian window is then retained as the candidate point. This step roughly screens the feature points and removes obvious error points; the selected points serve as candidates and lay the groundwork for the accurate corner search of the next step.
Further, in step four, the area correlation coefficient c_i,j is calculated for all candidate points extracted from two adjacent frames, where R and S refer to the candidate regions in the two frames, R_i and S_j represent the corresponding candidate points in the two frames, the barred quantities represent the means of the candidate regions, σ_R and σ_S represent their standard deviations, and W denotes the candidate window; the indices run as i = 1, 2, 3, …, m and j = 1, 2, 3, …, n, where m and n are the numbers of corresponding candidate points.
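The area correlation coefficient described above is, in its standard form, the normalized cross-correlation of the two candidate windows; a minimal sketch under that assumption (the formula image itself is lost from this extraction):

```python
import numpy as np

def region_correlation(R, S):
    """Normalized cross-correlation of two equally sized patches, in [-1, 1]."""
    R = np.asarray(R, dtype=float); S = np.asarray(S, dtype=float)
    Rm, Sm = R - R.mean(), S - S.mean()       # subtract patch means
    denom = np.sqrt((Rm**2).sum() * (Sm**2).sum())
    return float((Rm * Sm).sum() / denom) if denom > 0 else 0.0
```

Identical patches score 1, inverted patches score −1, and uncorrelated patches score near 0, which is what makes c_i,j usable as a match weight in the next step.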
A similarity matrix G is then constructed. Here G_i,j represents the Gaussian-weighted distance between candidate points of different frames: the Euclidean distance between the candidate points of the two frames enters through a Gaussian function whose standard deviation σ controls the interaction between two candidate points (generally σ is taken as 1/6 of the image height), e denotes the base of the exponential function, and C_i,j is the correlation coefficient computed above.
Finally, singular value decomposition is performed on the similarity matrix G:

G = UDV^T  (8)

where G denotes the similarity matrix, U and V denote orthogonal matrices, D denotes the diagonal matrix of singular values, and T denotes the transpose. A new matrix E is constructed from D by setting its non-zero (diagonal) elements to 1, and the matrix P is computed:

P = UEV^T  (9)

where E represents the reconstructed matrix. The entries of the final matrix P that are the maxima of both their row and their column give the solved feature points (the determined corners). This step further screens out the wrong corners among the candidate points, so that the true corners are identified and then tracked.
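Equations (8) and (9) follow the SVD correspondence method of Scott and Longuet-Higgins (extended by Pilu with correlation weighting), which the description appears to use. The sketch below assumes G has full rank when replacing its singular values by 1, and estimates σ from the point spread when none is given (the text prescribes 1/6 of the image height); all names are illustrative.

```python
import numpy as np

def svd_match(pts1, pts2, corr=None, sigma=None):
    """Pair candidate points of two frames via the SVD correspondence method."""
    pts1 = np.asarray(pts1, dtype=float); pts2 = np.asarray(pts2, dtype=float)
    # Squared Euclidean distances r_ij^2 between all cross-frame pairs
    d2 = ((pts1[:, None, :] - pts2[None, :, :]) ** 2).sum(-1)
    if sigma is None:
        sigma = max(np.sqrt(d2.max()), 1.0) / 6.0  # text: sigma ~ image height / 6
    G = np.exp(-d2 / (2.0 * sigma**2))             # Gaussian-weighted proximity
    if corr is not None:
        G = G * corr                               # Pilu: weight by correlation C_ij
    U, D, Vt = np.linalg.svd(G)                    # G = U D V^T            (8)
    E = np.eye(pts1.shape[0], pts2.shape[0])       # non-zero singular values -> 1
    P = U @ E @ Vt                                 # P = U E V^T            (9)
    matches = []
    for i in range(P.shape[0]):
        j = int(np.argmax(P[i]))
        if np.argmax(P[:, j]) == i:                # max of its row AND its column
            matches.append((i, j))
    return matches
```

An entry that dominates both its row and its column is an unambiguous pairing; ambiguous candidates simply produce no match and are thereby rejected, which is the "screening of wrong corners" the description refers to.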
Claims (5)
1. A high-precision bridge region visual target feature point-based identification and tracking method, characterized by comprising the following steps:
step one, according to the target characteristics, extracting a small region containing the target from a single-frame video image by background separation as the region of interest;
step two, carrying out corner detection on the extracted region of interest, determining corners by sliding a Gaussian window over the region of interest and detecting gray-level changes;
step three, calculating the first-order partial derivatives of the pixels in the selected Gaussian window, constructing a gray-level covariance matrix, and selecting points whose error ellipse is close to a circle as candidate points;
step four, further determining the corners among the candidate points by singular value decomposition, matching and checking the candidates of two adjacent frames to finally determine the corners;
step five, repeating steps two to four for each frame of the video, performing corner matching and tracking frame by frame until the video ends.
2. The high-precision bridge region visual target feature point-based identification and tracking method according to claim 1, characterized in that: in step one, based on the spatial distribution correlation between a pixel and its neighborhood in a single video frame, 20 pixels are selected from the 8-neighborhood of the current pixel as background samples, and the Euclidean distance between the pixel to be judged and the center point is used as the criterion for distinguishing foreground points from background points, thereby extracting the region of interest containing the target.
3. The high-precision bridge region visual target feature point-based identification and tracking method according to claim 1, characterized in that: in step two, the gray-level change of the extracted region of interest is judged by moving a Gaussian window to determine the corners, the size of the Gaussian window being 6σ × 6σ, where σ is the standard deviation of the Gaussian function.
4. The high-precision bridge region visual target feature point-based identification and tracking method according to claim 1, characterized in that: in step three, the gradients f_x, f_y of the pixels in the Gaussian window and the gray-level covariance matrix N are calculated, and the interest measures w and q are computed:

f_x = f(x+1, y+1) − f(x, y)  (1)

f_y = f(x+1, y) − f(x, y+1)  (2)

where f_x and f_y respectively represent the partial differentials of a pixel along the horizontal and vertical directions, x and y denote the horizontal and vertical directions, DetN is the determinant of the covariance matrix N, and trN is the trace of N; given thresholds T_q and T_w, taking T_q = 0.5~0.8 and T_w = 0.5~1.8, the feature points satisfying T_q < q and T_w < w are selected as candidates, and the maximum within each Gaussian window is then retained as the candidate point.
5. The high-precision bridge region visual target feature point-based identification and tracking method according to claim 1, characterized in that: in step four, the area correlation coefficient c_i,j is calculated for all candidate points extracted from two adjacent frames, where R and S refer to the candidate regions in the two frames, R_i and S_j represent the corresponding candidate points in the two frames, the barred quantities represent the means of the candidate regions, σ_R and σ_S represent their standard deviations, and W denotes the candidate window; the indices run as i = 1, 2, 3, …, m and j = 1, 2, 3, …, n, where m and n are the numbers of corresponding candidate points;

a similarity matrix G is then constructed, in which G_i,j represents the Gaussian-weighted distance between candidate points of different frames: the Euclidean distance between the candidate points of the two frames enters through a Gaussian function whose standard deviation σ controls the interaction between two candidate points (generally σ is taken as 1/6 of the image height), e denotes the base of the exponential function, and C_i,j is the correlation coefficient computed above;
finally, singular value decomposition is performed on the similarity matrix G:

G = UDV^T  (8)

where G denotes the similarity matrix, U and V denote orthogonal matrices, D denotes the diagonal matrix of singular values, and T denotes the transpose; a new matrix E is constructed from D by setting its non-zero elements to 1, and the matrix P is computed:

P = UEV^T  (9)

where E represents the reconstructed matrix, and the entries of the final matrix P that are the maxima of both their row and their column are the solved feature points.
Priority, Publication and Family
- Application CN202010334904.3A, filed 2020-04-24, priority date 2020-04-24
- Publication CN111582270A (A), published 2020-08-25; status: pending
- Family ID: 72113556
Cited By (3)
- CN112229500A (priority 2020-09-30, published 2021-01-15): Structural vibration displacement monitoring method and terminal equipment
- CN114627395A (priority 2022-05-17, published 2022-06-14): Multi-rotor unmanned aerial vehicle angle analysis method, system and terminal based on nested targets
- CN115240471A (priority 2022-08-09, published 2022-10-25): Intelligent factory collision avoidance early warning method and system based on image acquisition
Citations (7)
- WO2010113821A1 (priority 2009-04-02, published 2010-10-07, アイシン精機株式会社): Face feature point detection device and program
- CN102222228A (priority 2011-05-26, published 2011-10-19): Method for extracting feature points of images
- CN103886582A (priority 2014-01-26, published 2014-06-25): Space-borne synthetic aperture interferometer radar image registration method with use of feature point Voronoi diagram optimization
- CN104134220A (priority 2014-08-15, published 2014-11-05): Low-altitude remote sensing image high-precision matching method with consistent image space
- CN104754182A (priority 2015-03-19, published 2015-07-01): Stationary phase method for aerial video with high resolution based on self-adaption motion filtering
- CN109752855A (priority 2017-11-08, published 2019-05-14): A method of a light spot emitter and detection of geometric light spots
- CN110596116A (priority 2019-07-23, published 2019-12-20): Vehicle surface flaw detection method and system
Non-Patent Citations (2)
- Tang Xiaomin (唐晓敏), "Research on a Non-contact Rail Wear Detection System", China Master's Theses Full-text Database, Engineering Science and Technology II
- Chen Shuqiao (陈淑荞), "Research on Feature Point Extraction and Matching in Digital Images", China Master's Theses Full-text Database, Information Science and Technology
Legal Events
- PB01 — Publication (application publication date: 2020-08-25)
- SE01 — Entry into force of request for substantive examination
- RJ01 — Rejection of invention patent application after publication