CN114596525A - Dynamic bridge form identification method based on computer vision - Google Patents

Dynamic bridge form identification method based on computer vision

Info

Publication number
CN114596525A
CN114596525A (application CN202210201036.0A)
Authority
CN
China
Prior art keywords
video
amplification
edge
frame
image
Prior art date
Legal status (assumption, not a legal conclusion)
Pending
Application number
CN202210201036.0A
Other languages
Chinese (zh)
Inventor
王佐才
张飞
段大猷
辛宇
马乐乐
Current Assignee
Hefei University of Technology
Original Assignee
Hefei University of Technology
Priority date (assumption, not a legal conclusion)
Filing date
Publication date
Application filed by Hefei University of Technology filed Critical Hefei University of Technology
Priority to CN202210201036.0A priority Critical patent/CN114596525A/en
Publication of CN114596525A publication Critical patent/CN114596525A/en
Priority to GB2215178.1A priority patent/GB2616322B/en
Pending legal-status Critical Current

Classifications

    • G06T7/50 Depth or shape recovery
    • G06T5/10 Image enhancement or restoration by non-spatial domain filtering
    • G01M5/0008 Investigating the elasticity of structures, e.g. deflection of bridges
    • G06T7/0004 Industrial image inspection
    • G06T7/13 Edge detection
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. edges, contours, corners; connectivity analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T2207/10016 Video; image sequence
    • G06T2207/10028 Range image; depth image; 3D point clouds
    • G06T2207/20016 Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; pyramid transform
    • G06V2201/07 Target detection

Abstract

The invention belongs to the technical field of vision measurement and provides a dynamic bridge form identification method based on computer vision, comprising the following steps: selecting a sensor with a suitable sampling frequency and resolution for video acquisition; cropping the acquired video; amplifying the structural deformation in the video with an Eulerian video magnification algorithm; for the target structure in each motion-amplified frame, preliminarily locating candidate structural edge pixels with the Prewitt operator and then computing the positions of the structural edge points in those regions with a spatial-moment sub-pixel edge detection algorithm; obtaining the dynamic form identification result of the structure from the edge point cloud of each frame; and calibrating the identified structural deformation, using a known distance between two pixel points on the object surface in the picture together with the video motion amplification coefficient, to obtain the accurate dynamic form change. The invention realizes non-contact vision measurement: no sensor needs to be mounted on the structure, the normal operation of the structure is not disturbed, and test cost is saved.

Description

Dynamic bridge form identification method based on computer vision
Technical Field
The invention belongs to the technical field of vision measurement, and particularly relates to a dynamic bridge form identification method based on computer vision.
Background
During the operation stage of a bridge, the deformation of the main girder, the towers, the stay cables and other members directly reflects the in-service stiffness of the structure, so the form change of the bridge is an important index for evaluating its load-bearing performance. Compared with existing deformation measurement methods such as contact displacement meters, laser displacement meters and photoelectric displacement meters, bridge form measurement based on digital image technology has developed considerably in recent years. Based on computer vision, it obtains the displacement of the structure by comparing digital images of the structure taken at different times; it can measure remotely and without contact, requires no installed sensors, and has the advantages of speed, ease of use, high precision, few restrictions and no interference with the normal operation of the structure.
However, dynamic form identification of bridge structures with vision sensors currently struggles to resolve micro-deformations when the sensor has too few pixels. When the target structure is large and the whole structure must fit in the camera's field of view, too few pixels fall on the structure to capture its small deformations unless the sensor resolution is raised, which greatly increases both sensor cost and computer processing time. A more accurate and effective dynamic bridge form identification method is therefore needed.
Disclosure of Invention
The embodiment of the invention aims to provide a dynamic bridge form identification method based on computer vision, so as to solve the problems described in the background art.
The embodiment of the invention is realized in such a way that a dynamic bridge form identification method based on computer vision comprises the following steps:
S1, selecting a sensor with a suitable sampling frequency and resolution to perform video acquisition of the target structure;
S2, cropping the collected video to obtain a video of the target area;
S3, amplifying the structural deformation in the video with an Eulerian video magnification algorithm based on brightness change;
S4, for the target structure in each motion-amplified frame, preliminarily determining candidate structural edge pixels with the Prewitt operator, and then computing the positions of the structural edge points in those regions with a spatial-moment sub-pixel edge detection algorithm;
S5, obtaining the dynamic form identification result of the structure from the edge point cloud of each frame;
S6, calibrating the structural deformation identified by the method, using a known distance between two pixel points on the object surface in the picture together with the video motion amplification coefficient, to obtain the accurate dynamic form change of the structure.
Preferably, when the sensor is a color vision sensor, the video is converted to grayscale.
Preferably, in S3, the slight motions in the video are treated as equivalent slight brightness changes, so that they can be processed by operating directly on the pixel gray values of the video images.
Preferably, S3 specifically includes the following steps:
S31, denote the one-dimensional intensity of the video image at position x and time t by I(x, t) = f(x + δ(t)), where δ(t) is a tiny displacement; let δk(t) be the k-th frequency component of δ(t) at time t and γk the attenuation factor of the k-th component, with 0 < γk < 1, so that the effective linear amplification factor at that frequency becomes γkα; the amplified result is then expressed as:
Î(x, t) = f( x + Σk (1 + γk α) δk(t) )
S32, band-pass filtering is applied within the effectively sampled frequency range of the video according to the Nyquist–Shannon sampling theorem; for example, when the video frame rate is 60 fps, motion in the 0–30 Hz band is selected for amplification.
Preferably, the motion amplification process in S32 includes the following steps:
performing a multi-level Laplacian pyramid decomposition of the video frame by frame to obtain image sub-bands at different resolutions, and applying a different amplification factor at each resolution: the signal-to-noise ratio is lowest at the highest-resolution scale, where the smallest amplification factor is chosen, while the largest amplification factor is chosen at the lowest-resolution scale;
band-pass filtering the sub-bands obtained from the multi-scale pyramid decomposition to extract the change signal in the target frequency band for subsequent processing; for vibration amplification a wide-passband filter is used, the amplified frequency band is set manually, and the filtering is performed directly in the time domain;
applying a Taylor-series difference approximation to the filtered signal of interest and multiplying it by the set amplification factor to obtain a linearly amplified image sequence of the slight changes;
reconstructing the amplified image sequence with the pyramid, superimposing it on the input sequence, and outputting the amplified video.
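The temporal core of the steps above, band-pass filtering each pixel's intensity over time and adding the scaled band back, can be sketched as follows. This is a minimal per-pixel illustration under stated assumptions: the function name and the second-order Butterworth design are the sketch's own choices, and the pyramid decomposition and reconstruction are omitted.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def temporal_bandpass_amplify(pixel_series, fs, f_lo, f_hi, alpha):
    """Eulerian-style amplification of one pixel's intensity over time:
    band-pass the series inside [f_lo, f_hi] (which must lie below the
    Nyquist frequency fs/2) and add the scaled band back to the input."""
    nyq = fs / 2.0
    assert 0 < f_lo < f_hi < nyq, "band must lie within the Nyquist limit"
    b, a = butter(2, [f_lo / nyq, f_hi / nyq], btype="band")
    band = filtfilt(b, a, pixel_series)   # zero-phase temporal filtering
    return pixel_series + alpha * band    # motion-as-brightness, magnified

# 60 fps video -> only motion below 30 Hz can be amplified faithfully (S32)
fs = 60
t = np.arange(0, 4, 1 / fs)
tiny = 0.01 * np.sin(2 * np.pi * 5 * t)   # 5 Hz, 0.01-amplitude "motion"
out = temporal_bandpass_amplify(tiny, fs, 1.0, 15.0, alpha=10)
```

With alpha = 10, the in-band oscillation comes out roughly (1 + alpha) times larger, matching the linear amplification described above.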
Preferably, S4 specifically includes the following steps:
S41, preliminarily determining pixel-level structural edge points with the Prewitt operator:
convolving each frame image I with the transverse and longitudinal templates to obtain the approximate gradients Gx and Gy in the two directions, and from them the gradient magnitude matrix G = √(Gx² + Gy²);
pixel points whose gradient magnitude exceeds a set threshold T are retained as candidate edge points E = {(x, y) | G(x, y) > T}.
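A minimal candidate-edge pass of the kind S41 describes might look like this (the template values are the standard 3×3 Prewitt masks; the threshold and the toy image are this sketch's own choices, not taken from the patent):

```python
import numpy as np
from scipy.ndimage import convolve

# Standard 3x3 Prewitt templates for horizontal / vertical gradients
PX = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]], dtype=float)
PY = PX.T

def prewitt_edge_candidates(img, thresh):
    """Gradient magnitude and candidate-edge mask for one frame."""
    gx = convolve(img.astype(float), PX, mode="nearest")
    gy = convolve(img.astype(float), PY, mode="nearest")
    g = np.hypot(gx, gy)          # gradient magnitude matrix G
    return g, g > thresh          # candidates where G exceeds the threshold

# vertical step edge: left half dark, right half bright
img = np.zeros((8, 8))
img[:, 4:] = 100.0
g, mask = prewitt_edge_candidates(img, thresh=50.0)
```

The mask marks the two pixel columns straddling the intensity step; the spatial-moment step then refines those pixel-level candidates to sub-pixel positions.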
S42, obtaining sub-pixel level edges of the structure by using space moments:
image I is matched with four space moment templates m with the size of 5 multiplied by 500、m01、m10、m20Is convoluted to obtainCorresponding moment A00、A01、A10、A20
For in S41
Figure BDA0003529224010000035
The edge direction can be obtained by calculation
Figure BDA0003529224010000036
Distance from the centre of the form
Figure BDA0003529224010000037
Difference in gray scale
Figure BDA0003529224010000038
Optimizing final edge location coordinates by setting thresholds
Figure BDA0003529224010000039
Wherein
Figure BDA00035292240100000310
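The S42 computation for a single 5×5 window can be sketched as below. The circular-support templates, the ρ and offset relations, and the test window are this sketch's reconstruction of the standard ideal-step-edge spatial-moment method; the patent's exact templates are not reproduced in the text.

```python
import numpy as np

# 5x5 sample grid scaled so the window spans the unit circle (radius = 2 px)
ys, xs = np.mgrid[-2:3, -2:3] / 2.0
inside = (xs**2 + ys**2 <= 1.0).astype(float)  # restrict moments to the disk
dA = 0.25                                       # area of one sample cell

def subpixel_edge(window):
    """Edge angle and sub-pixel offset for one 5x5 window, using the
    ideal-step-edge spatial-moment relations (hedged reconstruction)."""
    A00 = np.sum(window * inside) * dA
    A10 = np.sum(window * xs * inside) * dA
    A01 = np.sum(window * ys * inside) * dA
    theta = np.arctan2(A01, A10)                # edge normal direction
    xr = xs * np.cos(theta) + ys * np.sin(theta)
    A10r = np.sum(window * xr * inside) * dA    # rotated first moment
    A20r = np.sum(window * xr**2 * inside) * dA # second moment along normal
    rho = (4 * A20r - A00) / (3 * A10r)         # edge distance from centre
    return theta, rho * 2.0                     # offset in pixels

win = np.zeros((5, 5))
win[:, 2:] = 100.0    # vertical step edge through the window
theta, off = subpixel_edge(win)
```

For this window the estimated normal is horizontal (θ ≈ 0) and the recovered offset lands near −0.5 px, i.e. between the last dark and first bright pixel column, illustrating the sub-pixel refinement.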
Preferably, in S5, the structural form point cloud of the first frame is taken as the reference and compared with the point clouds of the other frames; stationary points in the clouds are selected for registration, from which the overall form change of the structure is obtained.
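The registration idea can be illustrated with a translation-only sketch: the stationary points play the role of supports that do not deform, and removing the rigid shift they define leaves the structural deformation. The point values and the restriction to a pure translation are illustrative assumptions.

```python
import numpy as np

def deformation_via_stationary_points(ref_pts, cur_pts, stationary_idx):
    """Remove whole-frame (camera/rigid) drift by registering the current
    frame's edge point cloud to the reference using points known to be
    stationary, then return the per-point deformation."""
    shift = np.mean(cur_pts[stationary_idx] - ref_pts[stationary_idx], axis=0)
    return cur_pts - shift - ref_pts          # deformation after registration

ref = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 0.0]])  # supports + midspan
cur = ref + np.array([0.2, 0.1])                       # uniform camera drift
cur[2, 1] += 1.5                                       # plus midspan deflection
d = deformation_via_stationary_points(ref, cur, [0, 1])
```

After registration the supports show zero deformation and only the midspan deflection of 1.5 survives; a full implementation might estimate rotation and scale as well.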
The dynamic bridge form identification method based on computer vision provided by the embodiment of the invention realizes non-contact vision measurement: no sensor needs to be installed on the structure, the normal operation of the structure is not disturbed, and test cost is saved;
it measures the overall form of the target structure, rather than a single point on a structural member as a traditional sensor does, so the overall form change of the structure is obtained more conveniently;
by combining the Eulerian video magnification algorithm with sub-pixel edge identification, slight overall form changes of the structure can be measured accurately; the method is applicable under both artificial and ambient excitation, is economical and efficient, places low demands on instrument parameters, and can be used for structural health monitoring.
Drawings
FIG. 1 is a flowchart of the dynamic bridge form identification method based on computer vision according to an embodiment of the present invention;
FIG. 2 is the initial frame of the test-beam form change video acquired in the method according to the embodiment of the present invention;
FIG. 3 shows the acquired video after amplification by the Eulerian video magnification algorithm in the method according to the embodiment of the present invention;
FIG. 4 shows the edge identification result of the test beam obtained by computer vision in the method according to the embodiment of the present invention;
FIG. 5 compares the reconstructed vibration time history at the mid-span point of the test beam with that measured by the eddy-current displacement sensor in the method according to the embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Specific implementations of the present invention are described in detail below with reference to specific embodiments.
Example 1
As shown in fig. 1, a flowchart of a dynamic bridge form recognition method based on computer vision provided for an embodiment of the present invention includes the following steps:
S1, selecting a sensor with a suitable sampling frequency and resolution to perform video acquisition of the target structure;
S2, cropping the collected video to obtain a video of the target area; when a color vision sensor is used, the video can be converted to grayscale to reduce the amount of subsequent computation;
S3, amplifying the structural deformation in the video with the Eulerian video magnification algorithm based on brightness change; the basic principle follows the optical-flow treatment used in conventional video motion processing: under the optical-flow assumptions of spatial consistency and constant brightness, slight motion in the video is equivalent to a slight brightness change, so slight motion can be processed indirectly by operating directly on the pixel brightness (gray value) of the video images, specifically as follows:
S31, denote the one-dimensional intensity of the video image at position x and time t by I(x, t) = f(x + δ(t)), where δ(t) is a tiny displacement; let δk(t) be the k-th frequency component of δ(t) at time t and γk the attenuation factor of the k-th component (0 < γk < 1), so that the effective linear amplification factor at that frequency becomes γkα; the amplified result is then expressed as:
Î(x, t) = f( x + Σk (1 + γk α) δk(t) )
S32, band-pass filtering is applied within the effectively sampled frequency range of the video according to the Nyquist–Shannon sampling theorem; for example, if the video frame rate is 60 fps, motion in the 0–30 Hz band is selected for amplification to prevent amplification distortion;
the specific video motion amplification process is as follows: first, a multi-level Laplacian pyramid decomposition of the video is performed frame by frame to obtain image sub-bands at different resolutions (spatial scales), and a different amplification factor is applied at each resolution: the signal-to-noise ratio is lowest at the highest-resolution scale, where the smallest amplification factor is chosen, while the largest amplification factor is chosen at the lowest-resolution scale; second, the sub-bands obtained from the multi-scale pyramid decomposition are band-pass filtered to extract the change signal in the target frequency band for subsequent processing, a wide-passband filter such as a Butterworth filter generally being used for vibration amplification, with the amplified frequency band set manually and the filtering performed directly in the time domain; then, a Taylor-series difference approximation is applied to the filtered signal of interest, which is multiplied by the set amplification factor to obtain a linearly amplified image sequence of the slight changes; finally, the amplified image sequence is reconstructed with the pyramid, superimposed on the input sequence, and the amplified video is output;
S4, for the target structure in each motion-amplified frame, preliminarily determining candidate structural edge pixels with the Prewitt operator and then computing the positions of the structural edge points in those regions with the spatial-moment sub-pixel edge detection algorithm, specifically as follows:
S41, preliminarily determining pixel-level structural edge points with the Prewitt operator: convolving each frame image I with the transverse and longitudinal templates to obtain the approximate gradients Gx and Gy in the two directions, and from them the gradient magnitude matrix G = √(Gx² + Gy²); pixel points whose gradient magnitude exceeds a set threshold T are retained as candidate edge points E = {(x, y) | G(x, y) > T};
S42, obtaining the sub-pixel edges of the structure using spatial moments: convolving the image I with four 5×5 spatial-moment templates m00, m01, m10, m20 to obtain the corresponding moments A00, A01, A10, A20; for each candidate edge point from S41, the edge direction θ = arctan(A01/A10) is computed, together with the rotated first moment A′10 = A10 cos θ + A01 sin θ, the distance of the edge from the template centre ρ = (4A′20 − A00)/(3A′10), where A′20 is the second moment taken along the edge normal, and the gray-level step h = 3A′10/(2(1 − ρ²)^(3/2)); thresholds on ρ and h select the true edge points, and the final sub-pixel edge coordinates are optimized as (x + (N/2)ρ cos θ, y + (N/2)ρ sin θ) with template size N = 5;
S5, obtaining a dynamic form recognition result of the structure according to the edge recognition point cloud result of each frame, taking the first frame structure form point cloud as a reference, comparing the first frame structure form point cloud with the point clouds of other frames, selecting an immobile point in the point clouds for registration, and obtaining the surface deformation of the whole structure by integral monitoring;
s6, the distance between two pixel points on the surface of the object and the video motion amplification coefficient which are known in the picture are used, and the structural deformation identified by the method can be calibrated to obtain the accurate dynamic form change of the structure.
Example 2
A specific dynamic bridge form identification method based on computer vision is provided:
S1, a beam structure was tested under simply supported boundary conditions. The test beam was an aluminium beam of length 2.8 m, width 100 mm and thickness 20 mm, with Young's modulus 70 GPa, Poisson's ratio 0.33 and density 2700 kg/m³. An eddy-current displacement sensor was installed to measure the dynamic displacement at mid-span, and video was acquired from a suitable position so that the whole test structure lay within the field of view. The Sony IMX586 CMOS camera of a Redmi K20 mobile phone was used for video acquisition at 1080p@60fps; the initial frame of the captured video is shown in FIG. 2, and the test beam was excited by random hammer blows;
S2, cropping the video to the required size and converting it to grayscale;
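The crop-and-grayscale step can be sketched with plain NumPy (BT.601 luma weights; in practice a library conversion such as OpenCV's would be used, and the helper below is hypothetical):

```python
import numpy as np

def crop_and_gray(frame_rgb, y0, y1, x0, x1):
    """Crop the region of interest and convert to grayscale using
    ITU-R BT.601 luma weights, cutting the cost of later per-pixel work."""
    roi = frame_rgb[y0:y1, x0:x1].astype(np.float32)
    return roi @ np.array([0.299, 0.587, 0.114], dtype=np.float32)

frame = np.zeros((1080, 1920, 3), dtype=np.uint8)  # one 1080p RGB frame
frame[:, :, 1] = 255                               # pure green test frame
gray = crop_and_gray(frame, 100, 500, 200, 900)
```

The result is a single-channel float image of the target area, which is what the magnification and edge-detection stages operate on.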
S3, performing motion amplification on the video with the Eulerian video magnification algorithm, with an amplification band of 0–30 Hz and an amplification factor of 10; the amplification effect is shown in FIG. 3;
S4, performing sub-pixel edge detection by combining the Prewitt operator with spatial moments to obtain the accurate form edge of the target structure; the result for one frame is shown in FIG. 4;
S5, taking the structural form point cloud of the first frame as the reference, comparing it with the point clouds of the other frames, selecting stationary points in the clouds for registration, and obtaining the surface deformation of the whole structure by monitoring it as a whole;
S6, calibrating the form change of the test beam using the pixel distance corresponding to the 20 mm beam thickness in the video and the magnification factor of the Eulerian video magnification algorithm, to obtain the accurate deformation of the test beam; the vibration time history at mid-span was selected and compared with that measured by the displacement sensor installed on site, with the result shown in FIG. 5, demonstrating the high identification accuracy of the method.
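The calibration reduces to one scale factor. A sketch under the assumption that the amplified video scales motion by the stated amplification factor (the numbers below are illustrative except the 20 mm thickness):

```python
def calibrate_deflection(pixel_disp_amplified, known_mm, known_px, amp_factor):
    """Convert a displacement measured in pixels on the motion-amplified
    video into millimetres of true structural motion. Assumes the video
    magnification scales motion by amp_factor (the text calls this the
    amplification coefficient; in the linear EVM approximation it would
    be 1 + alpha)."""
    mm_per_px = known_mm / known_px      # e.g. 20 mm beam thickness in pixels
    return pixel_disp_amplified * mm_per_px / amp_factor

# beam 20 mm thick spans 40 px; a 22 px apparent motion at 10x amplification
true_mm = calibrate_deflection(22.0, 20.0, 40.0, 10.0)
```

Here a 22 px apparent motion corresponds to a true deflection of 1.1 mm, the kind of quantity compared against the eddy-current sensor in FIG. 5.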
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents and improvements made within the spirit and principle of the present invention are intended to be included within the scope of the present invention.

Claims (7)

1. A dynamic bridge form identification method based on computer vision is characterized by comprising the following steps:
S1, selecting a sensor with a suitable sampling frequency and resolution to perform video acquisition of the target structure;
S2, cropping the collected video to obtain a video of the target area;
S3, amplifying the structural deformation in the video with an Eulerian video magnification algorithm based on brightness change;
S4, for the target structure in each motion-amplified frame, preliminarily determining candidate structural edge pixels with the Prewitt operator, and then computing the positions of the structural edge points in those regions with a spatial-moment sub-pixel edge detection algorithm;
S5, obtaining the dynamic form identification result of the structure from the edge point cloud of each frame;
S6, calibrating the structural deformation identified by the method, using a known distance between two pixel points on the object surface in the picture together with the video motion amplification coefficient, to obtain the accurate dynamic form change of the structure.
2. The dynamic bridge form identification method based on computer vision according to claim 1, wherein, when the sensor is a color vision sensor, the video is converted to grayscale.
3. The dynamic bridge form identification method according to claim 1, wherein in S3 the slight motions in the video are treated as equivalent slight brightness changes, so that the pixel gray values of the video images are processed directly.
4. The dynamic bridge form recognition method based on computer vision of claim 3, wherein the step S3 specifically comprises the following steps:
S31, denoting the one-dimensional intensity of the video image at position x and time t by I(x, t) = f(x + δ(t)), where δ(t) is a tiny displacement, δk(t) is the k-th frequency component of δ(t) at time t and γk is the attenuation factor of the k-th component, with 0 < γk < 1, so that the effective linear amplification factor at that frequency becomes γkα, the amplified result being expressed as:
Î(x, t) = f( x + Σk (1 + γk α) δk(t) )
and S32, applying band-pass filtering within the effectively sampled frequency range of the video according to the Nyquist–Shannon sampling theorem, motion in the 0–30 Hz band being selected for amplification when the video frame rate is 60 fps.
5. The dynamic bridge form recognition method based on computer vision of claim 4, wherein the motion amplification process in S32 comprises the following steps:
performing a multi-level Laplacian pyramid decomposition of the video frame by frame to obtain image sub-bands at different resolutions, and applying a different amplification factor at each resolution: the smallest amplification factor is chosen at the highest-resolution scale, where the signal-to-noise ratio is lowest, and the largest amplification factor at the lowest-resolution scale;
band-pass filtering the sub-bands obtained from the multi-scale pyramid decomposition to extract the change signal in the target frequency band for subsequent processing, a wide-passband filter being used for vibration amplification, with the amplified frequency band set manually and the filtering performed directly in the time domain;
applying a Taylor-series difference approximation to the filtered signal of interest and multiplying it by the set amplification factor to obtain a linearly amplified image sequence of the slight changes;
reconstructing the amplified image sequence with the pyramid, superimposing it on the input sequence, and outputting the amplified video.
6. The dynamic bridge form identification method based on computer vision according to claim 1, wherein S4 specifically comprises the following steps:
S41, preliminarily determining pixel-level structural edge points with the Prewitt operator:
convolving each frame image I with the transverse and longitudinal templates to obtain the approximate gradients Gx and Gy in the two directions, and from them the gradient magnitude matrix G = √(Gx² + Gy²);
pixel points whose gradient magnitude exceeds a set threshold T being retained as candidate edge points E = {(x, y) | G(x, y) > T};
S42, obtaining the sub-pixel edges of the structure using spatial moments:
convolving the image I with four 5×5 spatial-moment templates m00, m01, m10, m20 to obtain the corresponding moments A00, A01, A10, A20;
for each candidate edge point from S41, computing the edge direction θ = arctan(A01/A10), the rotated first moment A′10 = A10 cos θ + A01 sin θ, the distance of the edge from the template centre ρ = (4A′20 − A00)/(3A′10), where A′20 is the second moment taken along the edge normal, and the gray-level step h = 3A′10/(2(1 − ρ²)^(3/2));
thresholds on ρ and h selecting the true edge points, whose final sub-pixel edge coordinates are optimized as (x + (N/2)ρ cos θ, y + (N/2)ρ sin θ) with template size N = 5.
7. The method of claim 1, wherein in step S5 the structural form point cloud of the first frame is taken as the reference and compared with the point clouds of the other frames, stationary points in the clouds being selected for registration to obtain the overall structural form change.
CN202210201036.0A 2022-03-03 2022-03-03 Dynamic bridge form identification method based on computer vision Pending CN114596525A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202210201036.0A CN114596525A (en) 2022-03-03 2022-03-03 Dynamic bridge form identification method based on computer vision
GB2215178.1A GB2616322B (en) 2022-03-03 2022-10-14 Computer vision-based dynamic bridge shape recognition method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210201036.0A CN114596525A (en) 2022-03-03 2022-03-03 Dynamic bridge form identification method based on computer vision

Publications (1)

Publication Number Publication Date
CN114596525A true CN114596525A (en) 2022-06-07

Family

ID=81815156

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210201036.0A Pending CN114596525A (en) 2022-03-03 2022-03-03 Dynamic bridge form identification method based on computer vision

Country Status (2)

Country Link
CN (1) CN114596525A (en)
GB (1) GB2616322B (en)

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5070401A (en) * 1990-04-09 1991-12-03 Welch Allyn, Inc. Video measurement system with automatic calibration and distortion correction
WO2015167537A2 (en) * 2014-04-30 2015-11-05 Halliburton Energy Services, Inc. Subterranean monitoring using enhanced video
CN106530285B (en) * 2016-10-21 2019-04-09 国网山东省电力公司电力科学研究院 A kind of transmission line part recognition methods based on GPU and the processing of CPU blended data
CN109559323A (en) * 2018-11-16 2019-04-02 重庆邮电大学 A method of picture edge characteristic is enhanced based on improved prewitt operator
CN109580137B (en) * 2018-11-29 2020-08-11 东南大学 Bridge structure displacement influence line actual measurement method based on computer vision technology
CN110595601B (en) * 2019-04-26 2021-10-15 深圳市豪视智能科技有限公司 Bridge vibration detection method and related device
CN110599510A (en) * 2019-08-02 2019-12-20 中山市奥珀金属制品有限公司 Picture feature extraction method
CN114841965B (en) * 2022-04-30 2023-08-01 中建三局第一建设工程有限责任公司 Steel structure deformation detection method and device, computer equipment and storage medium

Also Published As

Publication number Publication date
GB202215178D0 (en) 2022-11-30
GB2616322A (en) 2023-09-06
GB2616322B (en) 2024-02-21


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination