CN114049586A - Vibration frequency automatic identification method based on computer vision and power spectrum transfer ratio - Google Patents
- Publication number: CN114049586A (application CN202111276493.8A)
- Authority
- CN
- China
- Prior art keywords
- peak
- transfer ratio
- power spectrum
- representing
- matrix
- Prior art date
- Legal status: Granted (an assumption, not a legal conclusion; Google has not performed a legal analysis)
Classifications
- G01H17/00—Measuring mechanical vibrations or ultrasonic, sonic or infrasonic waves, not provided for in the preceding groups
- G01H9/00—Measuring mechanical vibrations or ultrasonic, sonic or infrasonic waves by using radiation-sensitive means, e.g. optical means
- G06F18/22—Pattern recognition; analysing; matching criteria, e.g. proximity measures
- G06N3/045—Neural networks; combinations of networks
- G06N3/08—Neural networks; learning methods
Abstract
The invention provides an automatic vibration-frequency identification method based on computer vision and the power spectrum transfer ratio, comprising the following steps: automatically outputting the vibration signal of each effective pixel in the ROI based on an optical flow method; constructing a transfer ratio function and a transfer ratio matrix; locating the peaks of the peak curve by a template matching method; and performing peak selection with a peak detector. Based on deep-learning-enhanced image processing and the power spectrum transfer ratio, the invention automates the whole process from vibration-signal extraction to frequency identification. Compared with traditional modal identification methods, the method offers wide applicability, high identification precision and strong robustness without excessive human intervention, opening greater possibilities for real-time structural health monitoring.
Description
Technical Field
The invention relates to the field of engineering structure health monitoring, in particular to a vibration frequency automatic identification method based on computer vision and power spectrum transfer ratio.
Background
Modal parameters (e.g., natural frequency, damping ratio, mode shape) are inherent characteristics of a structure; they can be obtained by analyzing measured structural vibration signals and are often used to construct key indicators for evaluating structural performance. Operational modal analysis (OMA) can identify the modal parameters of a structure from the system response alone, without measuring the input. However, most existing OMA methods require manual intervention and depend on the judgment and experience of the engineer. Thus, even though structures equipped with Structural Health Monitoring Systems (SHMS) can obtain real-time vibration data, implementing automated, real-time modal identification remains challenging.
Structural vibration measurement is the basis of structural health monitoring and can be divided into two main categories: contact and non-contact measurement. Contact measurement uses sensors attached to the measurement target, such as accelerometers and strain gauges. Non-contact measurement uses Laser Doppler Vibrometers (LDVs), microwave interferometry techniques and cameras to acquire vibration information. Photogrammetry techniques can be classified as Point Tracking (PT), Digital Image Correlation (DIC) or target-free methods, depending on the type of target used.
To further determine the frequencies of the system from the vibration response, stabilization diagrams derived from time-domain system identification are commonly used for automatic modal analysis, e.g., stochastic subspace identification (SSI), the eigensystem realization algorithm (ERA) and PolyMAX, but such approaches require a priori information. Peak picking based on frequency-domain estimation of structural modal parameters is widely used because of its simplicity. A transmissibility-based OMA method (TOMA) has been proposed in the prior art; it is insensitive to the nature of the system input and requires no white-noise assumption. Traditionally, however, peak selection has depended heavily on manual operation, making it impractical for real-time and long-term use. While progress has been made towards automatic peak picking, existing implementations rely on a predefined threshold and ignore peaks below it, thereby increasing the number of missed modes. In addition, pseudo peaks may be selected, because the peak shapes caused by environmental influences are not constant in height and position.
Although computer vision techniques have great potential for measuring vibration signals and identifying natural frequencies, automating modal parameter identification remains challenging. Optical flow analysis has received great attention in computer vision research, yet when optical flow methods are used to extract structural vibration signals, the optimal selection of effective pixels remains a major obstacle in practical vibration measurement. For pixel selection, conventional edge detection requires manual parameter tuning for different measurement conditions, and the internal texture of an object is often mistaken for an edge. Moreover, when processing measurement signals from different time periods, the key peaks must be re-selected manually each time, so the approach does not transfer well to real-time, continuous signal processing. Finally, the results of peak picking driven by the power spectral density transfer ratio (PSDT) depend strongly on personal judgment and subjective experience.
Disclosure of Invention
The invention aims to overcome the above defects in the prior art by providing an automatic vibration-frequency identification method based on computer vision and the power spectrum transfer ratio. The method automates the whole process from vibration-signal extraction to frequency identification without manual intervention, effectively eliminates false modes, accurately identifies dense modes, and is robust against noise interference, giving it broad application prospects in real-time structural health monitoring.
The invention adopts the following technical scheme:
the vibration frequency automatic identification method based on computer vision and power spectrum transfer ratio comprises the following steps:
defining an ROI in the original image, inputting the ROI of the image sequence into a trained HED network to identify effective pixels, and outputting the vibration signal of each effective pixel with the Horn-Schunck optical flow method;
taking the extracted vibration signals as the system input, introducing a reference point to establish, in the frequency domain, the relation between the input-excitation and output-response power spectral densities, deriving the single-reference-point power spectrum transfer ratio function, combining the transfer ratio functions corresponding to different reference points into a power spectrum transfer ratio matrix, creating a peak curve of the PSDT rational function based on the PSDT matrix, and screening the peak curve for possible system poles;
converting the frequency domain graph into an image format as a source image, comparing the source image with the template image by adopting a template matching algorithm, positioning a peak area of the source image, calculating the matching degree by using a normalized cross-correlation model, and determining the peak position;
and identifying a physical peak value by using a peak value detector according to the peak value position, and finally determining the vibration frequency corresponding to the physical peak value.
Specifically, outputting the vibration signal of each effective pixel with the Horn-Schunck optical flow method comprises the following steps:
firstly, defining the brightness value in the image sequence as B(x, y, t), where x is the abscissa, y the ordinate and t time;

the brightness-constancy constraint gives:

Bxu + Byv + Bt = 0

where u and v represent the velocity flow in the x and y directions, and Bx, By and Bt are the partial derivatives of the brightness with respect to x, y and t, respectively;

the constraint error is characterized by Eb:

Eb = Bxu + Byv + Bt

while the deviation of the optical-flow variables from the smoothness hypothesis is quantified by Es;

the total deviation of the optical flow method is derived from the complete constraints:

E = ∬ (Eb² + α²·Es) dx dy

where E is the total deviation and α is a weight coefficient; the optimal solution of u and v is sought so that the total deviation is minimized;

finally, the values of u and v for each effective pixel are determined iteratively.
Specifically, introducing a reference point, establishing in the frequency domain the relation between the input-excitation and output-response power spectral densities, deriving the single-reference-point power spectrum transfer ratio function, and combining the transfer ratio functions corresponding to different reference points into a power spectrum transfer ratio matrix comprises the following steps:

introducing a reference point, the relationship between the input excitation and the output response power spectral density is established in the frequency domain as:

Syy(s) = H(s)·Guu(s)·H*(s)

where Syy(s) is the power spectral density matrix of the structural response, Guu(s) is the input power spectral density matrix, H(s) is the frequency response function matrix and H*(s) is the complex conjugate transpose of H(s);

when s approaches the r-th system pole λr, the convergence solution of the single-reference-point PSDT function is obtained:

lim(s→λr) T_ij^q(s) = lim(s→λr) S_iq(s)/S_jq(s) = φir/φjr

where T_ij^q(s) is the transfer ratio between outputs yi(t) and yj(t) with the same transfer output yq(t); S_iq(s) is the power spectral density between outputs yi(t) and yq(t); S_jq(s) is that between yj(t) and yq(t); H_ik(s) is the frequency response function between output yi(t) and input uk, k = 1, 2, …, Nr; H*_qn(s) is the complex conjugate of the frequency response function H_qn(s) between output yq(t) and input un, n = 1, 2, …, Nr; Nr is the number of system inputs; G_kn(s) is the power spectral density between input degrees of freedom k and n, n = 1, 2, …, Nr; and φir and φjr are the mode-shape components corresponding to degrees of freedom i and j;

subtracting two power spectrum transfer ratio functions with different transfer outputs under the same vibration condition satisfies:

lim(s→λr) ΔT_ij^(q1,q2)(s) = lim(s→λr) [T_ij^q1(s) − T_ij^q2(s)] = 0

where λr is the r-th system pole and ΔT_ij^(q1,q2)(s) is the difference of the transfer ratio functions T_ij^q1(s) and T_ij^q2(s);

the system poles are the points at which the PSDT difference function is zero; combining the PSDT difference functions of different measurement degrees of freedom gives the PSDT rational function ΔT^(−1):

ΔT^(−1)(s) = 1 / Σ(i=1..N0) Σ(j=1..N0) |ΔT_ij^(q1,q2)(s)|

where N0 is the number of system outputs;

finally, the power spectrum transfer ratios corresponding to different reference points are combined to form the power spectrum transfer ratio matrix.
Specifically, comparing the source image with the template image by a template matching algorithm, locating the peak region of the source image, computing the degree of matching with a normalized cross-correlation model, and determining the peak position: for a source image f(x, y) of size M × N and a template image w(u, v) of size J × K, the normalized cross-correlation coefficient ξ ∈ [0, 1] characterizes the image correlation.
Specifically, identifying physical peaks with the peak detector comprises: the peak detector, trained with a quantitative definition of a peak, satisfies:

Wp = NW / MW ≤ Wmax
Hp = NH / MH ≥ Hmin

where NW is the number of pixels across the width of each peak, NH the number of pixels across its height, Wp and Hp the relative width and relative height, Wmax and Hmin the maximum relative width and minimum relative height, MW the total number of pixels of the source image along the x-axis, and MH the total number of pixels along the y-axis.
As can be seen from the above description, compared with the prior art the present invention has the following advantages:
(1) The invention provides an automated method for signal extraction and frequency identification: a transfer ratio matrix is first built from the structural vibration response measured by deep-learning-enhanced computer vision, the system frequencies are then determined from that response with a peak picking method driven by the power spectral density transfer ratio (PSDT), and a fully automatic vibration-frequency identification method based on deep-learning-enhanced computer vision and the power spectrum transfer ratio is thereby realized.
(2) Compared with traditional modal identification methods, the structural response matrix constructed from the power spectrum transfer ratio is insensitive to changes in environmental factors, is strongly robust, and is better suited to identifying structural modal parameters under operating conditions.
(3) Compared with existing automatic modal identification methods, the method runs automatically without manual intervention, effectively eliminates false modes, accurately identifies dense modes, and has broad application prospects in real-time SHM.
Drawings
FIG. 1 is a schematic diagram of an ASCE-Benchmark model provided by an embodiment of the present invention; wherein, (a) is a reference model; (b) connecting the beam column; (c) is an accelerometer;
FIG. 2 is a peak curve of the PSDT matrix driver according to an embodiment of the present invention;
fig. 3 is a diagram illustrating peak region positioning based on a template matching algorithm according to an embodiment of the present invention;
fig. 4 is a result of peak automatic identification based on a peak detector according to an embodiment of the present invention;
FIG. 5 is a graph of the stability of the PSDT drive provided by an embodiment of the present invention;
FIG. 6 shows the in-situ experimental setup on the Wuyuan Bay bridge; wherein (a) is the experimental layout; (b) the camera field of view; (c) the ROI map;
FIG. 7 provides measurements of bridge cable vibration signals according to an embodiment of the present invention; wherein, (a) the measurement result of the bridge cable vibration signal obtained by the optical flow method; (b) measuring a bridge cable vibration signal obtained by the contact sensor;
FIG. 8 shows the automatic peak-selection results for the bridge cable structure, according to an embodiment of the present invention.
Detailed Description
The invention is further described below by means of specific embodiments.
The invention provides a vibration frequency automatic identification method based on computer vision and power spectrum transfer ratio under a structural health monitoring system, which mainly comprises the following steps:
step S1, automatically outputting a vibration signal of each effective pixel in the ROI based on the optical flow method: defining an ROI of an original image, inputting the ROI of an image sequence into a trained HED network, identifying effective pixels, and outputting a vibration signal of each effective pixel by adopting a horns-Schunk optical flow method;
a key prerequisite for optical flow analysis is the selection of valid pixels, which is motivated by edge detection methods. Firstly, defining a measurement target as an ROI in an original image, inputting the ROI of a first frame of an image sequence into a trained Hollistically-Nested Edge Detection (HED) network, and storing effective pixels. Then, the vibration signal of each effective pixel in the ROI is automatically output using the horns-Schunk optical flow.
In the optical flow identification process, for a single-channel image carrying brightness information, the brightness value can be stored as a two-dimensional quantity B(x, y) with coordinates x and y. Since the image signal of a given frame changes with time t within the sequence, the data takes the form B(x, y, t). For a time increment Δt tending to zero, the brightness change over that period can be taken as approximately 0:

B(x + Δx, y + Δy, t + Δt) = B(x, y, t) (1)

Applying the chain rule to the above equation yields:

Bx·(dx/dt) + By·(dy/dt) + Bt = 0 (2)

where u and v represent the velocity flow in the x and y directions:

u = dx/dt, v = dy/dt (3)

Substituting equation (3) into equation (2) gives equation (4):

Bxu + Byv + Bt = 0 (4)

where Bx, By and Bt are the partial derivatives of the brightness with respect to x, y and t, respectively.
Since a statically determinate problem requires the number of constraints to equal the number of variables, and only 1 constraint is available for the two optical flow variables (u and v), the equation cannot be solved directly. An additional constraint can be introduced through the smoothness assumption (meaning the brightness transitions across the whole image are smooth). This smoothness assumption can be expressed as:

∂u/∂x, ∂u/∂y, ∂v/∂x, ∂v/∂y → 0 (5)

By defining Es:

Es = (∂u/∂x)² + (∂u/∂y)² + (∂v/∂x)² + (∂v/∂y)² (6)

the deviation of the optical flow variables from the smoothness assumption can be quantified, as shown in equation (6), while Eb characterizes the sum of the constraint errors, as shown in equation (7):

Eb = Bxu + Byv + Bt (7)
The total deviation of the optical flow method can be derived from the complete constraints:

E = ∬ (Eb² + α²·Es) dx dy (8)

where E is the total deviation, which is to be minimized, and α is a weight coefficient: as α increases, the weight of global smoothness over the optical flow variables also increases, and vice versa.

The process of finding the optimal solution for u and v can thus be regarded as minimizing the total deviation. The iterative equations (9) can be derived from equation (8):

u⁽ⁿ⁺¹⁾ = ū⁽ⁿ⁾ − Bx·(Bx·ū⁽ⁿ⁾ + By·v̄⁽ⁿ⁾ + Bt) / (α² + Bx² + By²)
v⁽ⁿ⁺¹⁾ = v̄⁽ⁿ⁾ − By·(Bx·ū⁽ⁿ⁾ + By·v̄⁽ⁿ⁾ + Bt) / (α² + Bx² + By²) (9)

where ū⁽ⁿ⁾ and v̄⁽ⁿ⁾ are local neighbourhood averages of u and v at iteration n.
Finally, the values of u and v for each active pixel can be determined iteratively.
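The iterative scheme above can be sketched in a few lines of NumPy. This is a minimal sketch, assuming central-difference derivative stencils and a four-neighbour average with periodic borders (implementation details the patent does not specify), not the patent's implementation:

```python
import numpy as np

def horn_schunck(frame1, frame2, alpha=1.0, n_iter=100):
    """Iteratively estimate dense optical flow (u, v) between two
    grayscale frames by minimising E = sum(Eb^2 + alpha^2 * Es)."""
    f1 = np.asarray(frame1, dtype=float)
    f2 = np.asarray(frame2, dtype=float)
    # Partial derivatives Bx, By, Bt of the brightness
    Bx = (np.gradient(f1, axis=1) + np.gradient(f2, axis=1)) / 2.0
    By = (np.gradient(f1, axis=0) + np.gradient(f2, axis=0)) / 2.0
    Bt = f2 - f1
    u = np.zeros_like(f1)
    v = np.zeros_like(f1)

    def neighbour_mean(w):
        # Local average of the four direct neighbours (periodic border)
        return (np.roll(w, 1, 0) + np.roll(w, -1, 0) +
                np.roll(w, 1, 1) + np.roll(w, -1, 1)) / 4.0

    for _ in range(n_iter):
        u_bar, v_bar = neighbour_mean(u), neighbour_mean(v)
        # Update from the Euler-Lagrange equations of the total deviation E
        common = (Bx * u_bar + By * v_bar + Bt) / (alpha**2 + Bx**2 + By**2)
        u = u_bar - Bx * common
        v = v_bar - By * common
    return u, v
```

As a sanity check, two identical frames give Bt = 0 everywhere, so the estimated flow stays exactly zero.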
Step S2, constructing the transfer ratio function and transfer ratio matrix: introducing a reference point to establish, in the frequency domain, the relation between the input-excitation and output-response power spectral densities; deriving the single-reference-point power spectrum transfer ratio function; combining the transfer ratio functions corresponding to different reference points into a power spectrum transfer ratio matrix; and building the peak curve of the power spectrum transfer ratio rational function from that matrix;
the structural vibration signal can be obtained through image processing based on the horns-Schunk optical flow, in order to further determine the frequency of the system according to the vibration response, a peak value picking method driven by a power spectrum transfer ratio (PSDT) is adopted, the anti-noise performance is improved, and a transfer ratio function and a transfer ratio matrix are created.
For a multiple degree of freedom system, by introducing a reference point, the relationship between the input excitation and output response power spectral density in the frequency domain can be established as follows:
Syy(s)=H(s)Guu(s)H*(s) (10)
where Syy(s) is the power spectral density matrix of the structural response, Guu(s) is the input power spectral density matrix, H(s) is the frequency response function matrix and H*(s) is the complex conjugate transpose of H(s).

When s approaches the r-th system pole λr, the convergence solution of the single-reference-point PSDT function is derived as equation (11):

lim(s→λr) T_ij^q(s) = lim(s→λr) S_iq(s)/S_jq(s) = φir/φjr (11)

where T_ij^q(s) is the transfer ratio between outputs yi(t) and yj(t) with the same transfer output yq(t); S_iq(s) is the power spectral density between outputs yi(t) and yq(t); S_jq(s) that between yj(t) and yq(t); H_ik(s) is the frequency response function between output yi(t) and input uk (k = 1, 2, …, Nr); H*_qn(s) is the complex conjugate of the frequency response function H_qn(s) between output yq(t) and input un (n = 1, 2, …, Nr); Nr is the number of system inputs; G_kn(s) is the power spectral density acting on degrees of freedom k and n (n = 1, 2, …, Nr); and φir and φjr are the mode-shape components corresponding to degrees of freedom i and j.
Subtracting two PSDT functions with different transfer outputs under the same vibration condition satisfies equation (12):

lim(s→λr) ΔT_ij^(q1,q2)(s) = lim(s→λr) [T_ij^q1(s) − T_ij^q2(s)] = 0 (12)

where λr is the r-th system pole and ΔT_ij^(q1,q2)(s) is the difference of the transfer ratio functions T_ij^q1(s) and T_ij^q2(s).
The system poles are the points at which the PSDT difference function equals zero. Combining the PSDT difference functions of different measurement degrees of freedom gives the PSDT rational function ΔT^(−1) as equation (13):

ΔT^(−1)(s) = 1 / Σ(i=1..N0) Σ(j=1..N0) |ΔT_ij^(q1,q2)(s)| (13)

where N0 is the number of system outputs.
Based on the concept of the power spectrum transfer ratio and its behaviour at the system poles, the PSDT matrix is formed by combining the power spectrum transfer ratios corresponding to different reference points.
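The construction of a PSDT-driven peak curve can be illustrated with a short sketch. It assumes a plain segment-averaged periodogram for the cross power spectral densities and takes the rational function as the reciprocal of the summed transfer ratio differences over all output pairs and reference pairs; both are illustrative choices, since the patent does not fix the estimator here:

```python
import numpy as np

def cross_psd(x, y, nperseg):
    """Segment-averaged cross power spectral density estimate
    (a plain periodogram average; other estimators are possible)."""
    n_seg = len(x) // nperseg
    acc = np.zeros(nperseg // 2 + 1, dtype=complex)
    for k in range(n_seg):
        X = np.fft.rfft(x[k * nperseg:(k + 1) * nperseg])
        Y = np.fft.rfft(y[k * nperseg:(k + 1) * nperseg])
        acc += X * np.conj(Y)
    return acc / max(n_seg, 1)

def psdt_peak_curve(Y, nperseg=256):
    """PSDT-driven peak curve from outputs Y (n_channels, n_samples):
    for every output pair (i, j) and pair of transfer outputs (q1, q2),
    accumulate |T_ij^q1 - T_ij^q2|; the reciprocal of the sum peaks at
    the system poles, where all transfer ratios converge to phi_ir/phi_jr."""
    n_ch = Y.shape[0]
    denom = np.zeros(nperseg // 2 + 1)
    eps = 1e-12
    for i in range(n_ch):
        for j in range(n_ch):
            if i == j:
                continue
            for q1 in range(n_ch):
                for q2 in range(q1 + 1, n_ch):
                    T1 = cross_psd(Y[i], Y[q1], nperseg) / (cross_psd(Y[j], Y[q1], nperseg) + eps)
                    T2 = cross_psd(Y[i], Y[q2], nperseg) / (cross_psd(Y[j], Y[q2], nperseg) + eps)
                    denom += np.abs(T1 - T2)
    return 1.0 / (denom + eps)
```

On a simulated single-mode response (each channel carrying the same sinusoid scaled by a mode-shape component, plus independent noise), the curve peaks at the frequency bin of the mode.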
Step S3, positioning by a template matching method to obtain the peak of the peak value curve: converting the frequency domain graph into an image format as a source image, comparing the source image with the template image by adopting a template matching algorithm, positioning a peak area of the source image, calculating the matching degree by using a normalized cross-correlation model, and determining the peak position;
First, the frequency-domain plot is converted into image format as the source image. A template matching algorithm then slides the template over the source image to search for targets similar to the template image and thereby locate small candidate regions; the degree of matching between template and source is computed with a normalized cross-correlation (NCC) model as the fitness function, and the best-match position is determined. For a source image f(x, y) of size M × N and a template image w(u, v) of size J × K, the NCC model can be expressed as equation (14):

ξ(x, y) = Σ(u=1..J) Σ(v=1..K) [w(u, v) − w̄]·[f(x + u, y + v) − f̄(x, y)] / { Σ(u,v) [w(u, v) − w̄]² · Σ(u,v) [f(x + u, y + v) − f̄(x, y)]² }^(1/2) (14)

where w̄ is the mean of the template, f̄(x, y) is the mean of the image patch under the template, and ξ ∈ [0, 1] is the image correlation coefficient; the closer ξ is to 1, the higher the degree of correlation.
All possible peak regions satisfying equation (14) can be obtained by a template matching algorithm.
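The matching step can be sketched as a direct normalized cross-correlation scan. The brute-force double loop below is an illustrative, unoptimized sketch of an NCC model of this kind (in practice an FFT-based implementation or OpenCV's `matchTemplate` with `TM_CCOEFF_NORMED` would be used):

```python
import numpy as np

def ncc_match(source, template):
    """Slide `template` over `source` and return the map of normalized
    cross-correlation coefficients; higher values mean better matches."""
    J, K = template.shape
    M, N = source.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    out = np.zeros((M - J + 1, N - K + 1))
    for x in range(M - J + 1):
        for y in range(N - K + 1):
            patch = source[x:x + J, y:y + K]
            p = patch - patch.mean()
            denom = np.sqrt((p ** 2).sum()) * t_norm
            out[x, y] = (p * t).sum() / denom if denom > 0 else 0.0
    return out
```

When the template is cut directly out of the source, the coefficient reaches exactly 1 at the cut location, so the argmax of the map recovers the peak region's position.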
Step S4, performing peak value selection using a peak value detector: identifying a physical peak value by using a peak value detector according to the peak value position, and finally determining the vibration frequency corresponding to the physical peak value;
After the peak positions are confirmed by template matching, a set of screened peak regions is obtained, and the matched regions are then judged one by one by the peak detector. The peak detector is trained on a modified GoogLeNet architecture: the graphic features of the peak image samples are relatively simple, while the original GoogLeNet is deep and prone to vanishing and exploding gradients. Combining the characteristics of the peak images, the learning rate of the shallow layers is set to zero to improve transfer-learning efficiency, the classification layer of the original model is replaced with a new classification layer for peak images, and the training images are randomly flipped along the vertical axis; this data augmentation effectively prevents overfitting of the network.
The proposed peak detector is trained with a quantitative definition of a peak, satisfying equations (15) and (16):

Wp = NW / MW ≤ Wmax (15)
Hp = NH / MH ≥ Hmin (16)

where NW is the number of pixels across the width of each peak, NH the number of pixels across its height, Wp and Hp the relative width and relative height, Wmax and Hmin the maximum relative width and minimum relative height, MW the total number of pixels of the source image along the x-axis, and MH the total number of pixels along the y-axis.
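The geometric screen described above reduces to two ratio tests. In the sketch below, the threshold values `w_max` and `h_min` are illustrative placeholders; the patent defines the quantities but does not publish numeric thresholds in this passage:

```python
def is_peak(n_w, n_h, m_w, m_h, w_max=0.05, h_min=0.1):
    """Geometric screen for a candidate peak region: a physical peak is
    narrow relative to the image width and tall relative to its height.
    Thresholds are illustrative, not the patent's values."""
    w_p = n_w / m_w   # relative width  Wp = NW / MW
    h_p = n_h / m_h   # relative height Hp = NH / MH
    return w_p <= w_max and h_p >= h_min
```

A region failing either test (too wide, or too short) is treated as a non-peak before the CNN-based detector is even consulted.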
Training the peak detector requires specific training samples: peaks are selected from the PSDT-driven peak curves obtained from the ASCE-Benchmark model to construct the detector's database. The data were divided into a training set and a validation set, with 70% of the samples used for training and the remaining 30% for validation. During training, samples whose peak geometry satisfies the relative-width (Wp) and relative-height (Hp) criteria are labelled as peaks; otherwise they are labelled as non-peaks. Among the 30% validation samples, the peak detector failed to distinguish physical from false peaks for only 9, indicating that the trained detector can be applied to the test structure and automatically determine whether a selected peak is physical or false. The vibration frequency corresponding to each physical peak is finally obtained and used as the result of the automatic identification method.
Example 1: automatic modal identification of ASCE-Benchmark model
In order to verify the feasibility of the automatic peak-picking method, the ASCE-Benchmark model is selected for verification. As shown in fig. 1, the benchmark model is a 1/3-scale steel frame with 4 stories, a 2 × 2 bay plan of 2.5 m × 2.5 m, a story height of 0.9 m, and 8 diagonal braces per story; the beam-column connections are fixed, while the braces are flexibly connected to the structure and can be freely removed and reinstalled. Nine vertical columns are bolted to the steel foundation frame, and the lower flanges of the two foundation beams are encased in concrete, securing the steel frame to the concrete slab. The members are grade-300W hot-rolled steel, specially designed for test structures. 15 accelerometers were placed throughout the frame to measure the response of the test structure: two accelerometers measure the north-south motion of the structure and one accelerometer measures the east-west motion. A series of ambient and forced excitations were applied to the structure, including weight and shaker tests.
The test condition is ambient excitation in the undamaged state, and the method is tested with acceleration data collected at a sampling frequency of 200 Hz. The PSDT matrix is calculated and constructed by equation (12), and the PSDT rational function is calculated using equation (13); the peak curve of the PSDT rational function is plotted in fig. 2. As shown in fig. 3, after the template matching algorithm is applied, regions with distinct peak features are automatically matched and all matched peak regions are highlighted by rectangular boxes; the two circled peaks in the PSDT-driven peak curve satisfy equations (15) and (16), but their amplitudes are relatively low.
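The construction of single-reference transfer ratios from measured responses can be sketched as follows. This is a simplified Welch-style stand-in for equations (12) and (13), not the patent's exact estimator; the segment count, the rectangular window, and the function names are assumptions:

```python
import numpy as np

def cross_psd(x, y, nseg=8):
    """Averaged cross power spectral density of two signals (Welch-style,
    rectangular window); a simplified stand-in for the PSD estimation step."""
    n = len(x) // nseg
    acc = np.zeros(n // 2 + 1, dtype=complex)
    for k in range(nseg):
        xs = np.fft.rfft(x[k * n:(k + 1) * n])
        ys = np.fft.rfft(y[k * n:(k + 1) * n])
        acc += xs * np.conj(ys)  # averaging reduces the variance of the estimate
    return acc / nseg

def psdt(y_i, y_j, y_q):
    """Single-reference power spectrum transfer ratio T_ij^q(s) = S_iq(s) / S_jq(s)."""
    return cross_psd(y_i, y_q) / cross_psd(y_j, y_q)
```

On a synthetic two-output response dominated by one 10 Hz mode with mode-shape ratio 2:1, the transfer ratio evaluated at the 10 Hz bin approaches 2, consistent with the convergence property used in the text.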
Subsequently, to distinguish whether the matched peak regions are physical peaks or false peaks, the trained peak detector is applied to the Benchmark model, checking the shape of each matched peak region; only the peaks labeled as peaks by the peak detector are retained as possible system poles. Fig. 4 shows the result of automatic confirmation using the peak detector, with the selected peaks marked by dots, and table 1 lists the results of the automatic identification method. The results show that the method is able to identify the closely spaced modes at 7.41 Hz and 7.79 Hz. In addition, the two false peak regions (circled in fig. 3) are eliminated, indicating that the peak detector improves identification accuracy.
TABLE 1 Modal parameters obtained by ASCE-Benchmark model identification
For comparison, a PSDT-driven stabilization diagram was also applied to the model; fig. 5 plots its results, showing five automatically generated stable axes. Compared with the proposed deep-learning-enhanced image recognition technique, this method can also accurately identify the natural frequencies of the structure, including the two closely spaced modes at 7.41 Hz and 7.69 Hz. However, as the system order gradually increases, the calculation for generating the stabilization diagram must be repeated, so automatic identification of the stable axes is time-consuming. The deep-learning-enhanced image recognition technique achieves automatic frequency identification without generating a stabilization diagram and is well suited to real-time, continuous signal processing.
In this verification, acceleration data from the Benchmark model are used as the system input, and a well-designed program runs automatically to identify the modal parameters without manual intervention. The identification results show that the method can effectively eliminate false modes and accurately identify closely spaced modes.
Example 2: application of cable-stayed bridge cable
The Xiamen Wuyuan Bridge shown in fig. 6 is a steel structure whose operating environment is close to the sea; long-term exposure to salt fog causes relatively severe corrosion of many of the bridge's components and can significantly affect the bridge's design service life. The health assessment of the bridge has therefore drawn the attention of many scholars and relevant departments, and selecting this bridge as the subject of a field test has strong practical significance.
Because the camera's field of view is limited, one cable on the bridge is selected as the measurement object in the test to ensure measurement accuracy. To keep the different test methods as consistent as possible, a vibration sensor (highlighted by the blue circle) is installed beneath the selected cable; the sampling rates of the vibration sensor and the camera are both preset to 100 Hz, and the ROI highlighted by the red rectangle is 72 × 86 pixels, as shown in fig. 6.
First, the ROI (region of interest) is defined in the image, and then the edges of the measured object are extracted by HED (holistically-nested edge detection), so that the HED algorithm is combined with the optical flow method to automatically extract the vibration signal. Figs. 7(a) and 7(b) show the time histories of the bridge-cable vibration obtained by the proposed method and by the acceleration sensor, respectively. As shown, under ambient excitation the vibration amplitude is essentially stable, and the amplitude acquired by the optical flow method is essentially the same as that of the contact sensor. It is worth noting that the PSDT rational function requires multiple measurement points, so the image recognition technique conveniently avoids the need to install multiple sensors.
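Collapsing the per-pixel optical-flow output into a single vibration time history might look like the following sketch; averaging over the HED edge pixels is an illustrative assumption, not the patent's exact rule:

```python
import numpy as np

def vibration_signal(flows, edge_mask):
    """Collapse per-pixel optical-flow displacements into one vibration signal.

    `flows` has shape (T, H, W): per-frame displacement of each pixel.
    `edge_mask` (H, W, bool) marks the HED-detected edge pixels; averaging
    over them yields a single displacement time history for the cable.
    """
    return flows[:, edge_mask].mean(axis=1)
```

Averaging across many edge pixels suppresses the pixel-level noise of the optical-flow estimate while preserving the common vibration component.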
Next, a peak curve is plotted from the PSDT rational function, and peaks are automatically detected using the proposed PSDT-driven peak-picking method. In the first stage, all possible peak regions on the source image are located by template matching and highlighted with rectangular boxes (fig. 8). Then, to distinguish whether each matched peak region is a physical peak or a false peak, the trained peak detector (which adapts to different engineering structures) examines the shape of each possible peak. The results show that only one peak satisfying the physical peak-shape requirement is selected, as shown in fig. 8.
For comparison, the natural frequencies determined by the proposed method and by the acceleration sensor are shown in table 2.
TABLE 2 Modal parameters obtained by bridge cable identification
Since the selected ROI does not cover the entire bridge cable in the computer vision measurement and the test conditions in the actual engineering scene are unfavorable, only the first-order frequency (12.78 Hz) is obtained. As shown in table 2, the first-order frequency values obtained by the vibrometer and by the proposed method are very close, with an error of only 1.49%. Although the vision measurement exhibits higher noise in the time history due to variations in natural light and the instability of the test background, the error of the final frequency measurement remains within an acceptable range. As long as peaks satisfying the peak-shape requirement exist on the PSDT-driven map, the vibration frequency can be identified. This field test demonstrates that the proposed automatic modal identification method is highly robust and is promising for real-time health monitoring.
The above description is only one embodiment of the present invention, but the design concept of the present invention is not limited thereto; any insubstantial modification made using this design concept constitutes an infringement of the protection scope of the present invention.
Claims (5)
1. The vibration frequency automatic identification method based on computer vision and power spectrum transfer ratio is characterized by comprising the following steps:
defining an ROI of an original image, inputting the ROI of an image sequence into a trained HED network, identifying effective pixels, and outputting the vibration signal of each effective pixel by a Horn-Schunck optical flow method;
the extracted vibration signals are used as the input of a system, a relation between input excitation and output response power spectral density is established in a frequency domain by introducing a reference point, a single-reference-point power spectrum transfer ratio function is deduced, a power spectrum transfer ratio matrix is formed by combining power spectrum transfer ratio functions corresponding to different reference points, and a peak curve graph of a power spectrum transfer ratio rational function is established on the basis of the power spectrum transfer ratio matrix;
converting the frequency domain graph into an image format as a source image, comparing the source image with the template image by adopting a template matching algorithm, positioning a peak area of the source image, calculating the matching degree by using a normalized cross-correlation model, and determining the peak position;
and identifying a physical peak value by using a peak value detector according to the peak value position, and finally determining the vibration frequency corresponding to the physical peak value.
2. The method for automatically identifying vibration frequency based on computer vision and power spectrum transfer ratio according to claim 1, wherein outputting the vibration signal of each effective pixel by the Horn-Schunck optical flow method specifically comprises the following steps:
firstly, the brightness value in the image sequence is defined as B(x, y, t), where x is the abscissa, y is the ordinate, and t is time; with u and v denoting the optical-flow velocities in the x and y directions, the brightness-constancy constraint gives:

B_x u + B_y v + B_t = 0

where B_x, B_y and B_t are the partial derivatives of the brightness value with respect to x, y and t, respectively;
E_b = B_x u + B_y v + B_t

where E_b characterizes the brightness-constraint error, and a smoothness measure E_s quantifies the deviation of the optical-flow variables u and v from the smoothness hypothesis;
the total deviation of the optical flow method is derived from the complete constraint conditions:

E = ∬ (E_b^2 + α^2 E_s^2) dx dy

where E is the total deviation and α is a weight coefficient; the optimal solution for u and v is sought so that the total deviation E reaches a minimum;
Finally, the values of u and v for each active pixel are iteratively determined.
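As a sketch of the iterative solution described above, the classic Horn-Schunck update can be written as follows; this is a minimal illustration (simple finite differences, neighborhood averaging via wrap-around, no pyramid or border handling; parameter values are illustrative):

```python
import numpy as np

def horn_schunck(B1, B2, alpha=1.0, n_iter=100):
    """Horn-Schunck optical flow between two frames B1 and B2.

    Iterates the classic update u <- u_bar - B_x (B_x u_bar + B_y v_bar + B_t) / (alpha^2 + B_x^2 + B_y^2),
    and likewise for v, which minimises the total deviation E.
    """
    B1 = B1.astype(float)
    B2 = B2.astype(float)
    Bx = np.gradient(B1, axis=1)  # partial derivative of brightness in x
    By = np.gradient(B1, axis=0)  # partial derivative of brightness in y
    Bt = B2 - B1                  # temporal derivative between frames
    u = np.zeros_like(B1)
    v = np.zeros_like(B1)
    avg = lambda f: (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
                     np.roll(f, 1, 1) + np.roll(f, -1, 1)) / 4.0
    for _ in range(n_iter):
        ub, vb = avg(u), avg(v)
        num = Bx * ub + By * vb + Bt
        den = alpha ** 2 + Bx ** 2 + By ** 2
        u = ub - Bx * num / den
        v = vb - By * num / den
    return u, v
```

On a linear brightness ramp shifted by a small sub-pixel amount, the recovered horizontal flow converges to the true shift, which is a convenient sanity check for the iteration.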
3. The method for automatically identifying the vibration frequency based on computer vision and power spectrum transfer ratio as claimed in claim 2, wherein a relation between the input-excitation and output-response power spectral densities is established in the frequency domain by introducing a reference point, a single-reference-point power spectrum transfer ratio function is deduced, and the power spectrum transfer ratio functions corresponding to different reference points are combined to form a power spectrum transfer ratio matrix; the method specifically comprises:
introducing a reference point, establishing a relationship between input excitation and output response power spectral density in the frequency domain as follows:
S_yy(s) = H(s) G_uu(s) H*(s)

where S_yy(s) is the power spectral density matrix of the structural response, G_uu(s) is the power spectral density matrix of the input, H(s) is the frequency response function matrix, and H*(s) is the complex conjugate transpose of H(s);
when s approaches the r-th system pole λ_r, the single-reference-point PSDT function has the convergence solution:

lim_{s→λ_r} T_ij^q(s) = lim_{s→λ_r} S_iq(s) / S_jq(s) = φ_ir / φ_jr

where T_ij^q(s) is the transfer ratio between outputs y_i(t) and y_j(t) with the same reference output y_q(t); S_iq(s) is the power spectral density between y_i(t) and y_q(t), and S_jq(s) that between y_j(t) and y_q(t); H_ik(s) is the frequency response function matrix between output y_i(t) and input u_k, k = 1, 2, …, N_r; H*_qn(s) is the complex conjugate of the frequency response function matrix H_qn(s) between output y_q(t) and input u_n, n = 1, 2, …, N_r; N_r is the number of system inputs; G_kn(s) is the power spectral density matrix between degrees of freedom k and n, n = 1, 2, …, N_r; and φ_ir and φ_jr are the mode-shape components corresponding to degrees of freedom i and j;
subtracting two power spectrum transfer ratio functions with different reference outputs under the same vibration condition satisfies:

ΔT_ij^{q,l}(λ_r) = T_ij^q(λ_r) − T_ij^l(λ_r) = 0

where λ_r is the r-th system pole and ΔT_ij^{q,l} is the difference between the transfer-ratio functions T_ij^q and T_ij^l;
the system poles are the points at which the PSDT difference function equals zero; combining the PSDT difference functions of different measurement degrees of freedom yields the PSDT rational function ΔT^(−1):

where N_0 is the number of system outputs;
and jointly forming a power spectrum transfer ratio matrix by combining the power spectrum transfer ratios corresponding to different reference points.
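The property that transfer-ratio differences vanish only at the system poles can be illustrated with a minimal sketch; the exact combination rule over reference pairs is an assumption patterned on the text, not the patent's equation:

```python
import numpy as np

def psdt_rational(T_list):
    """Combine PSDT spectra from several reference outputs into an indicator.

    `T_list` holds transfer-ratio spectra T_ij^q(s) for different references q
    (same outputs i, j). Pairwise differences vanish at the system poles, so
    the inverse of their summed squared magnitude peaks at those frequencies.
    """
    T = np.asarray(T_list)  # shape (n_refs, n_freq)
    acc = np.zeros(T.shape[1])
    n = len(T)
    for a in range(n):
        for b in range(a + 1, n):
            acc += np.abs(T[a] - T[b]) ** 2  # |ΔT^{q,l}(s)|^2
    return 1.0 / (acc + 1e-12)  # large where all differences vanish
```

Because every reference yields the same mode-shape ratio φ_ir/φ_jr at a pole, the summed differences drop to zero there and the inverse spikes, which is what the peak curve visualizes.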
4. The method for automatically identifying the vibration frequency based on computer vision and power spectrum transfer ratio as claimed in claim 2, wherein a template matching algorithm is adopted to compare the source image with the template image, locate the peak regions of the source image, calculate the matching degree using a normalized cross-correlation model, and determine the peak positions, the normalized cross-correlation model specifically comprising:
the size M multiplied by N of a source image f (x, y), the size J multiplied by K of a template image w (u, v), and xi epsilon [0,1] represent image correlation coefficients.
5. The method for automatically identifying vibration frequency based on computer vision and power spectrum transfer ratio according to claim 2, wherein the identifying physical peak values by using a peak detector specifically comprises:
the peak value detector which is proposed by training and quantizing the peak value meets the following requirements:
wherein N isWNumber of pixels, N, representing the width of each peakHNumber of pixels, W, representing height of each peakpAnd HpRespectively, relative width and relative height, WmaxAnd HminRepresenting a maximum value of the relative width and a minimum value of the relative height, MWRepresenting the total number of pixels, M, representing the source image along the x-axisHRepresenting the total number of pixels along the y-axis representing the source image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111276493.8A CN114049586B (en) | 2021-10-29 | | Vibration frequency automatic identification method based on computer vision and power spectrum transmission ratio
Publications (2)
Publication Number | Publication Date |
---|---|
CN114049586A (en) | 2022-02-15
CN114049586B (en) | 2024-07-05
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111275744A (en) * | 2020-01-20 | 2020-06-12 | 福州大学 | Non-contact vibration frequency measurement method based on deep learning and image processing |
AU2020101196A4 (en) * | 2020-06-30 | 2020-08-06 | Hunan University Of Science And Technology | Method and system for testing working modality of thin-walled member based on monocular visual optical flow tracking |
CN112926384A (en) * | 2021-01-15 | 2021-06-08 | 厦门大学 | Automatic modal identification method based on power spectrum transfer ratio and support vector machine |
CN113536223A (en) * | 2021-06-18 | 2021-10-22 | 中山大学 | Method and system for identifying structural mode under undersampling based on frequency domain decomposition method |
Non-Patent Citations (1)
Title |
---|
LI XUEYAN; XU JUNCAI: "Improvement of experimental modal analysis methods in engineering vibration testing technology", South China Journal of Seismology, no. 01, 15 March 2020 (2020-03-15), pages 115-121 *
Legal Events

Date | Code | Title | Description
---|---|---|---
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| GR01 | Patent grant |