CN113538580A - Vibration measurement method and system based on visual processing
- Publication number: CN113538580A
- Application number: CN202110799504.4A
- Authority: CN (China)
- Legal status: Granted
Classifications
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
- G01H9/00—Measuring mechanical vibrations or ultrasonic, sonic or infrasonic waves by using radiation-sensitive means, e.g. optical means
- G06T5/70—Denoising; Smoothing
- G06T7/90—Determination of colour characteristics
- G06T2207/10016—Video; Image sequence
- G06T2207/20048—Transform domain processing
- Y02T90/00—Enabling technologies or technologies with a potential or indirect contribution to GHG emissions mitigation
Abstract
The application discloses a vibration measurement method and system based on visual processing, comprising the following steps: extracting brightness information of each pixel point in a first monitoring area; filtering the extracted brightness information; calculating the phase of each pixel point in the first monitoring area in the current frame vibration image; calculating the phase difference of each pixel point in the first monitoring area; weighting the obtained phase difference of each pixel point in the first monitoring area; summing the weighted phase differences of all pixel points in the first monitoring area in the current frame vibration image to obtain the object vibration quantity of the first monitoring area in the current frame vibration image; and generating a vibration signal of the target detection object image in the first monitoring area according to the object vibration quantity of the first monitoring area in each obtained frame of vibration image. The image filter designed in this application is used to filter the image, and the pixel coordinate information in the image is used directly, so there is no need to paste or spray artificially set feature target points on the surface of the target detection object.
Description
Technical Field
The application belongs to the technical field of visual processing, and particularly relates to a vibration measurement method and system based on visual processing.
Background
When a high-speed train passes through a station canopy at speed, airflow and other factors set the canopy vibrating, leaving it in a state of frequent vibration that severely tests its structural safety. Once fatigue damage occurs, the canopy endangers the safe operation of the railway, causing serious economic and property loss and threats to personal safety.
In the prior art, a contact measurement method is mainly adopted to detect the vibration condition of the canopy, and specifically, the canopy is detected by arranging a large number of contact measurement sensors on the canopy.
However, the contact measurement method in the prior art involves a heavy workload and requires the cooperation of several departments, which is very inconvenient.
Disclosure of Invention
In order to solve the technical problems in the prior art, the application provides a vibration measurement method and system based on visual processing.
In a first aspect, the present application provides a vibration measurement method based on visual processing, including:
acquiring a vibration image sequence of a target detection object in a vibration state, wherein the vibration image sequence comprises a plurality of frames of vibration images which are sequenced according to a time sequence;
at least one monitoring area is selected in a frame mode on an image processing display interface, wherein each monitoring area at least covers part of the target detection object image in the vibration image;
executing the following steps for each monitoring area in each frame of vibration image:
extracting brightness information of each pixel point in a first monitoring area, wherein the first monitoring area is any one of the at least one monitoring area;
filtering the extracted brightness information of each pixel point in the first monitoring area;
respectively calculating the phase of each pixel point in the first monitoring area in the current frame vibration image according to the filtered brightness information of each pixel point in the first monitoring area;
respectively calculating the phase difference between the phase of each pixel point in the first monitoring area in the current frame vibration image and the phase of each pixel point in the first monitoring area in the first frame vibration image;
weighting the obtained phase difference of each pixel point in the first monitoring area to obtain the weighted phase difference of each pixel point in the first monitoring area in the current frame vibration image;
summing the weighted phase differences of all pixel points in the first monitoring area in the current frame vibration image to obtain the object vibration quantity of the first monitoring area in the current frame vibration image;
and generating a vibration signal of the target detection object image in the first monitoring area according to the object vibration quantity of the first monitoring area in each obtained frame of vibration image.
Optionally, the filtering processing of the extracted luminance information of each pixel point in the first monitoring area includes:
performing Fourier transform on the brightness information of each pixel point in the first monitoring area to obtain a Fourier spectrum response function of the brightness information of each pixel point in the first monitoring area;
filtering the Fourier spectrum response function through an image filter H(u, v) to obtain a filter response function of each pixel point in the first monitoring area, wherein the image filter H(u, v) satisfies a first relational expression, and the first relational expression is:

$$H(u, v) = \begin{cases} 1, & D_0 - \frac{W}{2} \le D(u, v) \le D_0 + \frac{W}{2} \\ 0, & \text{otherwise} \end{cases}$$

wherein W represents the pass-band bandwidth of the image filter, D(u, v) represents the distance from (u, v) to the origin of the frequency plane, and $D_0$ represents the cut-off frequency;

performing inverse Fourier transform on the filter response function to obtain the filtered luminance information $\hat{f}(x, y, t_i)$ of each pixel point in the first monitoring area, the filtered luminance information satisfying a second relational expression:

$$\hat{f}(x, y, t_i) = \frac{1}{MN} \sum_{u=0}^{M-1} \sum_{v=0}^{N-1} \hat{F}(u, v, t_i)\, e^{j 2\pi \left( \frac{ux}{M} + \frac{vy}{N} \right)}$$

wherein $\hat{F}(u, v, t_i)$ represents the filter response function, x represents the pixel abscissa of a pixel point, y represents the pixel ordinate of the pixel point, M, N represent the size of the vibration image, j represents the imaginary unit, u represents the frequency variable in the x direction, and v represents the frequency variable in the y direction.
Optionally, weighting the obtained phase difference of each pixel point in the first monitoring area to obtain a weighted phase difference of each pixel point in the first monitoring area in the current frame vibration image, including:
acquiring brightness information of preset adjacent pixel points corresponding to each pixel point in a first monitoring area in a current frame vibration image;
weighting the phase difference of each pixel point in the first monitoring area in the current frame vibration image according to a third relational expression, so as to obtain the weighted phase difference of each pixel point in the first monitoring area in the current frame vibration image, wherein the third relational expression is:

$$\widehat{\Delta\varphi}(x, y, t_i) = \sum_{m < k < n} \sum_{m < l < n} \hat{f}(k, l, t_i)\, \Delta\varphi(x, y, t_i)$$

wherein $\widehat{\Delta\varphi}(x, y, t_i)$ represents the weighted phase difference of a pixel point in the first monitoring area in the i-th frame vibration image, $\hat{f}(k, l, t_i)$ represents the luminance information of the preset adjacent pixel points corresponding to the pixel point with pixel coordinate (x, y) in the i-th frame vibration image, $\Delta\varphi(x, y, t_i)$ represents the phase difference corresponding to the pixel point with pixel coordinate (x, y) in the i-th frame vibration image, m < k < n and m < l < n, and m and n represent the pixel coordinate bounds of the preset adjacent pixel points corresponding to the pixel point with pixel coordinate (x, y).
Optionally, the method further includes:
converting the generated vibration signal of the target detection object image in each monitoring area into a vibration signal of an entity part corresponding to the target detection object image in each monitoring area;
and determining the vibration frequency information of each entity part of the target detection object according to the vibration signal of the entity part corresponding to the target detection object image in each monitoring area.
Optionally, if the target detection object is a canopy, the method includes:

determining a vibration signal of the entity part corresponding to the canopy image in the first monitoring area according to a fourth relational expression, wherein the fourth relational expression is:

R = S × γ × 1/cos(α)

wherein R represents the vibration signal of the entity part corresponding to the canopy image in the first monitoring area, S represents the vibration signal of the canopy image in the first monitoring area, γ represents the pixel equivalent ratio, and α represents the included angle between the entity vibration direction of the canopy and the vibration direction of the acquired canopy image;

and performing Fourier transform on the vibration signal of the entity part corresponding to the canopy image in the first monitoring area to obtain the vibration frequency information of the entity part corresponding to the canopy image in the first monitoring area.
In a second aspect, the present application also provides a vibration measurement system based on visual processing, comprising:
the system comprises an acquisition module, a storage module and a processing module, wherein the acquisition module is used for acquiring a vibration image sequence of a target detection object in a vibration state, and a plurality of frames of vibration images which are sequenced according to a time sequence are included in the vibration image sequence;
the frame selection module is used for selecting at least one monitoring area in a frame mode on an image processing display interface, wherein each monitoring area at least covers part of the target detection object image in the vibration image;
the extraction module is used for extracting the brightness information of each pixel point in a first monitoring area, wherein the first monitoring area is any monitoring area in the at least one monitoring area;
the filtering processing module is used for filtering the extracted brightness information of each pixel point in the first monitoring area;
the first calculation module is used for respectively calculating the phase of each pixel point in the first monitoring area in the current frame vibration image according to the brightness information of each pixel point in the first monitoring area after filtering;
the second calculation module is used for respectively calculating the phase difference between the phase of each pixel point in the first monitoring area in the current frame vibration image and the phase of each pixel point in the first monitoring area in the first frame vibration image;
the weighting processing module is used for weighting the obtained phase difference of each pixel point in the first monitoring area to obtain the weighted phase difference of each pixel point in the first monitoring area in the current frame vibration image;
the third calculation module is used for summing the weighted phase differences of all pixel points in the first monitoring area in the current frame vibration image to obtain the object vibration quantity of the first monitoring area in the current frame vibration image;
and the generating module is used for generating a vibration signal of the target detection object image in the first monitoring area according to the object vibration quantity of the first monitoring area in each obtained frame of vibration image.
Optionally, the filtering processing module includes a fourier transform module, an image filter, and an inverse fourier transform module;
the Fourier transform module is used for carrying out Fourier transform on the brightness information of each pixel point in the first monitoring area to obtain a Fourier spectrum response function of the brightness information of each pixel point in the first monitoring area;
the image filter is used for filtering the Fourier spectrum response function to obtain a filter response function of each pixel point in the first monitoring area, wherein the image filter H(u, v) satisfies a first relational expression, and the first relational expression is:

$$H(u, v) = \begin{cases} 1, & D_0 - \frac{W}{2} \le D(u, v) \le D_0 + \frac{W}{2} \\ 0, & \text{otherwise} \end{cases}$$

wherein W represents the pass-band bandwidth of the image filter, D(u, v) represents the distance from (u, v) to the origin of the frequency plane, and $D_0$ represents the cut-off frequency;

the inverse Fourier transform module is used for performing inverse Fourier transform on the filter response function to obtain the filtered luminance information $\hat{f}(x, y, t_i)$ of each pixel point in the first monitoring area, the filtered luminance information satisfying a second relational expression:

$$\hat{f}(x, y, t_i) = \frac{1}{MN} \sum_{u=0}^{M-1} \sum_{v=0}^{N-1} \hat{F}(u, v, t_i)\, e^{j 2\pi \left( \frac{ux}{M} + \frac{vy}{N} \right)}$$

wherein $\hat{F}(u, v, t_i)$ represents the filter response function, x represents the pixel abscissa of a pixel point, y represents the pixel ordinate of the pixel point, M, N represent the size of the vibration image, j represents the imaginary unit, u represents the frequency variable in the x direction, and v represents the frequency variable in the y direction.
Optionally, the weighting processing module includes an obtaining sub-module and a weighting processing sub-module;
the acquisition submodule is used for acquiring preset adjacent pixel point brightness information corresponding to each pixel point in a first monitoring area in the current frame vibration image;
the weighting processing submodule is used for weighting the phase difference of each pixel point in the first monitoring area in the current frame vibration image according to a third relational expression, so as to obtain the weighted phase difference of each pixel point in the first monitoring area in the current frame vibration image, wherein the third relational expression is:

$$\widehat{\Delta\varphi}(x, y, t_i) = \sum_{m < k < n} \sum_{m < l < n} \hat{f}(k, l, t_i)\, \Delta\varphi(x, y, t_i)$$

wherein $\widehat{\Delta\varphi}(x, y, t_i)$ represents the weighted phase difference of a pixel point in the first monitoring area in the i-th frame vibration image, $\hat{f}(k, l, t_i)$ represents the luminance information of the preset adjacent pixel points corresponding to the pixel point with pixel coordinate (x, y) in the i-th frame vibration image, $\Delta\varphi(x, y, t_i)$ represents the phase difference corresponding to the pixel point with pixel coordinate (x, y) in the i-th frame vibration image, m < k < n and m < l < n, and m and n represent the pixel coordinate bounds of the preset adjacent pixel points corresponding to the pixel point with pixel coordinate (x, y).
Optionally, the system further includes a conversion module and a determination module:
the conversion module is used for converting the generated vibration signal of the target detection object image in each monitoring area into a vibration signal of an entity part corresponding to the target detection object image in each monitoring area;
and the determining module is used for determining the vibration frequency information of each entity part of the target detection object according to the vibration signal of the entity part corresponding to the target detection object image in each monitoring area.
Optionally, if the target detection object is a canopy, the conversion module is configured to determine a vibration signal of the entity part corresponding to the canopy image in the first monitoring area according to a fourth relational expression, wherein the fourth relational expression is:

R = S × γ × 1/cos(α)

wherein R represents the vibration signal of the entity part corresponding to the canopy image in the first monitoring area, S represents the vibration signal of the canopy image in the first monitoring area, γ represents the pixel equivalent ratio, and α represents the included angle between the entity vibration direction of the canopy and the vibration direction of the acquired canopy image;

the determining module is used for performing Fourier transform on the vibration signal of the entity part corresponding to the canopy image in the first monitoring area to obtain the vibration frequency information of the entity part corresponding to the canopy image in the first monitoring area.
In summary, the vibration measurement method and system based on visual processing provided by the present application filter the image with a purpose-designed image filter and directly use the pixel coordinate information in the image, without identifying special features in the image in advance, such as artificially set feature target points. That is, the feature information of the target detection object itself can be used directly, and there is no need to paste or spray artificially set feature target points on the surface of the target detection object. In addition, by framing a plurality of monitoring areas, the vibration measurement method provided by the embodiment of the application can analyze each local vibration condition of the target detection object.
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic workflow diagram of a vibration measurement method based on visual processing according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a video stream or vibration image sequence of a target detection object acquired in the field according to an embodiment of the present application;
fig. 3 is a schematic diagram of a monitoring area outlined in a vibration measurement method based on visual processing according to an embodiment of the present application;
FIG. 4 is a schematic workflow diagram of another vibration measurement method based on visual processing according to an embodiment of the present application;
fig. 5 is a schematic diagram of measuring vibration of a canopy by using a vibration measurement method based on visual processing according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The application provides a vibration measurement method based on visual processing, as shown in fig. 1, comprising the following steps:
step 100, obtaining a vibration image sequence of a target detection object in a vibration state, wherein a plurality of frames of vibration images are sequenced according to a time sequence.
It should be noted that if what is initially acquired is a video stream of the target detection object in the vibration state, the video stream needs to be decoded to obtain the vibration image sequence.
Secondly, the video stream or vibration image sequence of the target detection object in the vibration state needs to be captured on site by acquisition equipment, i.e., the capturing device operates in the same vibration environment as the target. To ensure the accuracy of the vibration measurement result, the vibration frequency of the capturing device itself must therefore be much lower than that of the target detection object.
Further, the present application does not limit the capturing device used to capture the video stream of the target detection object in the vibration state. For example, the acquisition equipment may include a camera and a camera fixing device: a professional industrial camera can be used to collect high-quality vibration images of the target object, and fixing the camera with the camera fixing device provides a stable measuring environment.
In a specific example, as shown in fig. 2, the target detection object is the canopy of a high-speed rail platform. An authorized inspector erects the acquisition equipment on the platform (setting up the tripod, camera and other tools within a safe range), waits for a train after the equipment is erected, starts acquiring data as the train is about to enter the station, suspends acquisition a suitable time after the train has passed, stores the data, and records the time, the train type and the driving direction. In the data processing process (i.e., steps 100 to 900), the data may be processed and analyzed in real time, or the image data may be captured and stored first and the data analysis performed after capture has finished.
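For concreteness, a minimal sketch of step 100 with OpenCV is given below; the library choice, the helper name decode_video and the file name are illustrative assumptions, since the patent does not prescribe an implementation.

```python
import cv2

def decode_video(path):
    """Decode a recorded video stream into a time-ordered list of frames."""
    cap = cv2.VideoCapture(path)
    frames = []
    while True:
        ok, frame = cap.read()
        if not ok:  # end of the stream
            break
        frames.append(frame)
    cap.release()
    return frames

frames = decode_video("canopy_vibration.mp4")  # hypothetical recording
```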
Step 200, at least one monitoring area is selected in a frame mode on an image processing display interface, wherein each monitoring area at least covers part of the target detection object image in the vibration image.
The acquired complete target detection object image can be displayed on the image processing display interface. However, if the image in the whole display interface is processed directly, the data volume is large on the one hand, and on the other hand only the vibration condition of the target detection object as a whole can be reflected, not its local vibration conditions. Based on this, in the embodiment of the application, one or more monitoring areas may be framed on the image processing display interface, where each monitoring area covers at least part of the target detection object image in the vibration image, so that each part of the target detection object in the vibration image can be processed and analyzed separately and a corresponding vibration signal output, reflecting each local vibration condition of the target detection object. For example, as shown in fig. 3 and 4, a monitoring area A, a monitoring area B and a monitoring area C are framed in the acquired complete target detection object image; the three areas are then processed and analyzed separately to obtain their respective vibration signals, so that the local vibration conditions of the target detection object at A, B and C can be reflected.
After the vibration image sequence is acquired, a time period is selected for data processing, thereby determining the start frame and end frame vibration images to be processed. Because each frame of vibration image is processed, the monitoring areas can be framed on the display interface showing the start frame vibration image; the position of each pre-framed monitoring area on the image processing display interface remains unchanged when subsequent frames are processed.
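One possible interactive realization of the frame selection in step 200 is OpenCV's built-in ROI selector; this is an illustrative stand-in for the patent's image processing display interface, and `frames` comes from the decoding sketch above.

```python
import cv2

# Each framed monitoring area is returned as (x, y, width, height) and is
# reused unchanged for every subsequent frame of the sequence.
rois = cv2.selectROIs("frame monitoring areas", frames[0])
cv2.destroyAllWindows()
```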
After the step 200 is completed, the following steps 300 to 800 are performed on each monitoring area in each vibration image to obtain the vibration condition of the target detection object image in each monitoring area.
The following describes a data processing procedure of a first monitoring area in one frame of vibration image in detail, where the first monitoring area may be any monitoring area in the at least one monitoring area.
And step 300, extracting the brightness information of each pixel point in the first monitoring area.
The brightness information in the embodiment of the present application mainly comprises brightness values. The brightness information of each pixel point in the first monitoring area can be expressed as $f(x, y, t_i)$, wherein (x, y) represents the pixel coordinate of the pixel point and $t_i$ represents the i-th moment; the time period selected for data processing comprises T moments $t_1, t_2, t_3, \ldots, t_T$. That is to say, $f(x, y, t_1)$ indicates the luminance information of the pixel point with pixel coordinate (x, y) in the first monitoring area at $t_1$ (i.e., the first frame vibration image), $f(x, y, t_2)$ indicates the luminance information of that pixel point at $t_2$ (i.e., the second frame vibration image), and so on, giving the brightness information of each pixel point in the first monitoring area in each frame of vibration image.
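A minimal sketch of step 300 follows, assuming grayscale intensity as the brightness value (the patent only requires per-pixel luminance) and reusing `frames` and `rois` from the sketches above.

```python
import cv2
import numpy as np

def roi_luminance(frames, roi):
    """Stack the luminance f(x, y, t_i) of one monitoring area over all T frames."""
    x, y, w, h = roi
    gray = [cv2.cvtColor(f, cv2.COLOR_BGR2GRAY)[y:y + h, x:x + w] for f in frames]
    return np.stack(gray, axis=-1).astype(np.float64)  # shape (h, w, T)

f_xyt = roi_luminance(frames, rois[0])  # first monitoring area
```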
And step 400, filtering the extracted brightness information of each pixel point in the first monitoring area.
The method for filtering the extracted brightness information of each pixel point in the first monitoring area is not limited in the present application, and in a specific example, the method can be implemented through the following steps 410 to 430.
Step 410, performing Fourier transform on the brightness information of each pixel point in the first monitoring area to obtain the Fourier spectrum response function $F(u, v, t_i)$ of the brightness information of each pixel point in the first monitoring area, wherein $t_i$ indicates the i-th moment, (u, v) indicates the frequency variables corresponding to the pixel coordinate (x, y), and $F(u, v, t_i)$ represents the Fourier spectrum response function corresponding to the pixel point with pixel coordinate (x, y) in the i-th frame vibration image. The Fourier spectrum response function satisfies the following relation (1):

$$F(u, v, t_i) = \sum_{x=0}^{M-1} \sum_{y=0}^{N-1} f(x, y, t_i)\, e^{-j 2\pi \left( \frac{ux}{M} + \frac{vy}{N} \right)} \tag{1}$$

where M, N denote the size of the vibration image, j denotes the imaginary unit, u denotes the frequency variable in the x direction, and v denotes the frequency variable in the y direction.
Step 420, filtering the Fourier spectrum response function $F(u, v, t_i)$ with the image filter H(u, v) to obtain the filter response function $\hat{F}(u, v, t_i) = H(u, v)\, F(u, v, t_i)$ of each pixel point in the first monitoring area, wherein the image filter H(u, v) satisfies the first relation (2):

$$H(u, v) = \begin{cases} 1, & D_0 - \frac{W}{2} \le D(u, v) \le D_0 + \frac{W}{2} \\ 0, & \text{otherwise} \end{cases} \tag{2}$$

wherein W represents the pass-band bandwidth of the image filter, D(u, v) represents the distance from (u, v) to the origin of the frequency plane, and $D_0$ represents the cut-off frequency.
Step 430, performing inverse Fourier transform on the filter response function $\hat{F}(u, v, t_i)$ to obtain the filtered luminance information $\hat{f}(x, y, t_i)$ of each pixel point in the first monitoring area, the filtered luminance information satisfying the second relation (3):

$$\hat{f}(x, y, t_i) = \frac{1}{MN} \sum_{u=0}^{M-1} \sum_{v=0}^{N-1} \hat{F}(u, v, t_i)\, e^{j 2\pi \left( \frac{ux}{M} + \frac{vy}{N} \right)} \tag{3}$$

where x denotes the pixel abscissa of a pixel point, y denotes the pixel ordinate of a pixel point, M, N denote the size of the vibration image, j denotes the imaginary unit, u denotes the frequency variable in the x direction, and v denotes the frequency variable in the y direction.
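Steps 410 to 430 can be sketched with NumPy as below. The ideal band-pass form of H(u, v) is an assumption reconstructed from the stated parameters (pass-band bandwidth W, cut-off frequency D0, distance D(u, v) to the frequency-plane origin), and the values of d0 and w are illustrative.

```python
import numpy as np

def bandpass_filter_frame(frame, d0, w):
    """Relations (1)-(3): FFT, ideal band-pass H(u, v), inverse FFT."""
    M, N = frame.shape
    F = np.fft.fft2(frame)                     # Fourier spectrum response, relation (1)
    u = np.fft.fftfreq(M, d=1.0 / M)[:, None]  # frequency samples along the rows
    v = np.fft.fftfreq(N, d=1.0 / N)[None, :]  # frequency samples along the columns
    D = np.sqrt(u ** 2 + v ** 2)               # distance D(u, v) to the origin
    H = ((D >= d0 - w / 2) & (D <= d0 + w / 2)).astype(float)  # relation (2)
    return np.fft.ifft2(H * F)                 # complex filtered luminance, relation (3)

# One filtered complex frame per moment t_i; f_xyt comes from the step-300 sketch.
f_hat = np.stack([bandpass_filter_frame(f_xyt[..., i], d0=20.0, w=10.0)
                  for i in range(f_xyt.shape[-1])], axis=-1)
```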
The value of $\hat{f}(x, y, t_i)$ is complex and forms the luminance information map of the vibration image after filtering. When the vibration of the target detection object is shot by the camera, the vibration is projected onto the camera plane as different brightness values; as the object vibrates over time, the brightness value at the corresponding position of the vibration image changes, i.e., the structural information in the vibration image changes. The phase $\varphi(x, y, t_i)$ synchronously reflects this change of structural information, and its variation equals the variation of the structural information of the vibration image, from which the vibration amount of the target detection object on the vibration image can be obtained. Therefore, the phase $\varphi(x, y, t_i)$, which contains the structural information of the pixels on the vibration image, needs to be calculated from $\hat{f}(x, y, t_i)$.
Step 500, according to the filtered brightness information of each pixel point in the first monitoring area, respectively calculating to obtain the phase of each pixel point in the first monitoring area in the current frame vibration image.
According to step 400, the filtered brightness information $\hat{f}(x, y, t_i)$ of each pixel point in the first monitoring area in the current frame vibration image is obtained; the phase of each pixel point in the first monitoring area in the current frame vibration image can then be calculated according to the following relation (4):

$$\varphi(x, y, t_i) = \arctan \frac{\mathrm{Im}\left[ \hat{f}(x, y, t_i) \right]}{\mathrm{Re}\left[ \hat{f}(x, y, t_i) \right]} \tag{4}$$
And step 600, respectively calculating the phase difference between the phase of each pixel point in the first monitoring area in the current frame vibration image and the phase of the same pixel point in the first frame vibration image.

After the phase of each pixel point in the first monitoring area in the current frame vibration image is obtained, the phase difference of each pixel point between the current frame vibration image and the first frame vibration image is calculated; this phase difference equals the variation of the vibration image structural information. The first frame vibration image is the vibration image corresponding to $t_1$.

Similarly, the phase difference corresponding to each pixel point in the first monitoring area can be obtained for every other frame of vibration image. For example, the phase difference corresponding to each pixel point in the first monitoring area in the fifth frame vibration image refers to the phase difference between the phase of each pixel point in the fifth frame vibration image and its phase in the first frame vibration image.
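Steps 500 and 600 then reduce to two array operations; np.angle computes the arctan(Im/Re) phase of relation (4) with the correct quadrant, and `f_hat` comes from the filtering sketch above.

```python
import numpy as np

phi = np.angle(f_hat)            # relation (4): phase of every pixel in every frame
delta_phi = phi - phi[..., :1]   # step 600: phase difference vs. the first frame
```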
Step 700, weighting the obtained phase difference of each pixel point in the first monitoring area to obtain the weighted phase difference of each pixel point in the first monitoring area in the current frame vibration image.
In order to obtain more accurate vibration quantity of the object, the obtained phase difference of each pixel point in the first monitoring area is weighted, but the method for weighting each phase difference is not limited.
In a feasible mode, the embodiment of the application first acquires the brightness information of the preset adjacent pixel points corresponding to each pixel point in the first monitoring area in the current frame vibration image, wherein the number and the positions of the preset adjacent pixel points corresponding to each pixel point can be set freely and are not limited by the present application; then, the phase difference corresponding to each pixel point in the first monitoring area in the current frame vibration image is weighted according to the third relational expression (5) to obtain the weighted phase difference of each pixel point in the first monitoring area in the current frame vibration image, wherein the third relational expression (5) is:

$$\widehat{\Delta\varphi}(x, y, t_i) = \sum_{m < k < n} \sum_{m < l < n} \hat{f}(k, l, t_i)\, \Delta\varphi(x, y, t_i) \tag{5}$$

wherein $\widehat{\Delta\varphi}(x, y, t_i)$ represents the weighted phase difference of a pixel point in the first monitoring area in the i-th frame vibration image, $\hat{f}(k, l, t_i)$ represents the luminance information of the preset adjacent pixel points corresponding to the pixel point with pixel coordinate (x, y) in the i-th frame vibration image, $\Delta\varphi(x, y, t_i)$ represents the phase difference corresponding to the pixel point with pixel coordinate (x, y) in the i-th frame vibration image, m < k < n and m < l < n, and m and n represent the pixel coordinate bounds of the preset adjacent pixel points corresponding to the pixel point with pixel coordinate (x, y).

Weighting $\Delta\varphi(x, y, t_i)$ with the luminance information $\hat{f}(k, l, t_i)$ (m < k < n, m < l < n) around the pixel coordinate (x, y) yields the weighted phase difference $\widehat{\Delta\varphi}(x, y, t_i)$ of the pixel point at (x, y). In this way, the phase differences of areas with larger luminance values, i.e., the phase differences at the contour of the target detection object on the vibration image, receive greater weight in the finally obtained object vibration quantity, making the finally obtained object vibration quantity more accurate.
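The sketch below is one way to realize relation (5), assuming a square (2r+1) × (2r+1) neighborhood as the "preset adjacent pixel points"; the radius r is illustrative, and the neighborhood sum of the filtered luminance magnitude is computed with SciPy's uniform_filter.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def weighted_phase_diff(f_hat_i, delta_phi_i, r=1):
    """Relation (5): weight each pixel's phase difference by the summed
    filtered-luminance magnitude of its (2r+1) x (2r+1) neighborhood."""
    k = 2 * r + 1
    neighbour_sum = uniform_filter(np.abs(f_hat_i), size=k) * k * k  # mean -> sum
    return neighbour_sum * delta_phi_i

dphi_w = np.stack([weighted_phase_diff(f_hat[..., i], delta_phi[..., i])
                   for i in range(f_hat.shape[-1])], axis=-1)
```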
Step 800, summing the weighted phase differences of all pixel points in the first monitoring area in the current frame vibration image to obtain the object vibration quantity of the first monitoring area in the current frame vibration image. The object vibration quantity of the first monitoring area in the current frame vibration image reflects the vibration deviation of the first monitoring area in the current frame vibration image relative to the first monitoring area in the first frame vibration image.

According to steps 300 to 800, the object vibration quantity $S(t_i)$ of the first monitoring area in each frame of vibration image can be obtained, wherein $i = 1, 2, \ldots, T$.
And step 900, generating a vibration signal of the target detection object image in the first monitoring area according to the obtained object vibration quantity of the first monitoring area in each frame of vibration image.
It should be understood that, according to steps 300 to 900, the object vibration quantity of each monitoring area in each frame of vibration image can be obtained, so that the vibration signal of the target detection object image in each monitoring area, i.e., the vibration signal $S = \{S(t_1), S(t_2), \ldots, S(t_T)\}$ corresponding to the vibration image sequence, can be obtained. These vibration signals therefore reflect the vibration conditions of the target detection object image in each monitoring area at different moments.
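Steps 800 and 900 are then a single reduction over each monitoring area; the resulting vector is the vibration signal S(t_i) of the area (a sketch continuing the arrays above).

```python
import numpy as np

S = dphi_w.sum(axis=(0, 1))  # one object-vibration value per frame, shape (T,)
```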
Further, the generated vibration signal of the target detection object image in each monitoring area can be converted into a vibration signal of an entity part corresponding to the target detection object image in each monitoring area; and then, determining the vibration frequency information of each entity part of the target detection object according to the vibration signal of the entity part corresponding to the target detection object image in each monitoring area.
As shown in fig. 5, taking the target detection object as a canopy, the camera captures the vibration image of the canopy at an elevation angle α. Assuming that the real vibration of the canopy is in the vertical direction, the vibration captured by the camera is actually a projection of the real vibration, so the real vibration signal of the canopy satisfies the fourth relational expression (6):

R = S × γ × 1/cos(α)    (6)

wherein R represents the vibration signal of the entity part corresponding to the canopy image in the first monitoring area, S represents the vibration signal of the canopy image in the first monitoring area, γ represents the pixel equivalent ratio, and α represents the camera elevation angle, i.e., the included angle between the entity vibration direction of the canopy and the vibration direction of the acquired canopy image. The pixel equivalent ratio γ (unit: mm/pixel) can be determined by image calibration. Then, Fourier transform is performed on the vibration signal of the entity part corresponding to the canopy image in the first monitoring area to obtain the spectrogram of the signal; the peak frequencies can be read directly from the spectrogram to obtain the frequency of each order of the signal, thereby obtaining the vibration frequency information of the entity part corresponding to the canopy in the first monitoring area.
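A sketch of this conversion and frequency read-out follows; gamma, alpha and fps are assumed calibration values (pixel equivalent ratio, camera elevation angle and camera frame rate), not figures from the patent.

```python
import numpy as np

gamma = 0.5                    # pixel equivalent ratio, mm/pixel (from image calibration)
alpha = np.deg2rad(30.0)       # camera elevation angle (illustrative)
fps = 240.0                    # camera frame rate (illustrative)

R = S * gamma / np.cos(alpha)  # relation (6): entity vibration signal in mm

# Spectrum of the entity vibration signal; the peak frequencies give the
# vibration frequency information of the canopy.
spectrum = np.abs(np.fft.rfft(R - R.mean()))
freqs = np.fft.rfftfreq(R.size, d=1.0 / fps)
peak_hz = freqs[spectrum.argmax()]
```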
In summary, the vibration measurement method based on visual processing provided by the embodiment of the present application filters the image with a purpose-designed image filter and directly uses the pixel coordinate information in the image, without identifying special features in the image in advance, such as artificially set feature target points. In addition, by framing a plurality of monitoring areas, the method can analyze each local vibration condition of the target detection object.
The embodiment of the present application further provides a vibration measurement system based on visual processing, including:
the system comprises an acquisition module, a storage module and a processing module, wherein the acquisition module is used for acquiring a vibration image sequence of a target detection object in a vibration state, and a plurality of frames of vibration images which are sequenced according to a time sequence are included in the vibration image sequence;
the frame selection module is used for selecting at least one monitoring area in a frame mode on an image processing display interface, wherein each monitoring area at least covers part of the target detection object image in the vibration image;
the extraction module is used for extracting the brightness information of each pixel point in a first monitoring area, wherein the first monitoring area is any monitoring area in the at least one monitoring area;
the filtering processing module is used for filtering the extracted brightness information of each pixel point in the first monitoring area;
the first calculation module is used for respectively calculating the phase of each pixel point in the first monitoring area in the current frame vibration image according to the brightness information of each pixel point in the first monitoring area after filtering;
the second calculation module is used for respectively calculating the phase difference between the phase of each pixel point in the first monitoring area in the current frame vibration image and the phase of each pixel point in the first monitoring area in the first frame vibration image;
the weighting processing module is used for weighting the obtained phase difference of each pixel point in the first monitoring area to obtain the weighted phase difference of each pixel point in the first monitoring area in the current frame vibration image;
the third calculation module is used for summing the weighted phase differences of all pixel points in the first monitoring area in the current frame vibration image to obtain the object vibration quantity of the first monitoring area in the current frame vibration image;
and the generating module is used for generating a vibration signal of the target detection object image in the first monitoring area according to the object vibration quantity of the first monitoring area in each obtained frame of vibration image.
The filtering processing module comprises a Fourier transform module, an image filter and an inverse Fourier transform module;
the Fourier transform module is used for carrying out Fourier transform on the brightness information of each pixel point in the first monitoring area to obtain a Fourier spectrum response function of the brightness information of each pixel point in the first monitoring area;
the image filter is used for filtering the Fourier spectrum response function to obtain a filter response function of each pixel point in the first monitoring area, wherein the image filter H(u, v) satisfies a first relational expression, and the first relational expression is:

$$H(u, v) = \begin{cases} 1, & D_0 - \frac{W}{2} \le D(u, v) \le D_0 + \frac{W}{2} \\ 0, & \text{otherwise} \end{cases}$$

wherein W represents the pass-band bandwidth of the image filter, D(u, v) represents the distance from (u, v) to the origin of the frequency plane, and $D_0$ represents the cut-off frequency;

the inverse Fourier transform module is used for performing inverse Fourier transform on the filter response function to obtain the filtered luminance information $\hat{f}(x, y, t_i)$ of each pixel point in the first monitoring area, the filtered luminance information satisfying a second relational expression:

$$\hat{f}(x, y, t_i) = \frac{1}{MN} \sum_{u=0}^{M-1} \sum_{v=0}^{N-1} \hat{F}(u, v, t_i)\, e^{j 2\pi \left( \frac{ux}{M} + \frac{vy}{N} \right)}$$

wherein $\hat{F}(u, v, t_i)$ represents the filter response function, x represents the pixel abscissa of a pixel point, y represents the pixel ordinate of the pixel point, M, N represent the size of the vibration image, j represents the imaginary unit, u represents the frequency variable in the x direction, and v represents the frequency variable in the y direction.
The weighting processing module comprises an acquisition submodule and a weighting processing submodule;
the acquisition submodule is used for acquiring preset adjacent pixel point brightness information corresponding to each pixel point in a first monitoring area in the current frame vibration image;
the weighting processing submodule is used for weighting the phase difference of each pixel point in the first monitoring area in the current frame vibration image according to a third relational expression, so as to obtain the weighted phase difference of each pixel point in the first monitoring area in the current frame vibration image, wherein the third relational expression is:

$$\widehat{\Delta\varphi}(x, y, t_i) = \sum_{m < k < n} \sum_{m < l < n} \hat{f}(k, l, t_i)\, \Delta\varphi(x, y, t_i)$$

wherein $\widehat{\Delta\varphi}(x, y, t_i)$ represents the weighted phase difference of a pixel point in the first monitoring area in the i-th frame vibration image, $\hat{f}(k, l, t_i)$ represents the luminance information of the preset adjacent pixel points corresponding to the pixel point with pixel coordinate (x, y) in the i-th frame vibration image, $\Delta\varphi(x, y, t_i)$ represents the phase difference corresponding to the pixel point with pixel coordinate (x, y) in the i-th frame vibration image, m < k < n and m < l < n, and m and n represent the pixel coordinate bounds of the preset adjacent pixel points corresponding to the pixel point with pixel coordinate (x, y).
The system further comprises a conversion module and a determination module:
the conversion module is used for converting the generated vibration signal of the target detection object image in each monitoring area into a vibration signal of an entity part corresponding to the target detection object image in each monitoring area;
and the determining module is used for determining the vibration frequency information of each entity part of the target detection object according to the vibration signal of the entity part corresponding to the target detection object image in each monitoring area.
If the target detection object is a canopy, the conversion module is configured to determine a vibration signal of the entity part corresponding to the canopy image in the first monitoring area according to a fourth relational expression, wherein the fourth relational expression is:

R = S × γ × 1/cos(α)

wherein R represents the vibration signal of the entity part corresponding to the canopy image in the first monitoring area, S represents the vibration signal of the canopy image in the first monitoring area, γ represents the pixel equivalent ratio, and α represents the included angle between the entity vibration direction of the canopy and the vibration direction of the acquired canopy image;

the determining module is used for performing Fourier transform on the vibration signal of the entity part corresponding to the canopy image in the first monitoring area to obtain the vibration frequency information of the entity part corresponding to the canopy image in the first monitoring area.
The same and similar parts in the various embodiments in this specification may be referred to each other. In particular, for the embodiments of the system, since they are substantially similar to the method embodiments, the description is simple, and for the relevant points, reference may be made to the description of the method embodiments.
The present application has been described in detail with reference to specific embodiments and illustrative examples, but the description is not intended to limit the application. Those skilled in the art will appreciate that various equivalent substitutions, modifications or improvements may be made to the presently disclosed embodiments and implementations thereof without departing from the spirit and scope of the present disclosure, and these fall within the scope of the present disclosure. The protection scope of this application is subject to the appended claims.
In a specific implementation, the present application further provides a computer-readable storage medium, where the computer-readable storage medium may store a program, and the program when executed may include some or all of the steps in the embodiments of the vibration measurement method and system based on visual processing provided by the present application. The storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), a Random Access Memory (RAM), or the like.
Those skilled in the art will clearly understand that the techniques in the embodiments of the present application may be implemented by way of software plus a required general hardware platform. Based on such understanding, the technical solutions in the embodiments of the present application may be essentially implemented or a part contributing to the prior art may be embodied in the form of a software product, which may be stored in a storage medium, such as a ROM/RAM, a magnetic disk, an optical disk, etc., and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method described in the embodiments or some parts of the embodiments of the present application.
The above-described embodiments of the present application do not limit the scope of the present application.
Claims (10)
1. A vibration measurement method based on visual processing, comprising:
acquiring a vibration image sequence of a target detection object in a vibration state, wherein the vibration image sequence comprises a plurality of frames of vibration images which are sequenced according to a time sequence;
at least one monitoring area is selected in a frame mode on an image processing display interface, wherein each monitoring area at least covers part of the target detection object image in the vibration image;
executing the following steps for each monitoring area in each frame of vibration image:
extracting brightness information of each pixel point in a first monitoring area, wherein the first monitoring area is any one of the at least one monitoring area;
filtering the extracted brightness information of each pixel point in the first monitoring area;
respectively calculating the phase of each pixel point in the first monitoring area in the current frame vibration image according to the filtered brightness information of each pixel point in the first monitoring area;
respectively calculating the phase difference between the phase of each pixel point in the first monitoring area in the current frame vibration image and the phase of each pixel point in the first monitoring area in the first frame vibration image;
weighting the obtained phase difference of each pixel point in the first monitoring area to obtain the weighted phase difference of each pixel point in the first monitoring area in the current frame vibration image;
summing the weighted phase differences of all pixel points in the first monitoring area in the current frame vibration image to obtain the object vibration quantity of the first monitoring area in the current frame vibration image;
and generating a vibration signal of the target detection object image in the first monitoring area according to the object vibration quantity of the first monitoring area in each obtained frame of vibration image.
2. The method of claim 1, wherein filtering the extracted luminance information of each pixel point in the first monitoring area comprises:
performing Fourier transform on the brightness information of each pixel point in the first monitoring area to obtain a Fourier spectrum response function of the brightness information of each pixel point in the first monitoring area;
filtering the Fourier spectrum response function through an image filter H(u, v) to obtain a filter response function of each pixel point in the first monitoring area, wherein the image filter H(u, v) satisfies a first relational expression, and the first relational expression is:

$$H(u, v) = \begin{cases} 1, & D_0 - \frac{W}{2} \le D(u, v) \le D_0 + \frac{W}{2} \\ 0, & \text{otherwise} \end{cases}$$

wherein W represents the pass-band bandwidth of the image filter, D(u, v) represents the distance from (u, v) to the origin of the frequency plane, and $D_0$ represents the cut-off frequency;

performing inverse Fourier transform on the filter response function to obtain the filtered luminance information $\hat{f}(x, y, t_i)$ of each pixel point in the first monitoring area, the filtered luminance information satisfying a second relational expression:

$$\hat{f}(x, y, t_i) = \frac{1}{MN} \sum_{u=0}^{M-1} \sum_{v=0}^{N-1} \hat{F}(u, v, t_i)\, e^{j 2\pi \left( \frac{ux}{M} + \frac{vy}{N} \right)}$$

wherein $\hat{F}(u, v, t_i)$ represents the filter response function, x represents the pixel abscissa of a pixel point, y represents the pixel ordinate of the pixel point, M, N represent the size of the vibration image, j represents the imaginary unit, u represents the frequency variable in the x direction, and v represents the frequency variable in the y direction.
3. The method of claim 1, wherein weighting the obtained phase difference of each pixel point in the first monitoring area to obtain the weighted phase difference of each pixel point in the first monitoring area in the current frame vibration image comprises:
acquiring brightness information of preset adjacent pixel points corresponding to each pixel point in a first monitoring area in a current frame vibration image;
weighting the phase difference of each pixel point in the first monitoring area in the current frame vibration image according to a third relational expression, so as to obtain the weighted phase difference of each pixel point in the first monitoring area in the current frame vibration image, wherein the third relational expression is:

$$\widehat{\Delta\varphi}(x, y, t_i) = \sum_{m < k < n} \sum_{m < l < n} \hat{f}(k, l, t_i)\, \Delta\varphi(x, y, t_i)$$

wherein $\widehat{\Delta\varphi}(x, y, t_i)$ represents the weighted phase difference of a pixel point in the first monitoring area in the i-th frame vibration image, $\hat{f}(k, l, t_i)$ represents the luminance information of the preset adjacent pixel points corresponding to the pixel point with pixel coordinate (x, y) in the i-th frame vibration image, $\Delta\varphi(x, y, t_i)$ represents the phase difference corresponding to the pixel point with pixel coordinate (x, y) in the i-th frame vibration image, m < k < n and m < l < n, and m and n represent the pixel coordinate bounds of the preset adjacent pixel points corresponding to the pixel point with pixel coordinate (x, y).
4. The method of claim 1, further comprising:
converting the generated vibration signal of the target detection object image in each monitoring area into a vibration signal of an entity part corresponding to the target detection object image in each monitoring area;
and determining the vibration frequency information of each entity part of the target detection object according to the vibration signal of the entity part corresponding to the target detection object image in each monitoring area.
5. The method of claim 4, wherein if the target detection object is a canopy, the method comprises:
determining a vibration signal of the entity part corresponding to the canopy image in the first monitoring area according to a fourth relational expression, wherein the fourth relational expression is:

R = S × γ × 1/cos(α)

wherein R represents the vibration signal of the entity part corresponding to the canopy image in the first monitoring area, S represents the vibration signal of the canopy image in the first monitoring area, γ represents the pixel equivalent ratio, and α represents the included angle between the entity vibration direction of the canopy and the vibration direction of the acquired canopy image;

and performing Fourier transform on the vibration signal of the entity part corresponding to the canopy image in the first monitoring area to obtain the vibration frequency information of the entity part corresponding to the canopy image in the first monitoring area.
6. A vibration measurement system based on visual processing, comprising:
the system comprises an acquisition module, a storage module and a processing module, wherein the acquisition module is used for acquiring a vibration image sequence of a target detection object in a vibration state, and a plurality of frames of vibration images which are sequenced according to a time sequence are included in the vibration image sequence;
the frame selection module is used for selecting at least one monitoring area in a frame mode on an image processing display interface, wherein each monitoring area at least covers part of the target detection object image in the vibration image;
the extraction module is used for extracting the brightness information of each pixel point in a first monitoring area, wherein the first monitoring area is any monitoring area in the at least one monitoring area;
the filtering processing module is used for filtering the extracted brightness information of each pixel point in the first monitoring area;
the first calculation module is used for respectively calculating the phase of each pixel point in the first monitoring area in the current frame vibration image according to the brightness information of each pixel point in the first monitoring area after filtering;
the second calculation module is used for respectively calculating the phase difference between the phase of each pixel point in the first monitoring area in the current frame vibration image and the phase of each pixel point in the first monitoring area in the first frame vibration image;
the weighting processing module is used for weighting the obtained phase difference of each pixel point in the first monitoring area to obtain the weighted phase difference of each pixel point in the first monitoring area in the current frame vibration image;
the third calculation module is used for summing the weighted phase differences of all pixel points in the first monitoring area in the current frame vibration image to obtain the object vibration quantity of the first monitoring area in the current frame vibration image;
and the generating module is used for generating a vibration signal of the target detection object image in the first monitoring area according to the object vibration quantity of the first monitoring area in each obtained frame of vibration image.
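The module chain of claim 6 can be compressed into a self-contained sketch. The one-sided band-pass phase estimate below stands in for the claimed filtering and phase steps, and simple per-pixel brightness weights stand in for the neighborhood weighting of claim 8; both substitutions are illustration assumptions:

```python
import numpy as np

def local_phase(lum, band=(0.05, 0.25)):
    """Stand-in phase estimate: keep a one-sided frequency band so the
    inverse transform is complex, then take its angle per pixel."""
    F = np.fft.fft2(lum)
    fy = np.fft.fftfreq(lum.shape[0])[:, None]
    fx = np.fft.fftfreq(lum.shape[1])[None, :]
    r = np.hypot(fx, fy)
    mask = (r > band[0]) & (r < band[1]) & (fx > 0)   # one-sided band-pass
    return np.angle(np.fft.ifft2(F * mask))

def vibration_signal(frames, region):
    """Acquisition -> frame selection -> phases -> differences vs frame 1
    -> weighting -> per-frame summation, one value per vibration image."""
    x0, y0, x1, y1 = region                       # frame-selected monitoring area
    crops = [f[y0:y1, x0:x1].astype(float) for f in frames]
    phases = [local_phase(c) for c in crops]
    signal = []
    for c, p in zip(crops, phases):
        dphi = p - phases[0]                      # phase difference vs frame 1
        w = c / c.sum()                           # brightness weights (stand-in)
        signal.append(float((w * dphi).sum()))    # object vibration quantity
    return np.asarray(signal)                     # vibration signal of the area
```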
7. The system of claim 6, wherein the filter processing module comprises a fourier transform module, an image filter, and an inverse fourier transform module;
the Fourier transform module is used for carrying out Fourier transform on the brightness information of each pixel point in the first monitoring area to obtain a Fourier spectrum response function of the brightness information of each pixel point in the first monitoring area;
the image filter is used for filtering the Fourier spectrum response function to obtain a filter response function of each pixel point in the first monitoring area, wherein the image filter H(u, v) satisfies a first relational expression, and the first relational expression is as follows:
H(u, v) = 1, if D0 - W/2 ≤ D(u, v) ≤ D0 + W/2
H(u, v) = 0, otherwise
wherein W represents the passband bandwidth of the image filter, D(u, v) represents the distance from (u, v) to the origin of the frequency plane, and D0 represents the cut-off frequency;
the inverse Fourier transform module is used for performing inverse Fourier transform on the filter response function to obtain the filtered brightness information f′(x, y) of each pixel point in the first monitoring area, wherein the filtered brightness information f′(x, y) satisfies a second relational expression:
f′(x, y) = (1/(M×N)) Σ_{u=0}^{M-1} Σ_{v=0}^{N-1} F′(u, v) e^{j2π(ux/M + vy/N)}
wherein F′(u, v) represents the Fourier spectrum response function after filtering (the filter response function), x represents the pixel abscissa of a pixel point, y represents the pixel ordinate of the pixel point, M and N represent the size of the vibration image, j represents the imaginary unit, u represents the frequency variable in the x direction, and v represents the frequency variable in the y direction.
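A minimal numpy sketch of this filter chain, using the ideal band-pass form reconstructed above for H(u, v); the centred-spectrum layout and taking the real part at the end are implementation choices, not claim requirements:

```python
import numpy as np

def filter_brightness(img, d0, w):
    """Forward FFT -> ideal band-pass H(u, v) -> inverse FFT.

    d0 : cut-off frequency D0 (distance from the frequency-plane origin)
    w  : passband bandwidth W
    """
    M, N = img.shape
    F = np.fft.fftshift(np.fft.fft2(img))            # Fourier spectrum response
    u = np.arange(M)[:, None] - M // 2
    v = np.arange(N)[None, :] - N // 2
    D = np.hypot(u, v)                               # distance D(u, v) to origin
    H = (np.abs(D - d0) <= w / 2).astype(float)      # ideal band-pass H(u, v)
    G = F * H                                        # filter response function
    return np.real(np.fft.ifft2(np.fft.ifftshift(G)))  # filtered brightness
```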
8. The system of claim 6, wherein the weighting processing module comprises an acquisition submodule and a weighting processing submodule;
the acquisition submodule is used for acquiring brightness information of preset adjacent pixel points corresponding to each pixel point in the first monitoring area in the current frame vibration image;
the weighting processing submodule is used for weighting the phase difference of each pixel point in the first monitoring area in the current frame vibration image according to a third relational expression, so as to obtain the weighted phase difference of each pixel point in the first monitoring area in the current frame vibration image, wherein the third relational expression is as follows:
Δφ̄_i(x, y) = Σ_{m&lt;k&lt;n} Σ_{m&lt;l&lt;n} I_i(k, l) × Δφ_i(x, y)
wherein Δφ̄_i(x, y) represents the weighted phase difference of the pixel point with pixel coordinate (x, y) in the first monitoring area in the i-th frame vibration image, I_i(k, l) represents the brightness information of the preset adjacent pixel points corresponding to the pixel point with pixel coordinate (x, y) in the i-th frame vibration image, Δφ_i(x, y) represents the phase difference corresponding to the pixel point with pixel coordinate (x, y) in the i-th frame vibration image, m &lt; k &lt; n, m &lt; l &lt; n, and m and n represent the pixel coordinate values of the preset adjacent pixel points corresponding to the pixel point with pixel coordinate (x, y).
9. The system of claim 6, further comprising a conversion module and a determining module, wherein:
the conversion module is used for converting the generated vibration signal of the target detection object image in each monitoring area into a vibration signal of the physical part corresponding to the target detection object image in each monitoring area;
and the determining module is used for determining the vibration frequency information of each physical part of the target detection object according to the vibration signal of the physical part corresponding to the target detection object image in each monitoring area.
10. The system of claim 9, wherein, if the target detection object is a canopy, the conversion module is used for determining the vibration signal of the physical part corresponding to the canopy roof image in the first monitoring area according to a fourth relational expression:
R=S×γ×1/cos(α)
wherein R represents the vibration signal of the physical part corresponding to the canopy roof image in the first monitoring area, S represents the vibration signal of the canopy roof image in the first monitoring area, γ represents the pixel equivalent ratio, and α represents the included angle between the physical vibration direction of the canopy roof and the vibration direction of the acquired canopy roof image;
and the determining module is used for performing Fourier transform on the vibration signal of the physical part corresponding to the canopy roof image in the first monitoring area to obtain the vibration frequency information of the physical part corresponding to the canopy roof image in the first monitoring area.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110799504.4A CN113538580B (en) | 2021-07-15 | 2021-07-15 | Vibration measurement method and system based on visual processing |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113538580A true CN113538580A (en) | 2021-10-22 |
CN113538580B CN113538580B (en) | 2023-06-16 |
Family
ID=78099383
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110799504.4A Active CN113538580B (en) | 2021-07-15 | 2021-07-15 | Vibration measurement method and system based on visual processing |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113538580B (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2003005302A1 (en) * | 2001-07-05 | 2003-01-16 | Advantest Corporation | Image processing apparatus and image processing method |
US20200198149A1 (en) * | 2018-12-24 | 2020-06-25 | Ubtech Robotics Corp Ltd | Robot vision image feature extraction method and apparatus and robot using the same |
JP2020186957A (en) * | 2019-05-10 | 2020-11-19 | 国立大学法人広島大学 | Vibration analysis system, vibration analysis method, and program |
CN112254801A (en) * | 2020-12-21 | 2021-01-22 | 浙江中自庆安新能源技术有限公司 | Micro-vibration vision measurement method and system |
Non-Patent Citations (1)
Title |
---|
Ma Huizhu; Song Zhaohui; Ji Fei; Hou Jia; Xiong Xiaoyun: "Research directions and keywords for computer-aided acceptance of projects: 2012 acceptance status and notes for 2013", Journal of Electronics & Information Technology, no. 01 *
Also Published As
Publication number | Publication date |
---|---|
CN113538580B (en) | 2023-06-16 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110108348B (en) | Thin-wall part micro-amplitude vibration measurement method and system based on motion amplification optical flow tracking | |
WO2018028103A1 (en) | Unmanned aerial vehicle power line inspection method based on characteristics of human vision | |
CN109931878A (en) | A kind of building curtain wall seismic deformation monitoring method based on digital speckle label | |
CN111047568A (en) | Steam leakage defect detection and identification method and system | |
CN109636927B (en) | System and method for training and identifying aircraft attitude measurement algorithm | |
Zhu et al. | A robust structural vibration recognition system based on computer vision | |
CN115375924A (en) | Bridge health monitoring method and system based on image recognition | |
CN107862713A (en) | Video camera deflection for poll meeting-place detects method for early warning and module in real time | |
JPH0719814A (en) | Meter indication reader | |
CN113421224A (en) | Cable structure health monitoring method and system based on vision | |
CN116503391A (en) | Tunnel face rock mass joint crack identification method and identification device | |
CN115761487A (en) | Method for quickly identifying vibration characteristics of small and medium-span bridges based on machine vision | |
Buyukozturk et al. | Smaller than the eye can see: Vibration analysis with video cameras | |
Zhu et al. | A Novel Building Vibration Measurement system based on Computer Vision Algorithms | |
Chen et al. | Video camera-based vibration measurement for Condition Assessment of Civil Infrastructure | |
WO2024067435A1 (en) | Video-based multi-object displacement tracking monitoring method and apparatus | |
CN110944154B (en) | Method for marking and identifying fixed object in high-altitude lookout camera image | |
CN113538580B (en) | Vibration measurement method and system based on visual processing | |
CN111669575B (en) | Method, system, electronic device, medium and terminal for testing image processing effect | |
KR101640527B1 (en) | Method and Apparatus for Monitoring Video for Estimating Size of Single Object | |
CN111289087A (en) | Remote machine vision vibration measurement method and device | |
CN115876365B (en) | Visual testing method, device and medium for inhaul cable force based on motion comprehensive brightness spectrum | |
CN111898552A (en) | Method and device for distinguishing person attention target object and computer equipment | |
Chen et al. | Modal frequency identification of stay cables with ambient vibration measurements based on nontarget image processing techniques | |
CN114184127B (en) | Single-camera target-free building global displacement monitoring method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
GR01 | Patent grant ||