CN113538580B - Vibration measurement method and system based on visual processing - Google Patents
- Publication number
- CN113538580B (application CN202110799504.4A)
- Authority
- CN
- China
- Prior art keywords
- vibration
- monitoring area
- image
- pixel point
- pixel
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01H—MEASUREMENT OF MECHANICAL VIBRATIONS OR ULTRASONIC, SONIC OR INFRASONIC WAVES
- G01H9/00—Measuring mechanical vibrations or ultrasonic, sonic or infrasonic waves by using radiation-sensitive means, e.g. optical means
-
- G06T5/70—
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/90—Determination of colour characteristics
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20048—Transform domain processing
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T90/00—Enabling technologies or technologies with a potential or indirect contribution to GHG emissions mitigation
Abstract
The application discloses a vibration measurement method and system based on visual processing, comprising the following steps: extracting brightness information of each pixel point in a first monitoring area; filtering the extracted brightness information; calculating the phase of each pixel point in the first monitoring area in the current frame of the vibration image; calculating the phase difference of each pixel point in the first monitoring area; weighting the obtained phase differences; summing the weighted phase differences of all pixel points in the first monitoring area of the current frame of the vibration image to obtain the object vibration amount of the first monitoring area in that frame; and generating a vibration signal of the target detection object image in the first monitoring area from the object vibration amounts obtained for each frame. Because the method filters the image with a purpose-designed image filter and works directly on pixel coordinate information in the image, no manually set feature targets need to be posted or sprayed onto the surface of the target detection object.
Description
Technical Field
The application belongs to the technical field of vision processing, and particularly relates to a vibration measurement method and system based on vision processing.
Background
When high-speed trains pass through a station at speed, the platform canopy vibrates due to air flow and other causes, leaving the canopy in a state of frequent vibration. This is a severe test of the canopy's structural safety: once fatigue damage occurs, it endangers safe railway operation and can cause serious economic and property losses as well as threats to personal safety.
In the prior art, the vibration of the canopy is mainly detected by contact measurement, in which a large number of contact measurement sensors are mounted on the canopy.
However, this contact measurement method involves a heavy workload and is highly inconvenient, since it requires the cooperation of the relevant departments.
Disclosure of Invention
In order to solve the technical problems in the prior art, the application provides a vibration measurement method and system based on visual processing.
In a first aspect, the present application provides a vibration measurement method based on visual processing, comprising:
acquiring a vibration image sequence of a target detection object in a vibration state, wherein the vibration image sequence comprises a plurality of vibration images which are ordered according to time sequence;
selecting at least one monitoring area in an image processing display interface, wherein each monitoring area at least covers part of the target detection object image in the vibration image;
The following steps are executed for each monitoring area in each frame of vibration image:
extracting brightness information of each pixel point in a first monitoring area, wherein the first monitoring area is any monitoring area in the at least one monitoring area;
filtering the brightness information of each pixel point in the extracted first monitoring area;
according to the brightness information of each pixel point in the filtered first monitoring area, the phase of each pixel point in the first monitoring area in the vibration image of the current frame is calculated;
respectively calculating the phase difference between the phase of the vibration image of the current frame and the phase of the vibration image of the first frame of each pixel point in the first monitoring area;
weighting the phase difference of each pixel point in the first monitoring area to obtain a weighted phase difference of each pixel point in the first monitoring area in the current frame vibration image;
summing the weighted phase differences of all the pixel points in the first monitoring area in the current frame vibration image to obtain the object vibration quantity of the first monitoring area in the current frame vibration image;
and generating a vibration signal of the target detection object image in the first monitoring area according to the object vibration quantity of the first monitoring area in each obtained frame of vibration image.
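The sequence of steps above can be sketched as a single routine. The following Python outline is a minimal illustration only: the function name, the externally supplied frequency-domain mask H, and the normalisation by total brightness are assumptions, not taken from the patent.

```python
import numpy as np

def vibration_signal(frames, region, H):
    """Per-region vibration signal from time-ordered grayscale frames.

    frames : list of 2-D float arrays (luminance)
    region : (y0, y1, x0, x1) bounds of the monitoring area
    H      : frequency-domain filter mask, same shape as the region
    """
    y0, y1, x0, x1 = region

    def phase(frame):
        # Filter the region's luminance in the frequency domain, then
        # take the phase of the complex result (steps of the method).
        roi = frame[y0:y1, x0:x1]
        filtered = np.fft.ifft2(np.fft.fft2(roi) * H)
        return np.angle(filtered)

    ref = phase(frames[0])                 # phase in the first frame
    signal = []
    for f in frames:
        diff = phase(f) - ref              # phase difference vs. first frame
        w = f[y0:y1, x0:x1]                # brightness used as weights
        # Normalising by total brightness is an assumption here.
        signal.append((w * diff).sum() / max(w.sum(), 1e-12))
    return np.array(signal)
```

For two identical frames the phase difference is zero everywhere, so the signal is zero in both entries.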
Optionally, filtering the brightness information of each pixel point in the extracted first monitoring area includes:
performing Fourier transform on the brightness information of each pixel point in the first monitoring area to obtain a Fourier spectrum response function of the brightness information of each pixel point in the first monitoring area;
filtering the Fourier spectrum response function through an image filter H(u, v) to obtain a filtering response function of each pixel point in the first monitoring area, wherein the image filter H(u, v) satisfies a first relational expression, the first relational expression being:

H(u, v) = 1, if D0 − W/2 ≤ D(u, v) ≤ D0 + W/2; H(u, v) = 0, otherwise

wherein W represents the passband bandwidth of the image filter, D(u, v) represents the distance from (u, v) to the origin of the frequency plane, and D0 represents a cut-off frequency;

performing an inverse Fourier transform on the filtering response function to obtain filtered brightness information Ĩ(x, y, t_i) of each pixel point in the first monitoring area, the filtered brightness information Ĩ(x, y, t_i) satisfying a second relational expression, the second relational expression being:

Ĩ(x, y, t_i) = (1/(M·N)) · Σ_{u=0}^{M−1} Σ_{v=0}^{N−1} F̃(u, v, t_i) · e^{j2π(ux/M + vy/N)}

wherein F̃(u, v, t_i) represents the filtered Fourier spectrum response function, x represents the pixel abscissa of the pixel point, y represents the pixel ordinate of the pixel point, M and N represent the size of the vibration image, j represents the imaginary unit, u represents the frequency variable in the x direction, and v represents the frequency variable in the y direction.
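With W, D(u, v) and D0 defined as above, the image filter can be realised as an ideal band-pass mask. A minimal NumPy sketch follows; the centred-spectrum (fftshift) layout and the exact band edges are assumptions:

```python
import numpy as np

def ideal_bandpass(M, N, d0, w):
    """Ideal band-pass mask: 1 where |D(u, v) - d0| <= w/2, else 0.

    d0 : cut-off (centre) frequency D0
    w  : passband bandwidth W
    Built for a centred spectrum (np.fft.fftshift layout).
    """
    u = np.arange(M) - M // 2
    v = np.arange(N) - N // 2
    # D(u, v): distance from each frequency sample to the origin
    D = np.hypot(u[:, None], v[None, :])
    return ((D >= d0 - w / 2) & (D <= d0 + w / 2)).astype(float)
```

For a 16 x 16 mask with d0 = 4 and w = 2, the DC sample (distance 0) is rejected while samples at distance 4 pass.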
Optionally, weighting the obtained phase difference of each pixel point in the first monitoring area to obtain the weighted phase difference of each pixel point in the first monitoring area in the current frame of the vibration image includes:

acquiring brightness information of preset adjacent pixel points corresponding to each pixel point in the first monitoring area in the current frame of the vibration image;

weighting the phase difference of each pixel point in the first monitoring area in the current frame of the vibration image according to a third relational expression, so as to obtain the weighted phase difference of each pixel point in the first monitoring area in the current frame of the vibration image, the third relational expression being:

Δφ̃_i(x, y) = Σ_{m<k<n} Σ_{m<l<n} I_i(k, l) · Δφ_i(k, l)

wherein Δφ̃_i(x, y) represents the weighted phase difference of the pixel point with pixel coordinates (x, y) in the first monitoring area in the i-th frame of the vibration image, I_i(k, l) represents the brightness information of the preset adjacent pixel points corresponding to the pixel point with pixel coordinates (x, y) in the i-th frame of the vibration image, Δφ_i(k, l) represents the corresponding phase difference in the i-th frame of the vibration image, m < k < n, m < l < n, and m and n bound the pixel coordinate values of the preset adjacent pixel points corresponding to the pixel point with pixel coordinates (x, y).
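The weighting step can be sketched as follows. This is one plausible reading of the third relational expression, in which each phase difference is weighted by pixel brightness so that bright (high-SNR) pixels dominate; the normalisation by total brightness in vibration_amount is an assumption, not stated in the patent.

```python
import numpy as np

def weighted_phase_diff(phase_diff, luminance):
    """Weight each pixel's phase difference by its brightness."""
    return luminance * phase_diff

def vibration_amount(phase_diff, luminance):
    """Object vibration amount of the region: sum of the weighted phase
    differences over all pixels, normalised by total brightness so the
    result stays in phase units (the normalisation is an assumption)."""
    w = weighted_phase_diff(phase_diff, luminance)
    return w.sum() / max(luminance.sum(), 1e-12)
```

With uniform brightness the weighted sum reduces to the mean phase difference of the region.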
Optionally, the method further comprises:
converting the generated vibration signals of the target detection object images in each monitoring area into vibration signals of the entity parts corresponding to the target detection object images in each monitoring area;
And determining the vibration frequency information of each entity part of the target detection object according to the vibration signals of the entity parts corresponding to the target detection object images in each monitoring area.
Optionally, if the target detection object is a canopy ceiling, the method includes:
determining a vibration signal of an entity part corresponding to the canopy image in the first monitoring area according to a fourth relational expression, wherein the fourth relational expression is as follows:
R=S×γ×1/cos(α)
wherein R represents a vibration signal of a solid portion corresponding to the canopy image in the first monitoring area, S represents a vibration signal of the canopy image in the first monitoring area, γ represents a pixel equivalent ratio, and α represents an included angle between a solid vibration direction of the canopy and a vibration direction of the acquired canopy image;
and carrying out Fourier transform on the vibration signals of the entity part corresponding to the canopy image in the first monitoring area to obtain the vibration frequency information of the entity part corresponding to the canopy image in the first monitoring area.
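The fourth relational expression and the subsequent Fourier analysis can be sketched as follows; the helper names and the use of a one-sided FFT are illustrative, not taken from the patent.

```python
import numpy as np

def physical_vibration(s, gamma, alpha_rad):
    """Fourth relational expression: R = S * gamma * 1/cos(alpha).

    s         : vibration signal of the canopy image (pixels)
    gamma     : pixel equivalent ratio
    alpha_rad : angle between the physical vibration direction and the
                vibration direction in the acquired image (radians)
    """
    return s * gamma / np.cos(alpha_rad)

def dominant_frequency(r, fs):
    """Peak of the one-sided amplitude spectrum of vibration signal r,
    sampled at fs frames per second (mean removed to suppress DC)."""
    spec = np.abs(np.fft.rfft(r - np.mean(r)))
    freqs = np.fft.rfftfreq(len(r), d=1.0 / fs)
    return freqs[np.argmax(spec)]
```

A 5 Hz sinusoid sampled at 100 frames per second for 2 seconds yields a dominant frequency of exactly 5 Hz, since 5 Hz falls on an FFT bin.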
In a second aspect, the present application also provides a vision-based vibration measurement system, comprising:
the acquisition module is used for acquiring a vibration image sequence of the target detection object in a vibration state, wherein the vibration image sequence comprises a plurality of vibration images which are ordered according to time sequence;
The frame selection module is used for selecting at least one monitoring area in the image processing display interface in a frame mode, wherein each monitoring area at least covers part of the target detection object image in the vibration image;
the extraction module is used for extracting the brightness information of each pixel point in a first monitoring area, wherein the first monitoring area is any monitoring area in the at least one monitoring area;
the filtering processing module is used for filtering the brightness information of each pixel point in the extracted first monitoring area;
the first calculation module is used for respectively calculating the phase of each pixel point in the first monitoring area in the current frame vibration image according to the brightness information of each pixel point in the filtered first monitoring area;
the second calculation module is used for calculating the phase difference between the phase of the vibration image of the current frame and the phase of the vibration image of the first frame of each pixel point in the first monitoring area respectively;
the weighting processing module is used for carrying out weighting processing on the obtained phase difference of each pixel point in the first monitoring area to obtain a weighted phase difference of each pixel point in the first monitoring area in the vibration image of the current frame;
The third calculation module is used for summing the weighted phase differences of all the pixel points in the first monitoring area in the current frame vibration image to obtain the object vibration quantity of the first monitoring area in the current frame vibration image;
and the generation module is used for generating a vibration signal of the target detection object image in the first monitoring area according to the object vibration quantity of the first monitoring area in each obtained frame of vibration image.
Optionally, the filtering processing module comprises a fourier transform module, an image filter and an inverse fourier transform module;
the Fourier transform module is used for carrying out Fourier transform on the brightness information of each pixel point in the first monitoring area to obtain a Fourier spectrum response function of the brightness information of each pixel point in the first monitoring area;
the image filter is used for filtering the Fourier spectrum response function to obtain the filtering response function of each pixel point in the first monitoring area, wherein the image filter H(u, v) satisfies the first relational expression:

H(u, v) = 1, if D0 − W/2 ≤ D(u, v) ≤ D0 + W/2; H(u, v) = 0, otherwise

wherein W represents the passband bandwidth of the image filter, D(u, v) represents the distance from (u, v) to the origin of the frequency plane, and D0 represents the cut-off frequency;

the inverse Fourier transform module is used for performing an inverse Fourier transform on the filtering response function to obtain the filtered brightness information Ĩ(x, y, t_i) of each pixel point in the first monitoring area, the filtered brightness information Ĩ(x, y, t_i) satisfying the second relational expression:

Ĩ(x, y, t_i) = (1/(M·N)) · Σ_{u=0}^{M−1} Σ_{v=0}^{N−1} F̃(u, v, t_i) · e^{j2π(ux/M + vy/N)}

wherein F̃(u, v, t_i) represents the filtered Fourier spectrum response function, x represents the pixel abscissa of the pixel point, y represents the pixel ordinate of the pixel point, M and N represent the size of the vibration image, j represents the imaginary unit, u represents the frequency variable in the x direction, and v represents the frequency variable in the y direction.
Optionally, the weighting processing module comprises an acquisition sub-module and a weighting processing sub-module;
the acquisition sub-module is used for acquiring the brightness information of the preset adjacent pixel points corresponding to each pixel point in the first monitoring area in the current frame of the vibration image;

the weighting processing sub-module is configured to weight the phase difference of each pixel point in the first monitoring area in the current frame of the vibration image according to the third relational expression, so as to obtain the weighted phase difference of each pixel point in the first monitoring area in the current frame of the vibration image, the third relational expression being:

Δφ̃_i(x, y) = Σ_{m<k<n} Σ_{m<l<n} I_i(k, l) · Δφ_i(k, l)

wherein Δφ̃_i(x, y) represents the weighted phase difference of the pixel point with pixel coordinates (x, y) in the first monitoring area in the i-th frame of the vibration image, I_i(k, l) represents the brightness information of the preset adjacent pixel points corresponding to the pixel point with pixel coordinates (x, y) in the i-th frame of the vibration image, Δφ_i(k, l) represents the corresponding phase difference in the i-th frame of the vibration image, m < k < n, m < l < n, and m and n bound the pixel coordinate values of the preset adjacent pixel points corresponding to the pixel point with pixel coordinates (x, y).
Optionally, the system further comprises a conversion module and a determination module:
the conversion module is used for converting the generated vibration signals of the target detection object images in each monitoring area into the vibration signals of the entity parts corresponding to the target detection object images in each monitoring area;
and the determining module is used for determining the vibration frequency information of each entity part of the target detection object according to the vibration signals of the entity part corresponding to the target detection object image in each monitoring area.
Optionally, if the target detection object is a canopy ceiling, the conversion module is configured to determine a vibration signal of a physical portion corresponding to the canopy ceiling image in the first monitoring area according to a fourth relational expression, where the fourth relational expression is:
R=S×γ×1/cos(α)
wherein R represents a vibration signal of a solid portion corresponding to the canopy image in the first monitoring area, S represents a vibration signal of the canopy image in the first monitoring area, γ represents a pixel equivalent ratio, and α represents an included angle between a solid vibration direction of the canopy and a vibration direction of the acquired canopy image;
And the determining module is used for carrying out Fourier transform on the vibration signals of the entity part corresponding to the canopy image in the first monitoring area to obtain the vibration frequency information of the entity part corresponding to the canopy image in the first monitoring area.
In summary, in the vibration measurement method and system based on visual processing provided by the application, the image is filtered with a purpose-designed image filter and pixel coordinate information in the image is used directly. No special features, such as manually set feature targets, need to be identified in the image in advance; the inherent feature information of the target detection object can be used directly, so no manually set feature targets need to be posted or sprayed onto the surface of the target detection object. In addition, by frame-selecting a plurality of monitoring areas, the method can analyse each local vibration condition of the target detection object.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions of the prior art, the following description will briefly explain the drawings used in the embodiments or the description of the prior art, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic workflow diagram of a vibration measurement method based on visual processing according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a video stream or a sequence of vibration images acquired on site of a target test object according to an embodiment of the present application;
fig. 3 is a schematic diagram of a frame selection monitoring area in a vibration measurement method based on visual processing according to an embodiment of the present application;
FIG. 4 is a schematic workflow diagram of yet another vibration measurement method based on visual processing provided in an embodiment of the present application;
fig. 5 is a schematic diagram of vibration measurement of a canopy ceiling by using a vibration measurement method based on visual processing according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are only some, but not all, of the embodiments of the present application. All other embodiments, which can be made by those skilled in the art based on the embodiments herein without making any inventive effort, are intended to be within the scope of the present application.
The application provides a vibration measurement method based on visual processing, as shown in fig. 1, comprising the following steps:
Step 100, acquiring a vibration image sequence of the target detection object in a vibration state, wherein the vibration image sequence comprises a plurality of vibration images which are ordered according to time sequence.
It should be noted that if what is initially acquired is a video stream of the target detection object in a vibrating state, the video stream must also be decoded to obtain the vibration image sequence.
Secondly, the application needs an acquisition device to acquire the video stream or vibration image sequence of the target detection object in a vibrating state. Since the vibration measurement is performed on site, the acquisition device captures the video stream or image sequence in the same vibration environment as the target. To ensure the accuracy of the vibration measurement result, the vibration frequency of the acquisition device itself must therefore be far smaller than that of the target detection object in the same vibration state.
Still further, the present application does not limit an acquisition apparatus for acquiring a video stream of a target detection object in a vibrating state, for example: the acquisition device may include a camera and a camera fixture, wherein image acquisition may be performed using a professional industrial camera, a high quality vibration image of the target detection object may be acquired, and a stable measurement environment may be provided using the camera fixture to fix the camera.
In a specific example, as shown in fig. 2, the target detection object is a high-speed railway station canopy. An authorised inspector erects the acquisition device on the platform (tools such as a tripod and camera are set up within the safety range) and waits for a train to approach. Data collection starts just before the train enters the station and is paused a suitable time after it has entered; the data are saved, and the time, train type and direction of travel are recorded as the train passes. The data processing (i.e. the process of executing steps 100 to 900) may either analyse the data in real time, or first capture and store the image data and analyse it after shooting is completed.
And 200, selecting at least one monitoring area in an image processing display interface, wherein each monitoring area at least covers part of the target detection object image in the vibration image.
The collected complete image of the target detection object can be displayed on the image processing display interface. If the image in the whole display interface were processed directly, however, the data volume would be large, and the result would only reflect the overall vibration of the target detection object, not its local vibration. For this reason, in the embodiment of the application one or more monitoring areas may be frame-selected on the image processing display interface, each covering at least part of the target detection object image in the vibration image, so that each part of the target detection object can be analysed separately and a corresponding vibration signal output for each, reflecting each local vibration condition. For example, as shown in fig. 3 and fig. 4, monitoring areas A, B and C are frame-selected in the acquired complete target detection object image and each is processed separately, yielding vibration signals for the three monitoring areas and thus the three local vibration conditions of the target detection object at A, B and C.
After the vibration image sequence is acquired, a time period for performing data processing is selected, so that a start frame vibration image and an end frame vibration image for performing data processing are determined. Because each frame of vibration image is to be processed, the monitoring area can be framed on the display interface for displaying the initial frame of vibration image, and the position of the pre-framed monitoring area on the image processing display interface is unchanged when the subsequent frame of vibration image is processed.
After the above step 200 is completed, the following steps 300 to 800 are performed for each monitoring area in each frame of vibration image, so as to obtain the vibration condition of the target detection object image in each monitoring area.
The following describes in detail a data processing process of a first monitoring area in a frame of vibration image, where the first monitoring area may be any monitoring area in the at least one monitoring area.
Step 300, extracting brightness information of each pixel point in the first monitoring area.
The brightness information in the embodiment of the application mainly comprises brightness values. The brightness information of each pixel point in the first monitoring area may be expressed as I(x, y, t_i), wherein (x, y) represents the pixel coordinates of the pixel point and t_i represents the i-th moment; the time period selected for data processing comprises T moments t_1, t_2, t_3, ……, t_T. That is, I(x, y, t_1) represents the brightness information of the pixel point with pixel coordinates (x, y) in the first monitoring area at moment t_1 (i.e. in the first frame of the vibration image), I(x, y, t_2) represents the brightness information of that pixel point at moment t_2 (i.e. in the second frame of the vibration image), and so on, giving the brightness information of each pixel point in the first monitoring area in each frame of the vibration image.
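Extracting I(x, y, t_i) for a monitoring area from a colour frame can be sketched as follows. The Rec. 601 luminance weights are an assumption, since the patent does not specify a particular RGB-to-brightness conversion.

```python
import numpy as np

def roi_luminance(rgb_frame, region):
    """Brightness I(x, y, t_i) of each pixel of a monitoring area.

    rgb_frame : H x W x 3 float array with channel values in [0, 1]
    region    : (y0, y1, x0, x1) monitoring-area bounds
    """
    y0, y1, x0, x1 = region
    roi = rgb_frame[y0:y1, x0:x1, :]
    # Rec. 601 weights (an assumption): Y = 0.299 R + 0.587 G + 0.114 B
    return roi @ np.array([0.299, 0.587, 0.114])
```

For a pure-white frame every weighted sum is 0.299 + 0.587 + 0.114 = 1.0.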
Step 400, filtering the brightness information of each pixel in the extracted first monitoring area.
The method of filtering the brightness information of each pixel point in the extracted first monitoring area is not limited; in a specific example, it may be implemented as the following steps 410 to 430.
Step 410, performing a Fourier transform on the brightness information of each pixel point in the first monitoring area to obtain the Fourier spectrum response function F(u, v, t_i) of the brightness information of each pixel point in the first monitoring area, wherein t_i represents the i-th moment, (u, v) represents the frequency variables corresponding to the pixel coordinates (x, y), and F(u, v, t_i) represents the Fourier spectrum response function corresponding to the pixel point with pixel coordinates (x, y) in the i-th frame of the vibration image. The Fourier spectrum response function F(u, v, t_i) satisfies the following relational expression (1):

F(u, v, t_i) = Σ_{x=0}^{M−1} Σ_{y=0}^{N−1} I(x, y, t_i) · e^{−j2π(ux/M + vy/N)}    (1)

wherein M and N represent the size of the vibration image, j represents the imaginary unit, u represents the frequency variable in the x direction, and v represents the frequency variable in the y direction.
Step 420, filtering the Fourier spectrum response function F(u, v, t_i) through the image filter H(u, v) to obtain the filtering response function F̃(u, v, t_i) = H(u, v) · F(u, v, t_i) of each pixel point in the first monitoring area, wherein the image filter H(u, v) satisfies the first relational expression (2):

H(u, v) = 1, if D0 − W/2 ≤ D(u, v) ≤ D0 + W/2; H(u, v) = 0, otherwise    (2)

wherein W represents the passband bandwidth of the image filter, D(u, v) represents the distance from (u, v) to the origin of the frequency plane, and D0 represents the cut-off frequency.
Step 430, performing an inverse Fourier transform on the filtering response function F̃(u, v, t_i) to obtain the filtered brightness information Ĩ(x, y, t_i) of each pixel point in the first monitoring area, the filtered brightness information satisfying the second relational expression (3):

Ĩ(x, y, t_i) = (1/(M·N)) · Σ_{u=0}^{M−1} Σ_{v=0}^{N−1} F̃(u, v, t_i) · e^{j2π(ux/M + vy/N)}    (3)

wherein x represents the pixel abscissa of the pixel point, y represents the pixel ordinate of the pixel point, M and N represent the size of the vibration image, j represents the imaginary unit, u represents the frequency variable in the x direction, and v represents the frequency variable in the y direction.
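Steps 410 to 430 amount to a forward FFT, a band-pass mask, and an inverse FFT. A minimal NumPy sketch follows; the ideal band-pass form and the default parameters are illustrative assumptions.

```python
import numpy as np

def bandpass_filter_roi(lum, d0=4.0, w=4.0):
    """Steps 410-430: forward FFT, ideal band-pass mask, inverse FFT.

    lum : M x N float array of region brightness at one moment
    Returns the complex filtered brightness (its angle is the phase
    used in the subsequent step).
    """
    M, N = lum.shape
    F = np.fft.fftshift(np.fft.fft2(lum))          # step 410 (centred)
    u = np.arange(M) - M // 2
    v = np.arange(N) - N // 2
    D = np.hypot(u[:, None], v[None, :])           # D(u, v)
    H = ((D >= d0 - w / 2) & (D <= d0 + w / 2)).astype(float)  # step 420
    return np.fft.ifft2(np.fft.ifftshift(F * H))   # step 430
```

The result is complex-valued, matching the remark below that the filtered brightness is in complex form.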
The value Ĩ(x, y, t_i) given by expression (3) is in complex form and constitutes the brightness information map of the vibration image after filtering. When the camera films the vibrating target detection object, the vibration is projected onto the camera plane as different brightness values: as the target vibrates over time, the brightness value at the corresponding position of the vibration image changes, i.e. the structural information of the vibration image changes. The phase φ(x, y, t_i) changes synchronously with this structural information, and its variation equals the variation of the vibration image structure information, from which the vibration amount of the target detection object on the vibration image can be obtained. It is therefore necessary to further calculate the phase φ(x, y, t_i) from Ĩ(x, y, t_i); the phase contains the structural information of the pixels of the vibration image.
Step 500, respectively calculating the phase of each pixel point in the first monitoring area in the vibration image of the current frame according to the filtered luminance information of each pixel point in the first monitoring area.
According to the above step 400, the filtered luminance information f̃(x, y) of each pixel point in the first monitoring area in the vibration image of the current frame can be obtained. Then, the phase of each pixel point in the first monitoring area in the vibration image of the current frame can be calculated according to the following relational expression (4), which takes the arctangent of the imaginary part of the filtered luminance information over its real part:

φ(x, y) = arctan( Im[f̃(x, y)] / Re[f̃(x, y)] )
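A minimal sketch of this phase extraction, assuming the arctangent form of the complex filtered value: NumPy's `angle` computes arctan(Im/Re) with correct quadrant handling (atan2), which is the conventional phase of a complex number.

```python
import numpy as np

# Phase of the complex filtered luminance f~(x, y);
# np.angle implements arctan(Im/Re) with quadrant handling.
filtered = np.array([[1 + 1j, -1 + 1j],
                     [0 - 2j, 3 + 0j]])
phase = np.angle(filtered)
print(np.round(phase / np.pi, 2))   # in units of pi: [[0.25 0.75] [-0.5 0.]]
```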
Step 600, respectively calculating, for each pixel point in the first monitoring area, the phase difference between the phase in the vibration image of the current frame and the phase in the vibration image of the first frame.
After the phase of each pixel point in the first monitoring area in the current frame vibration image is obtained, the phase difference of each pixel point in the first monitoring area between the current frame vibration image and the first frame vibration image is obtained, and the phase difference is equal to the variation of the vibration image structural information. The first frame vibration image is the vibration image corresponding to moment t₁.
Similarly, a phase difference corresponding to each pixel point in the first monitoring area may be obtained for every other frame of vibration image; for example, the phase difference corresponding to each pixel point in the first monitoring area in the fifth frame of vibration image refers to the phase difference of each such pixel point between the fifth frame vibration image and the first frame vibration image.
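Computed per frame, the phase difference against the first frame can be taken through the complex ratio of the filtered luminance maps, which keeps the result wrapped to (−π, π]; this wrapping choice is an implementation assumption, not stated in the application.

```python
import numpy as np

def phase_difference(filtered_i, filtered_1):
    """Per-pixel phase difference between frame i and frame 1 of the
    complex filtered luminance maps.  angle(a * conj(b)) equals
    angle(a) - angle(b), wrapped to (-pi, pi]."""
    return np.angle(filtered_i * np.conj(filtered_1))

f1 = np.exp(1j * 0.1) * np.ones((2, 2))   # stand-in first-frame map
fi = np.exp(1j * 0.4) * np.ones((2, 2))   # stand-in current-frame map
print(np.round(phase_difference(fi, f1), 3))   # 0.3 at every pixel
```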
Step 700, weighting the obtained phase difference of each pixel point in the first monitoring area to obtain the weighted phase difference of each pixel point in the first monitoring area in the vibration image of the current frame.
In order to obtain a more accurate object vibration amount, the present application weights the obtained phase difference of each pixel point in the first monitoring area; the specific method for weighting each phase difference is not limited in the present application.
In a feasible manner, the embodiment of the present application first obtains the luminance information of the preset adjacent pixel points corresponding to each pixel point in the first monitoring area in the vibration image of the current frame, where the number and positions of the preset adjacent pixel points corresponding to each pixel point can be set freely and are not limited in the present application; and then weights the phase difference corresponding to each pixel point in the first monitoring area in the vibration image of the current frame according to a third relational expression, so as to obtain the weighted phase difference of each pixel point in the first monitoring area in the vibration image of the current frame, where the third relational expression (5) is:
wherein Δφ̄ᵢ(x, y) represents the weighted phase difference of each pixel point in the first monitoring area in the i-th frame vibration image, f̃ᵢ(k, l) represents the luminance information of a preset adjacent pixel point corresponding to the pixel point with pixel coordinates (x, y) in the i-th frame vibration image, Δφᵢ(x, y) represents the phase difference corresponding to the pixel point with pixel coordinates (x, y) in the i-th frame vibration image, m < k < n, m < l < n, and m and n represent pixel coordinate values of the preset adjacent pixel points corresponding to the pixel point with pixel coordinates (x, y).
That is, the luminance information in the vicinity of pixel coordinates (x, y) (m < k < n, m < l < n) is used to weight the phase difference, so as to obtain the weighted phase difference. Through this step, the weight of the phase differences in areas with relatively large luminance values, namely the phase differences at the contour of the target detection object on the vibration image, is increased in the finally obtained object vibration amount, so that the finally obtained object vibration amount is more accurate.
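Relational expression (5) appears in the publication only as a figure. One plausible reading, sketched below, weights each pixel's phase difference by the summed magnitude of the filtered luminance over its preset neighbourhood, so that contour pixels dominate the later region sum; the neighbourhood half-width and all names are assumptions.

```python
import numpy as np

def weighted_phase_diff(phase_diff, filtered, half=1):
    """Weight each pixel's phase difference by the summed |f~| over a
    (2*half+1)^2 neighbourhood -- one plausible reading of relational
    expression (5), whose exact form is shown only in the patent figure."""
    mag = np.abs(filtered)
    M, N = phase_diff.shape
    out = np.zeros_like(phase_diff)
    for x in range(M):
        for y in range(N):
            k0, k1 = max(0, x - half), min(M, x + half + 1)
            l0, l1 = max(0, y - half), min(N, y + half + 1)
            # neighbourhood luminance magnitude scales this pixel's phase difference
            out[x, y] = mag[k0:k1, l0:l1].sum() * phase_diff[x, y]
    return out

pd = np.ones((3, 3))          # stand-in phase-difference map
fl = np.ones((3, 3)) + 0j     # stand-in filtered luminance (unit magnitude)
w = weighted_phase_diff(pd, fl, half=1)
print(w[1, 1], w[0, 0])       # 9.0 4.0 (full 3x3 vs. corner 2x2 neighbourhood)
```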
Step 800, summing the weighted phase differences of all pixel points in the first monitoring area in the current frame vibration image to obtain the object vibration amount of the first monitoring area in the current frame vibration image.
The object vibration quantity of the first monitoring area in the current frame vibration image can reflect the vibration deviation of the first monitoring area in the current frame vibration image relative to the first monitoring area in the first frame vibration image.
According to the above steps 300-800, the object vibration amount of the first monitoring area in each frame of vibration image can be obtained.
Step 900, generating a vibration signal of the target detection object image in the first monitoring area according to the obtained object vibration amount of the first monitoring area in each frame of vibration image.
It should be understood that, according to the above steps 300-900, the object vibration amount of each monitoring area in each frame of vibration image can be obtained, so that the vibration signal of the target detection object image in each monitoring area, that is, the vibration signal corresponding to the vibration image sequence, can be obtained. Therefore, the vibration signals can reflect the vibration conditions of the target detection object images in the monitoring areas at different moments.
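The assembly of the per-frame region sums into a vibration signal (steps 800-900) can be sketched as follows; the stand-in per-frame weighted-phase-difference maps and all names are illustrative only.

```python
import numpy as np

# One object vibration amount per frame: the sum of the weighted phase
# differences over the monitoring area.  Stacking the per-frame scalars
# yields the vibration signal S of the region.
weighted_maps = [np.full((4, 4), 0.1 * i) for i in range(5)]  # stand-in maps
signal = np.array([w.sum() for w in weighted_maps])           # one scalar per frame
print(signal)
```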
Further, the generated vibration signal of the target detection object image in each monitoring area can be converted into a vibration signal of a solid portion corresponding to the target detection object image in each monitoring area; and then, determining the vibration frequency information of each entity part of the target detection object according to the vibration signals of the entity parts corresponding to the target detection object image in each monitoring area.
As shown in fig. 5, taking the target detection object being a canopy ceiling as an example, the camera collects canopy ceiling vibration images at an elevation angle α. Assuming that the actual vibration of the canopy ceiling is in the vertical direction, the canopy ceiling vibration photographed by the camera is actually a projection of the actual vibration of the ceiling, so the actual vibration signal of the canopy ceiling satisfies a fourth relational expression (6), where the fourth relational expression (6) is:
R = S × γ × 1/cos(α)    Relational expression (6)
wherein R represents the vibration signal of the physical part corresponding to the canopy ceiling image in the first monitoring area, S represents the vibration signal of the canopy ceiling image in the first monitoring area, γ represents the pixel equivalent ratio, and α represents the camera elevation angle, namely the included angle between the physical vibration direction of the canopy ceiling and the vibration direction of the acquired canopy ceiling image; the pixel equivalent ratio γ (unit: mm/pixel) can be determined through picture calibration. Then, a Fourier transform is performed on the vibration signal of the physical part corresponding to the canopy ceiling image in the first monitoring area to obtain a spectrogram of the signal, and the peak frequencies of the signal are read directly on the spectrogram to obtain each order frequency of the signal, thereby obtaining the vibration frequency information of the physical part corresponding to the canopy ceiling in the first monitoring area.
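The projection correction of relational expression (6) and the spectrum-peak reading can be sketched together; the frame rate, calibration value, and function names here are assumptions for illustration.

```python
import numpy as np

def physical_vibration_and_peak(S, gamma_mm_per_px, alpha_deg, fs):
    """Convert the image-plane vibration signal S (pixels) to the physical
    signal R (mm) via R = S * gamma * 1/cos(alpha), then read the dominant
    vibration frequency from the Fourier spectrum.  fs is the camera frame
    rate; all names are illustrative."""
    R = S * gamma_mm_per_px / np.cos(np.deg2rad(alpha_deg))
    spec = np.abs(np.fft.rfft(R - R.mean()))      # one-sided amplitude spectrum
    freqs = np.fft.rfftfreq(len(R), d=1.0 / fs)   # frequency axis in Hz
    return R, freqs[np.argmax(spec)]              # peak frequency = first-order frequency

fs = 100.0                                        # assumed camera frame rate, Hz
t = np.arange(0, 2.0, 1.0 / fs)
S = np.sin(2 * np.pi * 7.0 * t)                   # synthetic 7 Hz image-plane signal
R, f_peak = physical_vibration_and_peak(S, gamma_mm_per_px=0.5, alpha_deg=30.0, fs=fs)
print(round(f_peak, 1))                           # 7.0
```

Higher-order frequencies would be read the same way, from the remaining local maxima of the spectrum.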
In summary, in the vibration measurement method based on visual processing provided in the embodiments of the present application, the image is filtered by the designed image filter and the pixel coordinate information in the image is used directly, so there is no need to identify in advance any special features contained in the image, such as artificially set feature target points; that is, the present application can directly use the feature information of the target detection object itself, without pasting or spraying artificially set feature target points on the surface of the target detection object. In addition, in the vibration measurement method based on visual processing provided by the embodiments of the present application, each local vibration condition of the target detection object can be analyzed by frame-selecting a plurality of monitoring areas.
The embodiment of the application also provides a vibration measurement system based on visual processing, which comprises:
the acquisition module is used for acquiring a vibration image sequence of the target detection object in a vibration state, wherein the vibration image sequence comprises a plurality of vibration images which are ordered according to time sequence;
the frame selection module is used for selecting at least one monitoring area in the image processing display interface in a frame mode, wherein each monitoring area at least covers part of the target detection object image in the vibration image;
The extraction module is used for extracting the brightness information of each pixel point in a first monitoring area, wherein the first monitoring area is any monitoring area in the at least one monitoring area;
the filtering processing module is used for filtering the brightness information of each pixel point in the extracted first monitoring area;
the first calculation module is used for respectively calculating the phase of each pixel point in the first monitoring area in the current frame vibration image according to the brightness information of each pixel point in the filtered first monitoring area;
the second calculation module is used for calculating the phase difference between the phase of the vibration image of the current frame and the phase of the vibration image of the first frame of each pixel point in the first monitoring area respectively;
the weighting processing module is used for carrying out weighting processing on the obtained phase difference of each pixel point in the first monitoring area to obtain a weighted phase difference of each pixel point in the first monitoring area in the vibration image of the current frame;
the third calculation module is used for summing the weighted phase differences of all the pixel points in the first monitoring area in the current frame vibration image to obtain the object vibration quantity of the first monitoring area in the current frame vibration image;
And the generation module is used for generating a vibration signal of the target detection object image in the first monitoring area according to the object vibration quantity of the first monitoring area in each obtained frame of vibration image.
The filtering processing module comprises a Fourier transform module, an image filter and an inverse Fourier transform module;
the Fourier transform module is used for carrying out Fourier transform on the brightness information of each pixel point in the first monitoring area to obtain a Fourier spectrum response function of the brightness information of each pixel point in the first monitoring area;
the image filter is used for filtering the Fourier spectrum response function to obtain a filtering response function of each pixel point in the first monitoring area, wherein the image filter H (u, v) meets a first relational expression, and the first relational expression is as follows:
wherein W represents the passband bandwidth of the image filter, D(u, v) represents the distance from (u, v) to the origin of the frequency plane, and D₀ represents the cut-off frequency;
an inverse Fourier transform module, configured to perform an inverse Fourier transform on the filter response function to obtain the filtered luminance information of each pixel point in the first monitoring area, wherein the filtered luminance information satisfies a second relational expression, and the second relational expression is:
wherein F̃(u, v) represents the Fourier spectrum response function after filtering, x represents the pixel abscissa of the pixel point, y represents the pixel ordinate of the pixel point, M and N represent the size of the vibration image, j represents the imaginary unit, u represents the frequency variable in the x direction, and v represents the frequency variable in the y direction.
The weighting processing module comprises an acquisition sub-module and a weighting processing sub-module;
the acquisition sub-module is configured to acquire the luminance information of the preset adjacent pixel points corresponding to each pixel point in the first monitoring area in the vibration image of the current frame;
the weighting processing sub-module is configured to weight a phase difference of each pixel point in the first monitoring area in the current frame vibration image according to a third relational expression, so as to obtain a weighted phase difference of each pixel point in the first monitoring area in the current frame vibration image, where the third relational expression is:
wherein Δφ̄ᵢ(x, y) represents the weighted phase difference of each pixel point in the first monitoring area in the i-th frame vibration image, f̃ᵢ(k, l) represents the luminance information of a preset adjacent pixel point corresponding to the pixel point with pixel coordinates (x, y) in the i-th frame vibration image, Δφᵢ(x, y) represents the phase difference corresponding to the pixel point with pixel coordinates (x, y) in the i-th frame vibration image, m < k < n, m < l < n, and m and n represent pixel coordinate values of the preset adjacent pixel points corresponding to the pixel point with pixel coordinates (x, y).
The system further comprises a conversion module and a determination module:
the conversion module is used for converting the generated vibration signals of the target detection object images in each monitoring area into the vibration signals of the entity parts corresponding to the target detection object images in each monitoring area;
and the determining module is used for determining the vibration frequency information of each entity part of the target detection object according to the vibration signals of the entity part corresponding to the target detection object image in each monitoring area.
If the target detection object is a canopy ceiling, the conversion module is configured to determine a vibration signal of a physical portion corresponding to the canopy ceiling image in the first monitoring area according to a fourth relational expression, where the fourth relational expression is:
R=S×γ×1/cos(α)
wherein R represents a vibration signal of a solid portion corresponding to the canopy image in the first monitoring area, S represents a vibration signal of the canopy image in the first monitoring area, γ represents a pixel equivalent ratio, and α represents an included angle between a solid vibration direction of the canopy and a vibration direction of the acquired canopy image;
and the determining module is used for carrying out Fourier transform on the vibration signals of the entity part corresponding to the canopy image in the first monitoring area to obtain the vibration frequency information of the entity part corresponding to the canopy image in the first monitoring area.
For the same or similar parts among the various embodiments in this specification, reference may be made to one another. In particular, for the system embodiments, since they are substantially similar to the method embodiments, the description is relatively simple; for relevant details, reference may be made to the description of the method embodiments.
The foregoing detailed description has been provided for the purposes of illustration in connection with specific embodiments and exemplary examples, but such description is not to be construed as limiting the application. Those skilled in the art will appreciate that various equivalent substitutions, modifications and improvements may be made to the technical solution of the present application and its embodiments without departing from the spirit and scope of the present application, and these all fall within the scope of the present application. The scope of the application is defined by the appended claims.
In a specific implementation, the embodiments of the present application further provide a computer-readable storage medium, which may store a program that, when executed, performs some or all of the steps of each embodiment of the vibration measurement method and system based on visual processing provided herein. The storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), a random access memory (RAM), or the like.
It will be apparent to those skilled in the art that the techniques in the embodiments of the present application may be implemented by software plus a necessary general hardware platform. Based on such understanding, the technical solutions in the embodiments of the present application, in essence or in the part contributing to the prior art, may be embodied in the form of a software product, which may be stored in a storage medium such as a ROM/RAM, a magnetic disk, or an optical disk, and which includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute the methods described in the embodiments, or in parts of the embodiments, of the present application.
The above-described embodiments of the present application are not intended to limit the scope of the present application.
Claims (10)
1. A vibration measurement method based on visual processing, comprising:
acquiring a vibration image sequence of a target detection object in a vibration state, wherein the vibration image sequence comprises a plurality of vibration images which are ordered according to time sequence;
selecting at least one monitoring area in an image processing display interface, wherein each monitoring area at least covers part of the target detection object image in the vibration image;
The following steps are executed for each monitoring area in each frame of vibration image:
extracting brightness information of each pixel point in a first monitoring area, wherein the first monitoring area is any monitoring area in the at least one monitoring area;
filtering the brightness information of each pixel point in the extracted first monitoring area;
according to the brightness information of each pixel point in the filtered first monitoring area, the phase of each pixel point in the first monitoring area in the vibration image of the current frame is calculated;
respectively calculating the phase difference between the phase of the vibration image of the current frame and the phase of the vibration image of the first frame of each pixel point in the first monitoring area;
weighting the phase difference of each pixel point in the first monitoring area to obtain a weighted phase difference of each pixel point in the first monitoring area in the current frame vibration image;
summing the weighted phase differences of all the pixel points in the first monitoring area in the current frame vibration image to obtain the object vibration quantity of the first monitoring area in the current frame vibration image;
and generating a vibration signal of the target detection object image in the first monitoring area according to the object vibration quantity of the first monitoring area in each obtained frame of vibration image.
2. The method of claim 1, wherein filtering the extracted luminance information of each pixel point in the first monitoring area comprises:
performing Fourier transform on the brightness information of each pixel point in the first monitoring area to obtain a Fourier spectrum response function of the brightness information of each pixel point in the first monitoring area;
filtering the Fourier spectrum response function through an image filter H (u, v) to obtain a filtering response function of each pixel point in a first monitoring area, wherein the image filter H (u, v) meets a first relational expression, and the first relational expression is as follows:
wherein W represents the passband bandwidth of the image filter, D(u, v) represents the distance from (u, v) to the origin of the frequency plane, and D₀ represents the cut-off frequency;
performing an inverse Fourier transform on the filter response function to obtain filtered luminance information of each pixel point in the first monitoring area, wherein the filtered luminance information satisfies a second relational expression, and the second relational expression is:
3. The method of claim 1, wherein weighting the phase difference of each pixel point in the first monitored area to obtain a weighted phase difference of each pixel point in the first monitored area in the current frame vibration image comprises:
acquiring luminance information of preset adjacent pixel points corresponding to each pixel point in the first monitoring area in the vibration image of the current frame;
the phase difference of each pixel point in the first monitoring area in the vibration image of the current frame is weighted according to a third relation, so that the weighted phase difference of each pixel point in the first monitoring area in the vibration image of the current frame is obtained, wherein the third relation is as follows:
wherein Δφ̄ᵢ(x, y) represents the weighted phase difference of each pixel point in the first monitoring area in the i-th frame vibration image, f̃ᵢ(k, l) represents the luminance information of a preset adjacent pixel point corresponding to the pixel point with pixel coordinates (x, y) in the i-th frame vibration image, Δφᵢ(x, y) represents the phase difference corresponding to the pixel point with pixel coordinates (x, y) in the i-th frame vibration image, m < k < n, m < l < n, and m and n represent pixel coordinate values of the preset adjacent pixel points corresponding to the pixel point with pixel coordinates (x, y).
4. The method according to claim 1, wherein the method further comprises:
Converting the generated vibration signals of the target detection object images in each monitoring area into vibration signals of the entity parts corresponding to the target detection object images in each monitoring area;
and determining the vibration frequency information of each entity part of the target detection object according to the vibration signals of the entity parts corresponding to the target detection object images in each monitoring area.
5. The method of claim 4, wherein if the target detection object is a canopy ceiling, the method comprises:
determining a vibration signal of an entity part corresponding to the canopy image in the first monitoring area according to a fourth relational expression, wherein the fourth relational expression is as follows:
R=S×γ×1/cos(α)
wherein R represents a vibration signal of a solid portion corresponding to the canopy image in the first monitoring area, S represents a vibration signal of the canopy image in the first monitoring area, γ represents a pixel equivalent ratio, and α represents an included angle between a solid vibration direction of the canopy and a vibration direction of the acquired canopy image;
and carrying out Fourier transform on the vibration signals of the entity part corresponding to the canopy image in the first monitoring area to obtain the vibration frequency information of the entity part corresponding to the canopy image in the first monitoring area.
6. A vision processing-based vibration measurement system, comprising:
the acquisition module is used for acquiring a vibration image sequence of the target detection object in a vibration state, wherein the vibration image sequence comprises a plurality of vibration images which are ordered according to time sequence;
the frame selection module is used for selecting at least one monitoring area in the image processing display interface in a frame mode, wherein each monitoring area at least covers part of the target detection object image in the vibration image;
the extraction module is used for extracting the brightness information of each pixel point in a first monitoring area, wherein the first monitoring area is any monitoring area in the at least one monitoring area;
the filtering processing module is used for filtering the brightness information of each pixel point in the extracted first monitoring area;
the first calculation module is used for respectively calculating the phase of each pixel point in the first monitoring area in the current frame vibration image according to the brightness information of each pixel point in the filtered first monitoring area;
the second calculation module is used for calculating the phase difference between the phase of the vibration image of the current frame and the phase of the vibration image of the first frame of each pixel point in the first monitoring area respectively;
The weighting processing module is used for carrying out weighting processing on the obtained phase difference of each pixel point in the first monitoring area to obtain a weighted phase difference of each pixel point in the first monitoring area in the vibration image of the current frame;
the third calculation module is used for summing the weighted phase differences of all the pixel points in the first monitoring area in the current frame vibration image to obtain the object vibration quantity of the first monitoring area in the current frame vibration image;
and the generation module is used for generating a vibration signal of the target detection object image in the first monitoring area according to the object vibration quantity of the first monitoring area in each obtained frame of vibration image.
7. The system of claim 6, wherein the filtering processing module comprises a fourier transform module, an image filter, and an inverse fourier transform module;
the Fourier transform module is used for carrying out Fourier transform on the brightness information of each pixel point in the first monitoring area to obtain a Fourier spectrum response function of the brightness information of each pixel point in the first monitoring area;
the image filter is used for filtering the Fourier spectrum response function to obtain a filtering response function of each pixel point in the first monitoring area, wherein the image filter H (u, v) meets a first relational expression, and the first relational expression is as follows:
wherein W represents the passband bandwidth of the image filter, D(u, v) represents the distance from (u, v) to the origin of the frequency plane, and D₀ represents the cut-off frequency;
an inverse Fourier transform module, configured to perform an inverse Fourier transform on the filter response function to obtain the filtered luminance information of each pixel point in the first monitoring area, wherein the filtered luminance information satisfies a second relational expression, and the second relational expression is:
8. The system of claim 6, wherein the weighting processing module comprises an acquisition sub-module and a weighting processing sub-module;
the acquisition sub-module is configured to acquire luminance information of preset adjacent pixel points corresponding to each pixel point in the first monitoring area in the vibration image of the current frame;
the weighting processing sub-module is configured to weight a phase difference of each pixel point in the first monitoring area in the current frame vibration image according to a third relational expression, so as to obtain a weighted phase difference of each pixel point in the first monitoring area in the current frame vibration image, where the third relational expression is:
wherein Δφ̄ᵢ(x, y) represents the weighted phase difference of each pixel point in the first monitoring area in the i-th frame vibration image, f̃ᵢ(k, l) represents the luminance information of a preset adjacent pixel point corresponding to the pixel point with pixel coordinates (x, y) in the i-th frame vibration image, Δφᵢ(x, y) represents the phase difference corresponding to the pixel point with pixel coordinates (x, y) in the i-th frame vibration image, m < k < n, m < l < n, and m and n represent pixel coordinate values of the preset adjacent pixel points corresponding to the pixel point with pixel coordinates (x, y).
9. The system of claim 6, further comprising a conversion module and a determination module:
the conversion module is used for converting the generated vibration signals of the target detection object images in each monitoring area into the vibration signals of the entity parts corresponding to the target detection object images in each monitoring area;
and the determining module is used for determining the vibration frequency information of each entity part of the target detection object according to the vibration signals of the entity part corresponding to the target detection object image in each monitoring area.
10. The system of claim 9, wherein if the target detection object is a canopy, the conversion module is configured to determine a vibration signal of a physical portion corresponding to the canopy image in the first monitoring area according to a fourth relational expression, where the fourth relational expression is:
R=S×γ×1/cos(α)
Wherein R represents a vibration signal of a solid portion corresponding to the canopy image in the first monitoring area, S represents a vibration signal of the canopy image in the first monitoring area, γ represents a pixel equivalent ratio, and α represents an included angle between a solid vibration direction of the canopy and a vibration direction of the acquired canopy image;
and the determining module is used for carrying out Fourier transform on the vibration signals of the entity part corresponding to the canopy image in the first monitoring area to obtain the vibration frequency information of the entity part corresponding to the canopy image in the first monitoring area.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110799504.4A CN113538580B (en) | 2021-07-15 | 2021-07-15 | Vibration measurement method and system based on visual processing |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113538580A CN113538580A (en) | 2021-10-22 |
CN113538580B true CN113538580B (en) | 2023-06-16 |
Family
ID=78099383
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2003005302A1 (en) * | 2001-07-05 | 2003-01-16 | Advantest Corporation | Image processing apparatus and image processing method |
JP2020186957A (en) * | 2019-05-10 | 2020-11-19 | 国立大学法人広島大学 | Vibration analysis system, vibration analysis method, and program |
CN112254801A (en) * | 2020-12-21 | 2021-01-22 | 浙江中自庆安新能源技术有限公司 | Micro-vibration vision measurement method and system |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111354042B (en) * | 2018-12-24 | 2023-12-01 | 深圳市优必选科技有限公司 | Feature extraction method and device of robot visual image, robot and medium |
Non-Patent Citations (1)
Title |
---|
Research directions and keywords for computer-assisted project acceptance: 2012 acceptance status and notes for 2013; Ma Huizhu; Song Zhaohui; Ji Fei; Hou Jia; Xiong Xiaoyun; Journal of Electronics & Information Technology (Issue 01); full text *
Also Published As
Publication number | Publication date |
---|---|
CN113538580A (en) | 2021-10-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110108348B (en) | Thin-wall part micro-amplitude vibration measurement method and system based on motion amplification optical flow tracking | |
Chen et al. | Modal identification of simple structures with high-speed video using motion magnification | |
CN109598794B (en) | Construction method of three-dimensional GIS dynamic model | |
CN111383285B (en) | Sensor fusion calibration method and system based on millimeter wave radar and camera | |
CN109931878A (en) | A kind of building curtain wall seismic deformation monitoring method based on digital speckle label | |
JP7438220B2 (en) | Reinforcing bar determination device and reinforcing bar determination method | |
CN111047568A (en) | Steam leakage defect detection and identification method and system | |
US20150227806A1 (en) | Object information extraction apparatus, object information extraction program, and object information extraction method | |
Lee et al. | Diagnosis of crack damage on structures based on image processing techniques and R-CNN using unmanned aerial vehicle (UAV) | |
JP4701383B2 (en) | Visual field defect evaluation method and visual field defect evaluation apparatus | |
CN113155032A (en) | Building structure displacement measurement method based on dynamic vision sensor DVS | |
Zhu et al. | A robust structural vibration recognition system based on computer vision | |
WO2020174916A1 (en) | Imaging system | |
Chen et al. | Video camera-based vibration measurement for Condition Assessment of Civil Infrastructure | |
Zhu et al. | A Noval Building Vibration Measurement system based on Computer Vision Algorithms | |
CN113538580B (en) | Vibration measurement method and system based on visual processing | |
CN110944154B (en) | Method for marking and identifying fixed object in high-altitude lookout camera image | |
CN101149803A (en) | Small false alarm rate test estimation method for point source target detection | |
Lelégard et al. | Multiscale Haar transform for blur estimation from a set of images | |
CN111289087A (en) | Remote machine vision vibration measurement method and device | |
CN114184127B (en) | Single-camera target-free building global displacement monitoring method | |
Chen et al. | Modal frequency identification of stay cables with ambient vibration measurements based on nontarget image processing techniques | |
CN115761487A (en) | Method for quickly identifying vibration characteristics of small and medium-span bridges based on machine vision | |
CN110472085A (en) | 3-D image searching method, system, computer equipment and storage medium | |
CN108363985B (en) | Target object perception system testing method and device and computer readable storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
GR01 | Patent grant ||