CN116152716A - Identification method for lost mode in binocular vision dynamics mode parameter identification - Google Patents

Identification method for lost mode in binocular vision dynamics mode parameter identification

Info

Publication number
CN116152716A
Authority
CN
China
Prior art keywords
video
modal
binocular vision
mode
test
Prior art date
Legal status
Granted
Application number
CN202310166030.9A
Other languages
Chinese (zh)
Other versions
CN116152716B (en)
Inventor
胡育佳
赵浩兰
朱坚民
姚宇朕
孙一泽
朱宸辰
Current Assignee
University of Shanghai for Science and Technology
Original Assignee
University of Shanghai for Science and Technology
Priority date
Filing date
Publication date
Application filed by University of Shanghai for Science and Technology
Priority to CN202310166030.9A
Publication of CN116152716A
Application granted
Publication of CN116152716B
Active legal status
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0004 Industrial image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30 Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Quality & Reliability (AREA)
  • Measurement Of Mechanical Vibrations Or Ultrasonic Waves (AREA)

Abstract

The invention belongs to the field of binocular vision dynamics modal parameter identification and analysis, and provides a method for identifying lost modes in binocular vision dynamics modal parameter identification. The invention relates to the field of dynamic signal processing, in particular to the measurement of structural dynamic characteristics. It combines a video motion magnification technique with a non-contact binocular vision dynamic modal parameter identification test system: the acquired test images are magnified, and incomplete test data affected by modal loss are processed to recover effective and reliable modal parameters. This improves the integrity of the test data, reduces repeated tests, lowers test cost and improves efficiency.

Description

Identification method for lost mode in binocular vision dynamics mode parameter identification
Technical Field
The invention belongs to the field of binocular vision dynamics modal parameter identification and analysis, and particularly relates to a method for identifying lost modes in binocular vision dynamics modal parameter identification.
Background
With the continuous development of modern science and technology, various precise and complex mechanical structures have emerged in mechanical research, such as aero-engine blades and spacecraft shells in the aerospace field, and engine crankshafts of high-speed racing cars and military ships in the transportation field. Beyond structural appearance design, the requirements on the mechanical properties of such structures keep rising. Modal testing is an important means of studying the mechanical characteristics of a structure: it yields the natural frequencies, mode shapes and damping ratios of the structure, clarifying the character of each principal mode within a sensitive frequency range. From these, the actual response of the structure under vibration sources inside or outside that range can be predicted, and the stability of the structure's mechanical performance can be judged.
However, with existing experimental modal testing methods, noise from external vibration and other interfering vibration signals is easily introduced during the test, so that noise becomes confused with the vibration signal of the research target. For some precision mechanical structures, because of their small volume, light weight, high precision and sensitivity to external forces, the conventional contact modal test based on attached velocity sensors cannot obtain accurate modal parameters, since the attached mass alters the structure's dynamics.
In extreme environments in particular, where traditional contact sensors cannot be used, non-contact binocular vision dynamics modal parameter identification is widely applied to structural dynamics mode identification. However, during the modal test, factors such as excitation and the choice of test positions inevitably lead to incomplete mode identification and partial loss of modal data, which affects the accuracy and reliability of the test results and reduces the usable sample size of result data available for subsequent analysis.
To solve the problems of incomplete extraction of partial modal parameter data and loss of mode-shape data caused by random external excitation, noise and other influences, the invention provides the following identification method.
Disclosure of Invention
In order to solve the above technical problems, the invention provides a method for identifying lost modes in binocular vision dynamics modal parameter identification, aiming at the problems in existing non-contact modal testing technology. The technical scheme adopted by the invention is as follows:
a method for identifying a lost mode in binocular vision dynamics mode parameter identification comprises the following steps:
step one: prefabricating speckles on the surface of a measured object;
step two: applying random excitation to the tested object, and collecting a motion state image of the test piece for a certain time through a binocular vision system;
step three: selecting a region to be detected of a detected object, and extracting displacement of the region in an acquisition time period through images acquired by two cameras of a binocular vision system;
step four: obtaining a power spectrum density function, a natural frequency and a corresponding vibration mode of the measured object through the transient displacement extracted from the area;
step five: judging whether the modal parameter identification in the area is complete or not, if so, carrying out step six to step ten on the frequency segment corresponding to the modal parameter, and if not, carrying out step eleven;
step six: respectively synthesizing images acquired by two high-speed cameras of a binocular vision system into videos in a time sequence;
step seven: performing video motion amplification processing on the synthesized video;
step eight: decomposing the video subjected to video motion amplification into images in a time sequence, and extracting displacement of the detected object again;
step nine: obtaining a power spectrum density function through the extracted displacement variation of the measured object for a certain time, and extracting corresponding natural frequency and corresponding modal parameters aiming at a frequency segment after video motion amplification processing;
step ten: verifying whether the natural frequency peak of the power spectrum density function in the step nine corresponds to a truly lost modal parameter, and verifying whether the amplification is effective or not by comparing the step vibration mode calculated through simulation with a test vibration mode subjected to video motion amplification treatment;
step eleven: and obtaining complete modal data.
Further, in steps four and nine, the natural frequencies of the structure and the corresponding modal parameters such as mode shapes are identified from the extracted displacement by a Bayesian operational modal analysis method.
Further, in step ten, whether a natural-frequency peak of the power spectral density function of step nine is a truly missing modal parameter is verified by means of the singular spectrum.
Further, in step seven, according to the sampling frequency used in image acquisition, video motion magnification by a specific factor is applied to the video for the frequency segment to be processed.
Further, in step seven, when video motion magnification is performed, after spatial decomposition, temporal filtering, phase denoising and motion magnification, each sub-band image is reconstructed using the complex steerable pyramid to obtain the magnified video image.
Further, in the spatial decomposition, the input synthesized video is decomposed to obtain the sub-band images of each frame of the synthesized video.
Further, in the second step (the temporal filtering), a temporal band-pass filter is used to filter the phase of the image at each spatial position, in each direction and at each scale.
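As a rough illustration of this temporal filtering step, the sketch below band-pass filters the phase time series of a single sub-band pixel; the frame rate, band edges, filter order and signal amplitudes are illustrative assumptions, not values prescribed by the invention.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass_phase(phase, fs, f_lo, f_hi, order=4):
    """Band-pass filter a per-pixel phase time series.

    phase      : 1-D array, phase of one sub-band pixel relative to
                 the first frame (radians)
    fs         : frame rate in Hz
    f_lo, f_hi : edges (Hz) of the frequency segment whose motion
                 should be isolated
    """
    nyq = 0.5 * fs
    b, a = butter(order, [f_lo / nyq, f_hi / nyq], btype="band")
    # filtfilt gives zero-phase filtering, so the retained motion
    # is not shifted in time
    return filtfilt(b, a, phase)

# Illustrative data: a weak 12 Hz motion next to a strong 90 Hz one
fs = 1000.0
t = np.arange(0, 2.0, 1.0 / fs)
phase = 0.01 * np.sin(2 * np.pi * 12 * t) + 0.05 * np.sin(2 * np.pi * 90 * t)
isolated = bandpass_phase(phase, fs, 8.0, 16.0)
```

After this step only the phase variation in the chosen band remains, which is what gets amplified in the later magnification step.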
The invention has the following beneficial effects:
the invention combines the video motion magnification technique with the non-contact binocular vision dynamic modal parameter identification test system, magnifies the acquired test images, and processes incomplete test data affected by modal loss to obtain effective and reliable modal parameter data, thereby improving the integrity of the test data, reducing repeated tests, lowering test cost and improving efficiency.
Drawings
FIG. 1 is a flow chart of the method for identifying lost modes in binocular vision dynamics modal parameter identification according to the present invention;
FIG. 2 is a schematic flow chart of video motion magnification;
FIG. 3 is a time-slice comparison of part of the measured object before and after video magnification;
FIG. 4 compares the power spectral density functions identified before and after video magnification, wherein (a) is before magnification and (b) is after magnification;
FIG. 5 shows a local power spectral density function identified after video magnification and the corresponding singular spectrum, wherein (a) is the power spectral density function and (b) is the singular spectrum;
FIG. 6 compares the mode shape corresponding to the natural frequency identified after video magnification with the simulation result, where (a) is the mode shape corresponding to the natural frequency, and (b) is the simulation result for the measured object obtained by finite-element modal simulation.
Detailed Description
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to figs. 1 to 6. The described embodiments are only some, not all, embodiments of the invention, and unless otherwise indicated the technical means used in the embodiments are conventional means known to those skilled in the art.
The invention provides a method for identifying lost modes in binocular vision dynamics modal parameter identification. By introducing a video motion magnification technique, after the full-field transient deformation images of the structure surface have been obtained through binocular identification, the effective video information of the frequency band in which information loss occurs is magnified, the modal parameters of the lost segment are identified, and the natural frequency, mode shape and damping characteristics of the structure in the lost segment are obtained. The invention relates to the field of dynamic signal processing, in particular to the measurement of structural dynamics.
The complete technical scheme of the invention is as follows:
referring to fig. 1, a method for identifying a lost mode in binocular vision dynamics mode parameter identification includes the following steps:
step one: prefabricating speckle with proper size, certain density and randomness on the surface of a measured object;
step two: applying random excitation to the tested object, and collecting a motion state image of the test piece (namely the tested object) for a certain time at a high frequency (preferably in a frequency range of 500hz-3000 hz) through a binocular vision system;
step three: selecting an area of interest of a measured object, and extracting displacement of the area in a collected time period from images collected by two cameras of a binocular vision system;
wherein the region of interest is a region where subsequent analysis is required;
step four: obtaining a power spectrum density function, natural frequency, corresponding mode parameters such as vibration mode and the like of the measured object through the transient displacement extracted from the region;
wherein the modal parameters are natural frequency, vibration mode and the like.
Step five: judging whether the modal parameter identification in the area is complete or not, if so, carrying out step six to step ten on the frequency segment corresponding to the modal parameter, and if not, carrying out step eleven;
step six: respectively synthesizing images acquired by two high-speed cameras of a binocular vision system into videos in a time sequence;
step seven: amplifying the synthesized video, and amplifying the video motion of a specific multiple aiming at a frequency segment to be processed according to the sampling frequency used in image acquisition;
step eight: decomposing the video subjected to video motion amplification into images in a time sequence, and extracting displacement of the detected object again;
step nine: obtaining a power spectrum density function through the extracted displacement variation of the measured object for a certain time, and extracting corresponding natural frequency and corresponding modal parameters aiming at a frequency segment after video motion amplification processing;
in the fourth step and the ninth step, the natural frequency of the structure and the corresponding modal parameters such as the vibration mode are identified by a bayesian operation modal calculation method aiming at the extracted displacement.
Step ten: verifying whether the natural frequency peak of the power spectrum density function in the step nine corresponds to a truly lost modal parameter, and verifying whether the amplification is effective or not by comparing the step vibration mode calculated through simulation with a test vibration mode subjected to video motion amplification treatment;
wherein, verifying whether the natural frequency peak of the power spectral density function in step nine is truly physically present through singular spectrum, not due to test error;
step eleven: and obtaining complete modal data.
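Steps four and nine estimate a power spectral density from the extracted displacement and read off natural-frequency peaks. The sketch below illustrates that idea with a plain Welch estimate and peak picking on a synthetic displacement signal; the invention itself uses a Bayesian operational modal method, so this is only a simplified stand-in, and all signal parameters here are invented for the demonstration.

```python
import numpy as np
from scipy.signal import welch, find_peaks

def psd_peaks(displacement, fs, prominence_db=5.0):
    """Welch power spectral density of an extracted displacement
    signal, plus candidate natural frequencies from peak picking."""
    f, pxx = welch(displacement, fs=fs, nperseg=1024)
    pxx_db = 10.0 * np.log10(pxx + 1e-30)
    # peaks that rise clearly above the local noise floor
    idx, _ = find_peaks(pxx_db, prominence=prominence_db)
    return f, pxx, f[idx]

# Synthetic response with modes near 40 Hz and 120 Hz plus noise
fs = 2000.0
t = np.arange(0, 4.0, 1.0 / fs)
rng = np.random.default_rng(0)
disp = (np.sin(2 * np.pi * 40 * t)
        + 0.5 * np.sin(2 * np.pi * 120 * t)
        + 0.05 * rng.standard_normal(t.size))
f, pxx, candidates = psd_peaks(disp, fs)
```

A frequency segment where no candidate peak appears, although a mode is expected there, is exactly the "lost mode" situation that triggers steps six to ten.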
Referring to fig. 2, in step seven the video motion magnification process comprises the following steps:
the first step: spatial decomposition. The input video is decomposed at multiple resolutions in the spatial domain of the complex steerable pyramid, obtaining sub-band images of each frame at different positions, orientations and scales, so that the amplitude spectrum and phase spectrum of each sub-band can be separated and extracted, laying the groundwork for the subsequent magnification and reconstruction;
the second step: temporal filtering. The phase spectrum obtained by spatial decomposition is extracted, the phase difference between each frame and the first frame of the decomposed video is computed along the time axis, and a temporal band-pass filter is applied to the phase at each spatial position, orientation and scale, so as to obtain the motion information of interest in the phase differences of the corresponding frequency band;
the third step: phase denoising. To improve the phase signal-to-noise ratio (SNR), amplitude-weighted spatial smoothing is applied to the filtered phase signal; this step improves the final output;
the fourth step: magnification. The temporally filtered and denoised phase signal is amplified or attenuated to achieve the effect required by the experiment;
the fifth step: video synthesis. Each sub-band image is reconstructed using the complex steerable pyramid, thereby obtaining the magnified video image.
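The five steps above can be sketched in one dimension. The toy below replaces the complex steerable pyramid with a single analytic-signal band (a Hilbert transform along the pixel row), band-pass filters the per-pixel phase in time, amplifies it, and resynthesizes the frames; it omits phase denoising and the multi-scale, multi-orientation decomposition of the real pipeline, and every parameter is illustrative.

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

def magnify_motion_1d(frames, fs, f_lo, f_hi, alpha):
    """Phase-based motion magnification for a 1-D row of pixels.

    frames : (T, N) array, one image row per video frame
    alpha  : factor applied to the band-passed phase changes
    """
    analytic = hilbert(frames, axis=1)               # spatial decomposition
    amp = np.abs(analytic)
    phase = np.unwrap(np.angle(analytic), axis=1)
    dphase = phase - phase[0]                        # change vs first frame
    nyq = 0.5 * fs
    b, a = butter(2, [f_lo / nyq, f_hi / nyq], btype="band")
    band = filtfilt(b, a, dphase, axis=0)            # temporal filtering
    # amplify the in-band phase and resynthesize the frames
    return amp * np.cos(phase + alpha * band)

# Demo: a row pattern translating sub-pixel at 5 Hz, magnified 4x
fs = 100.0
t = np.arange(200) / fs
x = np.arange(128)
k = 2 * np.pi * 4 / 128                  # 4 spatial cycles across the row
d = 0.5 * np.sin(2 * np.pi * 5 * t)      # true motion, half a pixel
frames = np.cos(k * (x[None, :] - d[:, None]))
magnified = magnify_motion_1d(frames, fs, 3.0, 7.0, alpha=4.0)
```

The magnified row moves with an amplitude roughly (1 + alpha) times the original, which is the effect the method relies on to lift weak modal motion above the noise.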
An existing non-contact binocular vision dynamic modal parameter identification test system can obtain the modal parameters of the measured object by testing, but partial loss of modal parameter data can occur, especially when the test environment is not ideal or the measured object is in a special environment such as high temperature.
Therefore, in order to obtain reliable and complete test data, the measured object often needs to be measured repeatedly. However, many test objects are consumables with high unit prices, so repeated tests cannot be carried out and a single test is costly. Moreover, in extreme-environment tests the cost of simulating the working condition is high, the experiment cannot be repeated many times, identical working conditions cannot be guaranteed from run to run, and the reliability of data from repeated tests is reduced.
Here, the invention combines the video motion magnification technique with the non-contact binocular vision dynamic modal parameter identification test system and magnifies the acquired test images, which solves the above problems: incomplete test data with modal loss are processed to obtain effective and reliable modal data, improving the integrity of the test data, reducing repeated tests, lowering test cost and improving efficiency.
The method solves the problems of incomplete data extraction and loss of partial mode and mode-shape data caused by external environmental noise and other influences when non-contact modal test data are extracted, and improves the data integrity and reliability of the test system's modal identification.
Referring to fig. 3, the comparison shows that the fine motion of the measured object is magnified after processing relative to before processing, so that the displacement corresponding to that fine motion can be successfully extracted from the images.
Referring to fig. 4, the boxed area of the power spectral density plot of the original data shows no obvious peak; the slight bump is close to the noise level, and it cannot be determined whether it is an energy peak corresponding to an actual natural frequency. After video magnification the energy peak is obvious and no longer at the same level as the noise, so the natural frequency is separated from the noise and dynamic characteristic signals such as the natural frequency and mode shape can be extracted.
Referring to fig. 5, it is verified whether the magnified peak corresponds to a real natural frequency, thereby confirming that the video motion magnification was successful.
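The singular-spectrum check of fig. 5 can be imitated as follows: build the Hankel trajectory matrix of the displacement record and inspect its singular values. A physically present harmonic mode shows up as a dominant pair of singular values well above the noise floor, while a noise-only record decays without such a pair; the window length and signal parameters below are illustrative, not taken from the patent.

```python
import numpy as np

def singular_spectrum(signal, window):
    """Singular values of the Hankel trajectory matrix of a signal.

    Each row of the trajectory matrix is a lagged copy of the
    signal; a clean sinusoid makes the matrix effectively rank 2.
    """
    traj = np.lib.stride_tricks.sliding_window_view(signal, window)
    return np.linalg.svd(traj, compute_uv=False)

# Illustrative records: a 30 Hz mode in noise vs pure noise
rng = np.random.default_rng(1)
t = np.arange(2000) / 1000.0
with_mode = np.sin(2 * np.pi * 30 * t) + 0.1 * rng.standard_normal(t.size)
noise_only = 0.1 * rng.standard_normal(t.size)
s_mode = singular_spectrum(with_mode, 100)
s_noise = singular_spectrum(noise_only, 100)
```

Comparing the leading singular values against the tail gives a simple numeric criterion for "truly physically present, not test error".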
Referring to fig. 6, it is verified whether the data processed by the technique of the invention are reliable, i.e. whether the mode shape agrees with the simulation result and can be used for subsequent study.
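The patent compares the identified mode shape against a finite-element simulation (fig. 6) but does not name a metric; one common way to quantify such agreement is the Modal Assurance Criterion (MAC), sketched below with invented shape vectors.

```python
import numpy as np

def mac(phi_a, phi_b):
    """Modal Assurance Criterion between two mode-shape vectors.

    Returns a value in [0, 1]; near 1 means the shapes agree up to
    scale and sign, which is all mode shapes are defined up to.
    """
    num = abs(np.vdot(phi_a, phi_b)) ** 2
    den = np.vdot(phi_a, phi_a).real * np.vdot(phi_b, phi_b).real
    return float(num / den)

# Illustrative shapes: a "simulated" first bending shape of a
# cantilever sampled at 20 points, and a noisy, arbitrarily
# scaled "test" shape of the same mode
x = np.linspace(0.0, 1.0, 20)
phi_sim = np.sin(np.pi * x / 2)
phi_test = 3.2 * phi_sim + 0.02 * np.random.default_rng(2).standard_normal(x.size)
```

A MAC near 1 between the magnified test shape and the simulated shape of the same order would support the conclusion drawn from fig. 6.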
Rather than simply introducing a video magnification technique, the invention combines it with the non-contact binocular vision dynamics test system: the characteristics of the test system are taken into account, the magnification technique is used to process the test data, the incomplete data that would simply be lost with the non-contact binocular vision dynamics test system alone are recovered, and a subsequent procedure verifies whether the obtained data are reliable and usable for further research.
The foregoing description is only a preferred embodiment of the present invention and is not intended to limit the invention; various modifications, changes, substitutions and alterations can be made by those skilled in the art without departing from the spirit of the invention.

Claims (7)

1. A method for identifying lost modes in binocular vision dynamics modal parameter identification, characterized by comprising the following steps:
step one: prefabricating speckles on the surface of the measured object;
step two: applying random excitation to the measured object, and acquiring motion-state images of the test piece over a certain time through a binocular vision system;
step three: selecting the region of the measured object to be analysed, and extracting the displacement of the region over the acquisition period from the images acquired by the two cameras of the binocular vision system;
step four: obtaining the power spectral density function, the natural frequencies and the corresponding mode shapes of the measured object from the transient displacement extracted for the region;
step five: judging whether the modal parameter identification in the region is complete; if not, performing steps six to ten for the frequency segment of the missing modal parameters, and if complete, proceeding to step eleven;
step six: synthesizing the images acquired by the two high-speed cameras of the binocular vision system into videos, each in time sequence;
step seven: performing video motion magnification on the synthesized videos;
step eight: decomposing the motion-magnified videos back into images in time sequence, and extracting the displacement of the measured object again;
step nine: obtaining a power spectral density function from the extracted displacement of the measured object over a certain time, and extracting the corresponding natural frequencies and modal parameters for the magnified frequency segment;
step ten: verifying whether the natural-frequency peaks of the power spectral density function of step nine correspond to truly lost modal parameters, and verifying whether the magnification is effective by comparing the mode shape of the corresponding order calculated by simulation with the test mode shape obtained after video motion magnification;
step eleven: obtaining the complete modal data.
2. The identification method according to claim 1, wherein in steps four and nine the natural frequencies of the structure and the corresponding modal parameters such as mode shapes are identified from the extracted displacement by a Bayesian operational modal analysis method.
3. The identification method according to claim 1, wherein in step ten it is verified by means of the singular spectrum whether a natural-frequency peak of the power spectral density function of step nine is a truly missing modal parameter.
4. The identification method according to claim 1, wherein in step seven, according to the sampling frequency used in image acquisition, video motion magnification by a specific factor is applied to the video for the frequency segment to be processed.
5. The identification method according to claim 1, wherein in step seven, when video motion magnification is performed, after spatial decomposition, temporal filtering, phase denoising and motion magnification, each sub-band image is reconstructed using the complex steerable pyramid to obtain the magnified video image.
6. The identification method according to claim 5, wherein in the spatial decomposition the input synthesized video is decomposed to obtain the sub-band images of each frame of the synthesized video.
7. The identification method according to claim 4, wherein in the temporal filtering a temporal band-pass filter is used to filter the phase of the image at each spatial position, in each direction and at each scale.
CN202310166030.9A 2023-02-24 2023-02-24 Identification method for lost mode in binocular vision dynamics mode parameter identification Active CN116152716B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310166030.9A CN116152716B (en) 2023-02-24 2023-02-24 Identification method for lost mode in binocular vision dynamics mode parameter identification

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310166030.9A CN116152716B (en) 2023-02-24 2023-02-24 Identification method for lost mode in binocular vision dynamics mode parameter identification

Publications (2)

Publication Number Publication Date
CN116152716A (en) 2023-05-23
CN116152716B (en) 2023-12-08

Family

ID=86338808

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310166030.9A Active CN116152716B (en) 2023-02-24 2023-02-24 Identification method for lost mode in binocular vision dynamics mode parameter identification

Country Status (1)

Country Link
CN (1) CN116152716B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102679902A (en) * 2012-05-24 2012-09-19 天津大学 Thin flat plate structure resonance modal analysis system and using method thereof
CN108709627A (en) * 2018-06-25 2018-10-26 华南理工大学 Umbrella reflectors vibration measurement device and method
CN109918614A (en) * 2019-03-14 2019-06-21 合肥工业大学 A kind of global dynamic strain measure method based on mode study
CN110349257A (en) * 2019-07-16 2019-10-18 四川大学 A kind of binocular measurement missing point cloud interpolating method based on the mapping of phase puppet
CN112798253A (en) * 2021-01-20 2021-05-14 南京航空航天大学 Structural modal parameter identification method considering non-white environment load influence
CN114580464A (en) * 2022-02-10 2022-06-03 安徽大学 Human heart rate variability and respiratory rate measurement method based on variational modal decomposition and constraint independent component analysis
WO2022228958A1 (en) * 2021-04-28 2022-11-03 Bayer Aktiengesellschaft Method and apparatus for processing of multi-modal data

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
YANNICK VERDIE et al.: "CroMo: Cross-Modal Learning for Monocular Depth Estimation", 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition
HOU Chunping et al.: "Infrared and visible image fusion method based on a dual-path generative adversarial network", Laser & Optoelectronics Progress

Also Published As

Publication number Publication date
CN116152716B (en) 2023-12-08

Similar Documents

Publication Publication Date Title
Poozesh et al. Feasibility of extracting operating shapes using phase-based motion magnification technique and stereo-photogrammetry
CN106052849B (en) Method for identifying non-stationary abnormal noise source in automobile
Wang et al. Fault diagnosis of diesel engine based on adaptive wavelet packets and EEMD-fractal dimension
KR100981401B1 (en) Small displacement measuring method and instrument
Liu et al. Structural motion estimation via Hilbert transform enhanced phase-based video processing
CN104897774A (en) Eddy current microscopic construction imaging method of carbon fiber composite material
Cao et al. A New Joint Denoising Algorithm for High‐G Calibration of MEMS Accelerometer Based on VMD‐PE‐Wavelet Threshold
CN113865859B (en) Gear box state fault diagnosis method for multi-scale multi-source heterogeneous information fusion
CN111784647B (en) High-precision structural modal testing method based on video vibration amplification
Yang et al. Casing vibration fault diagnosis based on variational mode decomposition, local linear embedding, and support vector machine
Eitner et al. Modal parameter estimation of a compliant panel using phase-based motion magnification and stereoscopic digital image correlation
Molina-Viedma et al. Operational Deflection Shape Extraction from Broadband Events of an Aircraft Component Using 3D‐DIC in Magnified Images
Lv et al. Gear fault feature extraction based on fuzzy function and improved Hu invariant moments
CN110782041B (en) Structural modal parameter identification method based on machine learning
CN111353400A (en) Whole scene vibration intensity atlas analysis method based on visual vibration measurement
CN116152716B (en) Identification method for lost mode in binocular vision dynamics mode parameter identification
CN113155466A (en) Bearing fault visual vibration detection method and system
Xu et al. Rolling bearing fault feature extraction via improved SSD and a singular-value energy autocorrelation coefficient spectrum
Holak A motion magnification application in video-based vibration measurement
Peng et al. Full-field visual vibration measurement of rotating machine under complex conditions via unsupervised retinex model
Yang et al. Improving Long-Term Guided Wave Damage Detection With Measurement Resampling
Yao et al. Research on fault diagnosis of gearbox based on acoustic emission signal monitoring
CN113435487B (en) Deep learning-oriented multi-scale sample generation method
CN113280781B (en) Embedded type angular displacement online measuring method and device
CN112505779B (en) Method for removing collected footprints based on feature decomposition

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant