CN112307932A - Parameterized full-field visual vibration modal decomposition method - Google Patents

Parameterized full-field visual vibration modal decomposition method

Info

Publication number
CN112307932A
Authority
CN
China
Prior art keywords: modal, parameterized, mode, time, domain
Prior art date
Legal status: Granted
Application number
CN202011164566.XA
Other languages
Chinese (zh)
Other versions
CN112307932B (en)
Inventor
何清波
刘振
李天奇
彭志科
Current Assignee
Shanghai Jiaotong University
Original Assignee
Shanghai Jiaotong University
Priority date
Filing date
Publication date
Application filed by Shanghai Jiaotong University
Priority to CN202011164566.XA
Publication of CN112307932A
Application granted
Publication of CN112307932B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/02Preprocessing
    • G06F2218/04Denoising
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10Complex mathematical operations
    • G06F17/14Fourier, Walsh or analogous domain transformations, e.g. Laplace, Hilbert, Karhunen-Loeve, transforms
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10Complex mathematical operations
    • G06F17/16Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization

Abstract

The invention discloses a parameterized full-field visual vibration modal decomposition method, which relates to the technical field of vibration modal parameter identification and comprises the following steps: step 1, parameterizing the time-domain modal coordinates; step 2, parameterizing the spatial-domain mode shapes; step 3, constructing a time-space-domain parameterized modal superposition model; step 4, performing time-space-domain joint optimization on the parameterized model; and step 5, decomposing the parameter matrix and reconstructing the modes. By implementing the method, the high-dimensional degree-of-freedom computation difficulty brought about by the high spatial resolution of vision-based measurement is avoided, the problems of accurate decomposition and parameter identification of high-dimensional, full-field visual vibration modes are solved, the complex time-space-domain background noise in visual vibration data is suppressed, and the computational capability and efficiency are improved.

Description

Parameterized full-field visual vibration modal decomposition method
Technical Field
The invention relates to the technical field of vibration modal parameter identification, in particular to a parameterized full-field visual vibration modal decomposition method.
Background
In traditional operational modal analysis, modal analysis is generally performed on vibration signals acquired by accelerometers or laser vibrometers. Only a small number of degrees of freedom needs to be computed, typically dozens to hundreds even under a dense sensor arrangement, so the spatial resolution of the identified modal parameters is very low. In contrast, full-field modal parameter identification based on vision-based vibration measurement requires computing several thousand degrees of freedom or more, which poses a huge challenge to both the computational capability of modal identification and the hardware of the computer. Traditional operational modal analysis methods, such as frequency domain decomposition, time domain decomposition, and the stochastic subspace method, are only suitable for modal parameter identification with a small number of degrees of freedom, because they mostly rely on algorithms such as cross-power spectral density computation, principal component analysis, singular value decomposition, or other eigenvalue decompositions. Under the high-dimensional degrees of freedom produced by full-field visual displacement measurement, the matrices involved become high-dimensional or even ultra-high-dimensional, for example requiring eigenvalue or singular value decompositions of matrices on the order of tens of thousands by tens of thousands, whose computational cost is prohibitively high and can hardly be met under current computing conditions. Therefore, for operational modal analysis based on vision-based vibration measurement, an effective and fast method for handling high-dimensional degrees of freedom is urgently needed, so as to enable full-field visual modal analysis, improve computational efficiency, and save computation cost and time.
Prior-art visual vibration modal analysis methods almost without exception sidestep the computational difficulty of the high-dimensional degrees of freedom in full-field visual modal identification; they still process only a small number of degrees of freedom. Specifically, current visual vibration modal identification methods select only the motions of a small number of preset reference pixels for computing modal parameters such as mode shapes. This greatly weakens the advantage of the high spatial resolution of vision-based vibration measurement. Moreover, structural dynamic characteristics such as fatigue damage, cracks, and grinding marks are mostly local features, and mode shapes with higher spatial resolution are often needed for damage detection and localization. Therefore, how to fully exploit the advantage of high spatial resolution, solve the problem of full-field visual vibration modal parameter identification, and improve computational capability and efficiency is an important problem to be addressed in visual vibration analysis.
Therefore, those skilled in the art are dedicated to developing a parameterized full-field visual vibration modal decomposition method that overcomes the high-dimensional degree-of-freedom computation problem caused by the high spatial resolution of vision-based measurement, solves the accurate decomposition and parameter identification of high-dimensional, full-field visual vibration modes, and suppresses the complex time-space-domain background noise present in visual vibration data.
Disclosure of Invention
In view of the above drawbacks of the prior art, the technical problems to be solved by the present invention are how to handle the high-dimensional degree-of-freedom computation and how to suppress the background noise in visual vibration data.
In order to achieve the above object, the present invention provides a parameterized full-field visual vibration modal decomposition method, which comprises the following steps:
step 1, parameterizing a time domain modal coordinate;
step 2, parameterizing a spatial domain modal shape;
step 3, constructing a time-space domain parameterized modal stack model;
step 4, performing time-space domain joint optimization on the parameterized modal stack model;
and 5, decomposing a parameter matrix and reconstructing a mode.
Further, in step 1, the modal coordinate q_i(t) of the i-th order mode is:
q_i(t) = a_i(t)·cos(2π·f_i·t + ψ_i0), t = 1, 2, ..., N_T
where a_i(t) is the amplitude, f_i is the damped natural frequency, ψ_i0 is a constant phase, t is time, and N_T is the number of sampling points.
Further, step 1 also comprises: estimating the damped natural frequency from the signal power spectral density using a peak-picking algorithm, and rewriting the modal coordinate in terms of the estimated frequency (the explicit expressions are given as equation images in the original), where b_i(t) and a second amplitude function are the two new amplitudes to be estimated. These amplitudes are expanded into Fourier series (equation images in the original), whose coefficients are the two sets of amplitude Fourier coefficients; L is the Fourier order; F_0 is the frequency resolution, calculated as F_0 = f_s/(2·N_T); and f_s is the signal sampling frequency.
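For illustration only (not part of the original disclosure), the following minimal Python sketch shows how the peak-picking frequency estimate and the Fourier amplitude basis of this step could be realized numerically. The function names, the use of scipy.signal.welch and find_peaks, and the exact basis layout are assumptions, since the explicit expressions appear only as equation images in the original.

```python
import numpy as np
from scipy.signal import welch, find_peaks

def estimate_damped_frequencies(x, fs, n_modes):
    """Pick candidate damped natural frequencies from the power spectral density."""
    f, pxx = welch(x, fs=fs, nperseg=min(len(x), 2048))
    peaks, props = find_peaks(pxx, height=np.max(pxx) * 1e-3)
    top = peaks[np.argsort(props["peak_heights"])[::-1][:n_modes]]  # n_modes largest peaks
    return np.sort(f[top])

def fourier_amplitude_basis(n_samples, fs, order_L):
    """1-D Fourier basis B for the slowly varying amplitudes b_i(t), using the
    frequency resolution F0 = fs / (2 * N_T) stated in the text."""
    t = np.arange(n_samples) / fs
    F0 = fs / (2 * n_samples)
    cols = [np.ones(n_samples)]
    for l in range(1, order_L + 1):
        cols.append(np.cos(2 * np.pi * l * F0 * t))
        cols.append(np.sin(2 * np.pi * l * F0 * t))
    return np.column_stack(cols)  # shape (N_T, 2*L + 1)
```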
Further, the parameterized modal coordinate q_i(t) is discretized to obtain:
q_i = ρ_i^T·Θ_i^T
Θ_i = [A_i·B  C_i·B]
where ρ_i, A_i, B, and C_i are defined by equation images in the original, (·)^T denotes the transpose, and diag[·] denotes a diagonal matrix.
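As a hedged illustration of the discretized structure Θ_i = [A_i·B  C_i·B], the sketch below assumes that A_i and C_i are diagonal matrices holding the cosine and sine carriers at the estimated damped frequency and that B is the Fourier amplitude basis from the previous sketch; this layout is an assumption consistent with the surrounding text, not a reproduction of the original equations.

```python
import numpy as np

def time_basis_for_mode(fi_hat, n_samples, fs, order_L):
    """Assumed block Theta_i = [A_i @ B, C_i @ B]: cos/sin carriers at the
    estimated damped frequency fi_hat modulating the Fourier amplitude basis B."""
    t = np.arange(n_samples) / fs
    B = fourier_amplitude_basis(n_samples, fs, order_L)   # from the sketch above
    a_i = np.cos(2 * np.pi * fi_hat * t)[:, None]         # diagonal of A_i, applied column-wise
    c_i = np.sin(2 * np.pi * fi_hat * t)[:, None]         # diagonal of C_i
    return np.hstack([a_i * B, c_i * B])                  # shape (N_T, 2 * (2*L + 1))
```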
Further, step 2 also comprises: expanding the i-th order mode shape φ_i,θ(x, y) in the spatial domain using a two-dimensional Fourier series (the expression is given as an equation image in the original), where θ is the horizontal or vertical direction; N and M are the two-dimensional Fourier orders along the x-axis and y-axis; the expansion coefficients are the Fourier coefficients; F_w = 2π/G_w and F_h = 2π/G_h are the fundamental frequencies of the x-axis and y-axis, respectively; and h and w are the height and width of the video image, respectively.
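The spatial expansion can likewise be written as a linear basis. The illustrative sketch below builds a separable 2-D Fourier basis on the pixel grid; treating G_w and G_h as the image width and height (so that F_w = 2π/w and F_h = 2π/h) is an assumption, as is the exact ordering of the cosine/sine product terms.

```python
import numpy as np

def spatial_fourier_basis(h, w, order_N, order_M):
    """Assumed 2-D Fourier basis H for the mode-shape expansion: products of
    cos/sin terms along x (width) and y (height) up to orders N and M."""
    y, x = np.mgrid[0:h, 0:w]
    Fw, Fh = 2 * np.pi / w, 2 * np.pi / h
    cols = []
    for n in range(order_N + 1):
        for m in range(order_M + 1):
            for fx in (np.cos(n * Fw * x), np.sin(n * Fw * x)):
                for fy in (np.cos(m * Fh * y), np.sin(m * Fh * y)):
                    cols.append((fx * fy).ravel())
    H = np.column_stack(cols)                     # shape (h*w, 4*(N+1)*(M+1))
    return H[:, np.abs(H).sum(axis=0) > 1e-12]    # drop the all-zero sin(0) columns
```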
Further, step 2 also comprises: discretizing the parameterized i-th order spatial mode shape to obtain:
φ_i = H·z_i
where the basis matrix H and the coefficient vector z_i are defined by equation images in the original.
further, the step 3 further comprises: the step 3 further comprises: using parameterized time-domain modal coordinates qiAnd parameterized spatial domain mode shape phiiEstablishing a time-space domain modal superposition model, wherein the expression is as follows:
(the model expression is given as an equation image in the original)
where Q is the number of modes, and
Ω = [Ω_1 … Ω_Q]
Θ = [Θ_1 … Θ_Q].
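To make the structure of the superposition model concrete, the sketch below assembles the modelled displacement field from the parameter blocks. The matrix form D_hat = H·Ω·Θ^T with Ω_i = z_i·ρ_i^T is an assumed rewriting consistent with the per-mode definitions q_i = ρ_i^T·Θ_i^T and φ_i = H·z_i given above; the variable names are illustrative.

```python
import numpy as np

def build_stacked_model(H, theta_blocks, z_list, rho_list):
    """Assemble Theta = [Theta_1 ... Theta_Q], Omega = [Omega_1 ... Omega_Q] with
    Omega_i = z_i @ rho_i.T, and the modelled field D_hat = H @ Omega @ Theta.T."""
    Theta = np.hstack(theta_blocks)                                   # (N_T, total block width)
    Omega = np.hstack([np.outer(z, r) for z, r in zip(z_list, rho_list)])
    D_hat = H @ Omega @ Theta.T                                       # (n_pixels, N_T)
    return D_hat, Omega, Theta
```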
further, the step 4 further includes: adopting a target optimization criterion of double regular parameters, wherein an optimization parameter matrix omega is as follows:
(the criterion is given as an equation image in the original)
where ||·||_F denotes the Frobenius norm of a matrix; the last three F-norm terms are regularization terms introduced to handle the ill-conditioned problem; λ_1 and λ_2 are the two regularization parameters.
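A minimal sketch of this joint optimization step follows. It assumes the criterion penalizes the data misfit ||D - H·Ω·Θ^T||_F^2 together with three Frobenius-norm regularization terms weighted by λ_1 and λ_2, which is consistent with the "last three F norms" wording above and admits the separable closed form used in the code; since the original criterion is given only as an equation image, this form and all names are assumptions.

```python
import numpy as np

def solve_parameter_matrix(D, H, Theta, lam1, lam2):
    """Assumed criterion: min_W ||D - H W Theta.T||_F^2 + lam1*||H W||_F^2
    + lam2*||W Theta.T||_F^2 + lam1*lam2*||W||_F^2. Its stationarity condition
    (H.T H + lam1 I) W (Theta.T Theta + lam2 I) = H.T D Theta gives W directly."""
    HtH = H.T @ H
    TtT = Theta.T @ Theta
    left = np.linalg.solve(HtH + lam1 * np.eye(HtH.shape[0]), H.T @ D)     # (H'H + l1 I)^-1 H'D
    W = np.linalg.solve(TtT + lam2 * np.eye(TtT.shape[0]), (left @ Theta).T).T
    return W                                                               # low-dimensional parameter matrix
```

Note that only small matrices of basis-coefficient size are ever inverted here, never a pixel-by-pixel matrix, which is in line with the efficiency argument of the description.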
Further, the estimated parameter matrix comprises a plurality of sub-parameter matrices (as shown in the equation images in the original). Each sub-matrix is decomposed by the singular value decomposition, yielding its maximum singular value together with the corresponding left singular vector μ_i and right singular vector υ_i.
Further, the final mode shapes and modal coordinates are obtained by reconstruction (the expressions are given as equation images in the original), where the reconstructed quantities are the estimated mode shape and the estimated modal coordinates, and ||·||_∞ denotes the infinity norm of a vector.
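For illustration, the sketch below splits the estimated parameter matrix into per-mode blocks, extracts the dominant rank-1 component of each block by SVD, and rebuilds the mode shape and modal coordinate. The infinity-norm scaling mirrors the normalization mentioned above, but the exact scaling convention of the original equations is an assumption.

```python
import numpy as np

def decompose_and_reconstruct(Omega_hat, H, theta_blocks):
    """Rank-1 SVD of each sub-parameter matrix Omega_i, then reconstruction of the
    i-th full-field mode shape and its modal coordinate."""
    shapes, coords = [], []
    col = 0
    for Theta_i in theta_blocks:
        width = Theta_i.shape[1]
        Omega_i = Omega_hat[:, col:col + width]
        col += width
        U, s, Vt = np.linalg.svd(Omega_i, full_matrices=False)
        z_i, rho_i = U[:, 0], s[0] * Vt[0, :]        # dominant left/right singular vectors
        phi_i = H @ z_i                              # mode shape on the pixel grid
        scale = np.linalg.norm(phi_i, np.inf)
        shapes.append(phi_i / scale)                 # infinity-norm-normalized mode shape
        coords.append(scale * (Theta_i @ rho_i))     # matching modal coordinate
    return shapes, coords
```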
Compared with the prior art, the invention has at least the following beneficial technical effects:
1. The parameterized full-field visual vibration modal decomposition method converts high-dimensional degree-of-freedom data into a low-dimensional parameter matrix decomposition, efficiently overcoming insufficient computational capability and low computational efficiency.
2. The invention provides a time-space-domain dual-regularization-parameter target optimization criterion that suppresses time-domain and space-domain noise simultaneously and improves the estimation accuracy of the modal parameters.
The conception, the specific structure and the technical effects of the present invention will be further described with reference to the accompanying drawings to fully understand the objects, the features and the effects of the present invention.
Drawings
FIG. 1 is a basic framework diagram of the parameterized full-field visual vibration modal decomposition method of the present invention;
FIG. 2 shows the first four orders of full-field mode shapes estimated by the parameterized full-field visual vibration modal decomposition method of the present invention;
FIG. 3 shows the first four orders of full-field mode shapes reconstructed by time domain decomposition (TDD) according to an embodiment of the present invention.
Detailed Description
The technical contents of the preferred embodiments of the present invention will be more clearly and easily understood by referring to the drawings attached to the specification. The present invention may be embodied in many different forms of embodiments and the scope of the invention is not limited to the embodiments set forth herein.
As shown in fig. 1, a parameterized full-field visual vibration mode decomposition method includes the following steps:
step 1, parameterizing a time domain modal coordinate;
In the time domain, the modal coordinates of each order are expanded parametrically using a one-dimensional Fourier series, yielding a set of time-domain parameters:
(the expressions are given as equation images in the original)
step 2, parameterizing a spatial domain modal shape;
In the spatial domain, the mode shapes of each order are expanded parametrically using a two-dimensional Fourier series, yielding a set of spatial-domain parameters:
(the expressions are given as equation images in the original)
step 3, constructing a time-space domain parameterized modal stack model;
The parameterized time-domain modal coordinates and the parameterized spatial-domain mode shapes are coupled in space and time to establish the time-space-domain parameterized modal superposition model:
(the model expression is given as an equation image in the original)
The resulting parameter matrix Ω comprises a plurality of sub-parameter matrices,
Ω = [Ω_1 … Ω_Q]
where Ω_i = z_i·ρ_i^T, i = 1, …, Q.
Step 4, performing time-space domain joint optimization on the parameterized modal stack model;
A dual-regularization-parameter target optimization criterion is constructed to optimize and solve for the parameter matrix Ω:
(the criterion and its closed-form solution are given as equation images in the original)
step 5, parameter matrix decomposition and modal reconstruction;
The sub-matrices of the parameter matrix obtained from the optimization solution are decomposed by singular value decomposition (SVD), giving the left singular vector μ_i and the right singular vector υ_i associated with the largest singular value:
(the decomposition expression is given as an equation image in the original)
The final mode shapes and modal coordinates are then obtained by reconstruction (the expressions are given as equation images in the original).
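Putting the five steps together, the following end-to-end sketch (reusing the illustrative helpers defined in the earlier sketches) shows one way the full pipeline could be driven. The input D is assumed to be an (h·w) × N_T full-field displacement matrix extracted from video; all orders, regularization weights, and function names are illustrative assumptions rather than the original implementation.

```python
import numpy as np

def parameterized_full_field_decomposition(D, h, w, fs, n_modes,
                                           order_L=3, order_N=6, order_M=6,
                                           lam1=1e-2, lam2=1e-2):
    """Illustrative driver for steps 1-5, built on the helper sketches above."""
    n_pixels, n_samples = D.shape
    assert n_pixels == h * w, "D must be shaped (h*w, N_T)"
    # Step 1: estimate damped frequencies from a high-motion reference pixel and
    # build the parameterized time-domain basis blocks Theta_i.
    ref = D[np.argmax(np.var(D, axis=1))]
    freqs = estimate_damped_frequencies(ref, fs, n_modes)
    theta_blocks = [time_basis_for_mode(fi, n_samples, fs, order_L) for fi in freqs]
    Theta = np.hstack(theta_blocks)
    # Step 2: parameterized spatial-domain basis.
    H = spatial_fourier_basis(h, w, order_N, order_M)
    # Steps 3-4: time-space parameterized model and doubly regularized joint solve.
    Omega_hat = solve_parameter_matrix(D, H, Theta, lam1, lam2)
    # Step 5: per-mode SVD and reconstruction of mode shapes / modal coordinates.
    return decompose_and_reconstruct(Omega_hat, H, theta_blocks)
```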
The reconstructed mode shapes of each order are shown in FIG. 2. The reconstructed mode shapes are very close to the true mode shapes, with very small errors, which shows that the method achieves very high modal-parameter identification accuracy.
To verify the superiority of the present invention, the mode shapes extracted by time domain decomposition (TDD) are shown in FIG. 3. It can be seen that the conventional TDD result suffers from more noise interference.
Further, the modal confidence of each order of mode reconstructed by the present invention and by the TDD method is given in Table 1.
Table 1: Modal confidence comparison between the invention and the TDD method
(The contents of Table 1 are given as an image in the original publication.)
Table 1 shows that the modal confidence of the present invention is higher than that of the conventional TDD method, indicating that the modal-parameter identification results of the invention are more accurate.
Finally, to verify the computational efficiency and capability of the invention, the computation times of the invention, frequency domain decomposition (FDD), and time domain decomposition (TDD) under different numbers of degrees of freedom are compared in Table 2. As the video image size increases, the number of degrees of freedom to be computed grows rapidly. The conventional FDD and TDD methods become difficult to compute, or require long computation times, already at hundreds to thousands of degrees of freedom, whereas the invention can still be computed at tens of thousands of degrees of freedom or more with a computation time of only a few seconds, which demonstrates the strong computational capability and high computational efficiency of the invention.
Table 2: Comparison of computation time and computational capability of different modal-parameter identification methods
(The contents of Table 2 are given as images in the original publication.)
In conclusion, the parameterized full-field visual vibration modal decomposition method avoids the high-dimensional degree-of-freedom computation difficulty caused by the high spatial resolution of vision-based measurement, solves the problems of accurate decomposition and parameter identification of high-dimensional, full-field visual vibration modes, suppresses the complex time-space-domain background noise present in visual vibration data, and improves computational capability and efficiency.
The foregoing detailed description of the preferred embodiments of the invention has been presented. It should be understood that numerous modifications and variations could be devised by those skilled in the art in light of the present teachings without departing from the inventive concepts. Therefore, the technical solutions available to those skilled in the art through logic analysis, reasoning and limited experiments based on the prior art according to the concept of the present invention should be within the scope of protection defined by the claims.

Claims (10)

1. A parameterized full-field visual vibration modal decomposition method is characterized by comprising the following steps:
step 1, parameterizing a time domain modal coordinate;
step 2, parameterizing a spatial domain modal shape;
step 3, constructing a time-space domain parameterized modal stack model;
step 4, performing time-space domain joint optimization on the parameterized modal stack model;
and 5, decomposing a parameter matrix and reconstructing a mode.
2. The method according to claim 1, wherein in step 1 the modal coordinate q_i(t) of the i-th order mode is:
q_i(t) = a_i(t)·cos(2π·f_i·t + ψ_i0), t = 1, 2, ..., N_T
where a_i(t) is the amplitude, f_i is the damped natural frequency, ψ_i0 is a constant phase, t is time, and N_T is the number of sampling points.
3. The method of claim 2, wherein step 1 further comprises: estimating the damped natural frequency from the signal power spectral density using a peak-picking algorithm, and rewriting the modal coordinate in terms of two new amplitudes to be estimated, b_i(t) and a second amplitude function, which are expanded into Fourier series (the explicit expressions are given as equation images in the original), wherein the expansion coefficients are the two sets of amplitude Fourier coefficients; L is the Fourier order; F_0 is the frequency resolution, calculated as F_0 = f_s/(2·N_T); and f_s is the signal sampling frequency.
4. The method according to claim 3, characterized in that the parameterized modal coordinate q_i(t) is discretized to obtain:
q_i = ρ_i^T·Θ_i^T
Θ_i = [A_i·B  C_i·B]
wherein ρ_i, A_i, B, and C_i are defined by equation images in the original, (·)^T denotes the transpose, and diag[·] denotes a diagonal matrix.
5. The method of claim 1, wherein step 2 further comprises: expanding the i-th order mode shape φ_i,θ(x, y) in the spatial domain using a two-dimensional Fourier series (the expression is given as an equation image in the original), wherein θ is the horizontal or vertical direction; N and M are the two-dimensional Fourier orders along the x-axis and y-axis; the expansion coefficients are the Fourier coefficients; F_w = 2π/G_w and F_h = 2π/G_h are the fundamental frequencies of the x-axis and y-axis, respectively; and h and w are the height and width of the video image, respectively.
6. The method of claim 5, wherein step 2 further comprises: discretizing the parameterized i-th order spatial mode shape to obtain:
φ_i = H·z_i
wherein the basis matrix H and the coefficient vector z_i are defined by equation images in the original.
7. The method of claim 1, wherein step 3 further comprises: establishing a time-space-domain modal superposition model from the parameterized time-domain modal coordinates q_i and the parameterized spatial-domain mode shapes φ_i (the model expression is given as an equation image in the original), wherein Q is the number of modes, and
Ω = [Ω_1 … Ω_Q]
Θ = [Θ_1 … Θ_Q].
8. The method of claim 1, wherein step 4 further comprises: adopting a dual-regularization-parameter target optimization criterion for the parameter matrix Ω (the criterion is given as an equation image in the original), wherein ||·||_F denotes the Frobenius norm of a matrix, the last three F-norm terms are regularization terms introduced to handle the ill-conditioned problem, and λ_1 and λ_2 are the two regularization parameters.
9. The method of claim 8, wherein the estimated parameter matrix comprises a plurality of sub-parameter matrices (as shown in the equation images in the original), and each sub-matrix is decomposed by the singular value decomposition, yielding its maximum singular value together with the corresponding left singular vector μ_i and right singular vector υ_i.
10. The method of claim 9, wherein the final mode shapes and modal coordinates are obtained by reconstruction (the expressions are given as equation images in the original), wherein the reconstructed quantities are the estimated mode shape and the estimated modal coordinates, and ||·||_∞ denotes the infinity norm of a vector.
CN202011164566.XA 2020-10-27 2020-10-27 Parameterized full-field visual vibration modal decomposition method Active CN112307932B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011164566.XA CN112307932B (en) 2020-10-27 2020-10-27 Parameterized full-field visual vibration modal decomposition method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011164566.XA CN112307932B (en) 2020-10-27 2020-10-27 Parameterized full-field visual vibration modal decomposition method

Publications (2)

Publication Number Publication Date
CN112307932A 2021-02-02
CN112307932B CN112307932B (en) 2023-02-17

Family

ID=74331154

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011164566.XA Active CN112307932B (en) 2020-10-27 2020-10-27 Parameterized full-field visual vibration modal decomposition method

Country Status (1)

Country Link
CN (1) CN112307932B (en)

Citations (6)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100183225A1 (en) * 2009-01-09 2010-07-22 Rochester Institute Of Technology Methods for adaptive and progressive gradient-based multi-resolution color image segmentation and systems thereof
CN104933435A (en) * 2015-06-25 2015-09-23 中国计量学院 Machine vision construction method based on human vision simulation
US20200051255A1 (en) * 2017-03-06 2020-02-13 The Regents Of The University Of California Joint estimation with space-time entropy regularization
CN107506333A (en) * 2017-08-11 2017-12-22 深圳市唯特视科技有限公司 A kind of visual token algorithm based on ego-motion estimation
CN108956614A (en) * 2018-05-08 2018-12-07 太原理工大学 A kind of pit rope dynamic method for detection fault detection and device based on machine vision
CN109801250A (en) * 2019-01-10 2019-05-24 云南大学 Infrared and visible light image fusion method based on ADC-SCM and low-rank matrix expression

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
WEI ZHOU et al.: "Empirical Fourier Decomposition", 《ARXIV.ORG》 *
YANYU LIU: "Robust spiking cortical model and total-variational decomposition for multimodal medical image fusion", 《BIOMEDICAL SIGNAL PROCESSING AND CONTROL》 *
赵爱东: "EMD-ICA-based removal of motion artifacts in steady-state visual evoked potentials" (in Chinese), 《电子测量技术》 (Electronic Measurement Technology) *

Also Published As

Publication number Publication date
CN112307932B (en) 2023-02-17


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant