CN1472673A - Data fusion method based on linearly constrained truncated least squares - Google Patents

Data fusion method based on linearly constrained truncated least squares

Info

Publication number
CN1472673A
CN1472673A CNA031290582A CN03129058A
Authority
CN
China
Prior art keywords
matrix
data
square
neural network
noise
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CNA031290582A
Other languages
Chinese (zh)
Other versions
CN1216338C (en)
Inventor
敬忠良 (Jing Zhongliang)
施海燕 (Shi Haiyan)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Jiaotong University
Original Assignee
Shanghai Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Jiaotong University filed Critical Shanghai Jiaotong University
Priority to CN 03129058 priority Critical patent/CN1216338C/en
Publication of CN1472673A publication Critical patent/CN1472673A/en
Application granted granted Critical
Publication of CN1216338C publication Critical patent/CN1216338C/en
Anticipated expiration legal-status Critical
Expired - Fee Related legal-status Critical Current

Landscapes

  • Radar Systems Or Details Thereof (AREA)
  • Testing Or Calibration Of Command Recording Devices (AREA)

Abstract

The method computes the mean-square value of each sensor's data and sets an adaptive threshold from it, checks whether any abnormal sensor data exist and which sensor data contain impulsive noise, and thereby obtains a detection matrix. An initial fusion objective function is then established based on truncated least squares (TLS) and transformed into a linearly constrained truncated least squares (LCTLS) optimization problem. The Lagrangian function of this problem is derived and, from the Kuhn-Tucker conditions, the system of equations characterizing the optimal solution is obtained. By constructing a globally convergent recurrent neural network, the solution of this system, i.e. the solution of the optimization problem, is obtained.

Description

Data fusion method based on linearly constrained truncated least squares
Technical field:
The present invention relates to a data fusion method based on linearly constrained truncated least squares (LCTLS). It is a signal-level (pixel-level) multi-sensor data fusion method in the field of information fusion, and is widely applicable in systems such as environment monitoring, fault diagnosis, and target tracking and recognition.
Background technology:
Multi-sensor fusion combines information from multiple sensors of the same or different kinds. It can therefore eliminate the uncertainty of information and the limitations in time and space that a single sensor brings, yielding information about the object that is more definite, of higher quality, and easier for people or computers to process. As the demands placed on information acquisition grow, multi-sensor fusion is being widely applied in many areas such as military confrontation, medical imaging, fault diagnosis, and air traffic control.
Data fusion is divided into signal-level (pixel-level) fusion, feature-level fusion, and decision-level fusion. The present invention concerns signal-level (pixel-level) fusion, for which the relatively mature algorithms are the Kalman filter, maximum-likelihood estimation, and their various improved variants. These algorithms all ultimately require the sensor noise covariance, which is often difficult to obtain in practical applications, and they have problems in real-time use.
The linearly constrained least squares (LCLS) method proposed by Y. Zhou and H. Leung (Y. Zhou and H. Leung, A linearly constrained least square approach for multisensor data fusion, Proc. SPIE's 11th Annual Symposium on AeroSense, Orlando, Florida, 1997, 118-129) requires no prior information and is easy to compute. Youshen Xia and H. Leung improved it with a neural network algorithm based on LCLS (Y. S. Xia and H. Leung, Neural data fusion algorithms based on a linearly constrained least square method, IEEE Trans. Neural Networks, 2002, 13(2): 320-329), which solves the problem that arises when the matrix is singular. When the noise is Gaussian, it performs well in convergence speed, computational load, hardware implementation, and convergence accuracy. However, because of signal transmission, environmental influences, or the working principle of the sensor itself, the signals obtained by sensors sometimes contain impulsive noise, such as the speckle noise and salt-and-pepper noise that often appear in images. Simulation shows that when this method handles such a situation and the number of abnormal sensors is very small, the fusion result and the convergence speed are not greatly affected; but once the number grows slightly larger, the convergence speed drops sharply, the number of iterations can sometimes exceed 1000, and the converged value differs considerably from the global minimum.
Summary of the invention:
The object of the invention is to address the deficiencies of the prior art by providing a data fusion method based on linearly constrained truncated least squares that retains the advantages of the existing linearly constrained least squares method while improving its robustness and obtaining a fast fusion solution suitable for real-time application.
To achieve this, the invention combines an adaptive sensor-data detection method with a data fusion method based on linearly constrained truncated least squares and a recurrent neural network. The mean-square value of each sensor's data is computed, an adaptive threshold is set, it is judged whether abnormal sensor data exist and which sensor data contain impulsive noise, and a detection matrix is obtained. A fusion objective function based on truncated least squares is then established and, through a series of transformations, converted into a linearly constrained truncated least squares optimization problem. The Lagrangian function of the problem is further derived; from the Kuhn-Tucker conditions the system of equations for the optimal solution is obtained; and a globally convergent recurrent neural network is constructed whose equilibrium yields the solution of this system, i.e. the solution of the optimization problem.
The method of the invention mainly comprises three basic steps: abnormal sensor data detection, establishing the fusion objective function, and recurrent neural network realization.
1. Abnormal sensor data detection
Initialize a unit diagonal (identity) matrix whose dimension equals the number of sensors. First compute the mean-square value of each sensor's data; whether abnormal sensor data exist is judged by comparing the variance of the normalized mean-square values with a threshold. The threshold is generally set adaptively, inversely proportional to the number of sensors.
If abnormal sensor data are judged to exist, each mean-square value is normalized onto a fixed interval. If a normalized value exceeds a set threshold, the corresponding sensor data are considered to contain impulsive noise and the corresponding element of the unit diagonal matrix is set to 0; otherwise the sensor data are considered normal and the unit diagonal matrix is left unchanged. This threshold is also set adaptively from the mean and standard deviation of the normalized mean-square values.
2. Establishing the fusion objective function
After the detection process finishes, the detection matrix P is obtained, and an initial objective function based on truncated least squares is established, namely minimizing the expected squared difference between the weighted fusion result of the normal sensor data and the original signal. A linear constraint w^T P a = 1 is introduced, where w = [w_1, w_2, ..., w_K]^T are the sensor weights, a = [a_1, a_2, ..., a_K]^T are the scale parameters of the sensors, and K is the number of sensors; the initial objective function then becomes a linearly constrained optimization problem involving the noise covariance matrix. Using the facts that the Gaussian noise has zero expectation and that the product of the detection matrix with the expectation of the abnormal sensor data containing impulsive noise is zero, this is developed into a constrained optimization problem involving the measurement-data covariance matrix, which is the final fusion objective function.
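For clarity, the final fusion objective function described in this step can be restated in the notation of the detailed description below (this is a restatement of equations (8) and (10) there, not an additional result):

$$
\min_{w}\; w^{T} P R P\, w \quad \text{s.t.}\quad a^{T} P w = 1,
\qquad R = \frac{1}{N}\sum_{t=1}^{N} x(t)\,x(t)^{T}.
$$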
3. Recurrent neural network realization
After the fusion objective function is obtained, its Lagrangian function is derived. The system of equations for the optimal weights is then obtained from the Kuhn-Tucker conditions. The coefficient matrix of the variables is extracted, both sides of the system are premultiplied by the transpose of the coefficient matrix, and the right-hand side of the system is subtracted. For the continuous neural network realization, the left-hand side of the resulting system of equations is set equal to the negative time derivative of the network's optimization variables. For the discrete neural network realization, the continuous network is discretized and all measurement-data covariance matrices are multiplied by a coefficient chosen so that the infinity norm of its product with the measurement covariance matrix is less than 1; the training step size is less than 1/K_a (with K_a = â^T â and â = Pa, as defined in the detailed description below).
The reason for using a recurrent neural network rather than direct inversion is that as the number of sensors grows, the condition number of the covariance matrix grows, and direct inversion degrades the quality of the solution. The neural network algorithm avoids the inversion, so the quality of the solution is preserved. If there are no missed detections, the network generally reaches a stable state within about 10 iterations. The value of w when the network stabilizes is the desired optimal weight vector.
The data fusion method of the invention has the following beneficial effects:
The detection of abnormal sensor data eliminates the drawback of the original linearly constrained least squares method that its objective function is inconsistent with the true objective function, so that the unbiasedness of the fusion result is hard to guarantee; at the same time it solves the problems of the linearly constrained least squares neural network algorithm that, when abnormal sensor data appear, the network converges slowly and the converged result differs considerably from the globally optimal solution. The false-alarm rate and the miss rate of the proposed adaptive abnormal-sensor detection algorithm are both very small, which guarantees the good performance of the algorithm. When there are no abnormal sensors, the algorithm reduces to the linearly constrained least squares method; that is, the linearly constrained least squares method is a special case of the present invention. The invention greatly improves the robustness of the algorithm: even when impulsive noise appears, the result remains unbiased and a high-quality solution is obtained quickly, saving time and improving quality for subsequent real-time processing. This is of significant practical value for real-time data fusion.
Description of drawings:
Fig. 1 is a schematic diagram of the data fusion method of the invention based on linearly constrained truncated least squares and a recurrent neural network.
As shown in the figure, the sensor data first undergo abnormal-sensor data detection; the resulting detection matrix P and the sensor data are then fed into the recurrent neural network, which yields the optimal weights. Finally the fusion result is obtained from the weights, the detection matrix, and the sensor data.
Fig. 2 is a structural diagram of the neural network state equations, where Fig. 2(a) is the continuous neural network state-equation structure and Fig. 2(b) is the discrete neural network state-equation structure.
Fig. 3 shows the fusion results of linearly constrained least squares and linearly constrained truncated least squares when salt-and-pepper noise appears in the images of Embodiment 2.
Fig. 3(a) is the original image without any noise, Fig. 3(b) is an image with Gaussian noise, Fig. 3(c) is an image with both Gaussian noise and salt-and-pepper noise, Fig. 3(d) is the result of fusion with the neural-network linearly constrained least squares (LCLS) method, and Fig. 3(e) is the result of fusion with the neural-network linearly constrained truncated least squares (LCTLS) method.
Embodiment:
In order to better understand the technical scheme of the invention, embodiments of the invention are further described below in conjunction with the drawings and examples.
The parameters of Embodiment 1 are defined as follows: the number of impulsive-noise samples occurring in a given sensor's data is n = [fN], where f ∈ [0, 1], N is the number of data points, and [·] denotes rounding the quantity in the brackets. Each impulsive-noise sample is q = p·randn(0,1), where p is called the amplitude of the impulsive noise and randn(0,1) denotes a random number from a normal distribution with mean 0 and variance 1. The number of sensors containing impulsive noise is denoted l.
Each sensor signal follows the measurement model x_i(t) = a_i s(t) + n_i(t), where s(0) is a Gaussian variable with mean 0 and variance 1, s(t+1) = 1.7 exp(−2s²(t)) − 1 is a nonlinear process, and n_i(t) and v(t) are Gaussian processes with mean 0. K = 5 and N = 60 are set, and the signal-to-noise ratio of the normal sensor data is 5 dB. The mean-square error is defined here as MSE = (1/N) Σ_{t=1}^{N} (w^T x(t) − s(t))².
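For illustration only, a minimal Python sketch of this simulation setup follows. The unit scale parameters, the way impulsive noise is injected (added to randomly chosen samples of the affected sensors), and the helper name generate_sensor_data are assumptions of the sketch rather than details taken from the patent.

```python
import numpy as np

def generate_sensor_data(K=5, N=60, l=1, f=0.1, p=10.0, snr_db=5.0, seed=0):
    """Generate Embodiment-1-style test data: K sensors, N samples, with the
    first l sensors corrupted by n = [fN] impulsive-noise samples of amplitude p."""
    rng = np.random.default_rng(seed)

    # original signal: s(0) ~ N(0, 1), s(t+1) = 1.7 exp(-2 s(t)^2) - 1
    s = np.empty(N)
    s[0] = rng.normal()
    for t in range(N - 1):
        s[t + 1] = 1.7 * np.exp(-2.0 * s[t] ** 2) - 1.0

    # measurement model x_i(t) = a_i s(t) + n_i(t); Gaussian noise scaled to ~5 dB SNR
    a = np.ones(K)                                   # scale parameters (assumed all 1 here)
    sigma = np.sqrt(np.mean(s ** 2) / 10.0 ** (snr_db / 10.0))
    x = a[:, None] * s[None, :] + rng.normal(0.0, sigma, (K, N))

    # impulsive noise: n = [fN] samples q = p * randn(0,1), added to the first l sensors
    n_imp = int(round(f * N))
    for i in range(l):
        idx = rng.choice(N, size=n_imp, replace=False)
        x[i, idx] += p * rng.normal(size=n_imp)
    return x, s, a

# The mean-square error defined above, for a weight vector w, would be:
# mse = np.mean((w @ x - s) ** 2)
```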
Fig. 1 is a schematic diagram of the data fusion method of the invention based on linearly constrained truncated least squares and a recurrent neural network. The concrete implementation details of each part are as follows:
1. Abnormal sensor data detection
This part mainly comprises the following steps:
Step 1: set up a K × K unit diagonal (identity) matrix P. Here P is a 5 × 5 unit diagonal matrix.
Step 2: compute the mean-square value of each sensor's data, m = [m_1, m_2, ..., m_K], where m_i = (1/N) Σ_{t=1}^{N} (x_i(t))².
Step 3: let m̂ = m / max(m_i) and v = var(m̂), where var(·) denotes the variance of the data in the brackets. If v < threshold 1, it is judged that there are no abnormal sensors, P remains unchanged, and the detection process ends; otherwise go to Step 4. Threshold 1 is set to 0.2/K.
Step 4: normalize m onto [0.1, 0.9]; the result is denoted m̃.
Step 5: compute the mean and standard deviation of m̃, denoted mean and stdev respectively. Because the mean-square value of sensor data containing impulsive noise is much larger than all the others, the detection of abnormal sensors need only consider the right margin, so threshold 2 is defined as mean − n × stdev, with n set to 1/K here. If m̃_i is greater than threshold 2, then P_{i,i} = 0; otherwise P_{i,i} remains unchanged.
Thresholds 1 and 2 are both typically set inversely proportional to K. After the detection process finishes, the detection matrix P is obtained. In Embodiment 1, all abnormal sensor data were detected correctly; a minimal sketch of the detection procedure is given below.
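As an illustration only, the following Python sketch implements the five detection steps above. The array layout (x of shape K × N), the helper name detect_abnormal_sensors, and the exact rescaling used for Step 4 are assumptions of the sketch; the threshold constants 0.2/K and 1/K are those given in the description.

```python
import numpy as np

def detect_abnormal_sensors(x):
    """Adaptive abnormal-sensor detection (Steps 1-5 above).
    x has shape (K, N): K sensors, N samples. Returns the detection matrix P."""
    K, N = x.shape
    P = np.eye(K)                               # Step 1: K x K unit diagonal matrix

    m = np.mean(x ** 2, axis=1)                 # Step 2: mean-square value of each sensor

    m_hat = m / m.max()                         # Step 3: normalize by the maximum
    if np.var(m_hat) < 0.2 / K:                 # threshold 1 = 0.2/K
        return P                                # no abnormal sensors; P unchanged

    # Step 4: rescale the mean-square values onto [0.1, 0.9]
    m_tilde = 0.1 + 0.8 * (m - m.min()) / (m.max() - m.min())

    # Step 5: threshold 2 = mean - n * stdev with n = 1/K; sensors whose rescaled
    # mean-square value exceeds it are flagged as containing impulsive noise
    thr2 = m_tilde.mean() - (1.0 / K) * m_tilde.std()
    flagged = m_tilde > thr2
    P[flagged, flagged] = 0.0                   # zero the corresponding diagonal entries
    return P
```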
2. Establishing the fusion objective function
The basic idea of truncated least squares is to remove abnormal observation data and then apply least-squares estimation to the normal data. The initial fusion objective function is therefore set up so that the expected squared difference between the weighted fusion result of the normal sensor data and the original signal is minimized. The sensor measurement model is
x(t)=as(t)+n(t) (1)
where a = [a_1, ..., a_K]^T, x(t) = [x_1(t), ..., x_K(t)]^T, and n(t) = [n_1(t), ..., n_K(t)]^T. The initial fusion objective function is:
f_1(w) = E[w^T P x(t) − s(t)]²    (2)

Expanding:

f_1(w) = E[w^T P (a s(t) + n(t)) − s(t)]² = E[(w^T P a − 1) s(t) + w^T P n(t)]²    (3)

Because the original signal is unknown, we impose w^T P a = 1. The objective function then becomes:

f_1(w) = w^T P E[n(t) n(t)^T] P w    (4)

Since the covariance of the noise is generally unknown, the following transformation is carried out:
x(t) x(t)^T = s²(t) a a^T + n(t) n(t)^T + s(t) a n(t)^T + s(t) n(t) a^T    (5)
When impulsive noise appears in some of the sensors, n(t) = n_Gauss(t) + n_Impulse(t), where n_Gauss(t) denotes the Gaussian component of the noise and n_Impulse(t) denotes the impulsive component. Because
E[n(t)] = E[n_Gauss(t)] + E[n_Impulse(t)] = E[n_Impulse(t)]    (6)

it follows that

w^T P E[x(t) x(t)^T] P w = E[s(t)]² + w^T P E[n(t) n(t)^T] P w + w^T P E[n_Impulse(t)] E[s(t)] + E[n_Impulse(t)]^T E[s(t)] P w    (7)

However, accurately calculating the covariance of x(t) is difficult, so the sample average is used as an estimate here:

R = (1/N) Σ_{t=1}^{N} x(t) x(t)^T    (8)

which gives:

f_1(w) = w^T P R P w − E[s(t)]² − w^T P E[n_Impulse(t)] E[s(t)] − E[n_Impulse(t)]^T E[s(t)] P w    (9)
Because the diagonal entries of P corresponding to sensor data containing impulsive noise are 0, P E[n_Impulse(t)] = 0. LCTLS is therefore expressed by the following linearly constrained minimization problem:
min w^T P R P w
s.t. a^T P w = 1    (10)

The fusion result is z(t) = w^T P x(t). It can be seen that even in the presence of impulsive noise, LCTLS still satisfies the unbiasedness property:
E[z(t)] = E[w^T P x(t)] = E[w^T P (a s(t) + n(t))] = E[s(t)] + E[w^T P n_Impulse(t)] = E[s(t)]    (11)

It can also be seen that when the detection algorithm finds no abnormal sensor data, P is still the identity matrix, and the LCTLS method then reduces to the LCLS method; that is, the LCLS method is a special case of the LCTLS method.
3. Recurrent neural network realization
The Lagrangian function of (10) is

L(w, y) = w^T P R P w + y (a^T P w − 1)    (12)

According to the Kuhn-Tucker conditions, obtaining the optimal solution requires solving the following system of equations:

2 P R P w + P a y = 0,  a^T P w = 1    (13)

Letting R̂ = P R P and â = P a, (13) can be expressed as

[[2R̂, â], [−â^T, 0]] [w; y] = [0; −1]    (14)
If the direct inversion method is used, then w = R̂^{-1} â / (â^T R̂^{-1} â). However, when P is not the unit diagonal matrix, R̂ is necessarily singular, and the inverse must be replaced by its generalized inverse matrix.
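For illustration, a minimal sketch of this direct (generalized-inverse) solution follows, using NumPy's pseudo-inverse; the function name lctls_weights_direct is a hypothetical label chosen for the sketch.

```python
import numpy as np

def lctls_weights_direct(x, a, P):
    """Direct solution of eq. (10): min w^T P R P w  s.t.  a^T P w = 1,
    with R the sample covariance of eq. (8). Uses the pseudo-inverse since
    R-hat = P R P is singular whenever P has zero diagonal entries."""
    K, N = x.shape
    R = (x @ x.T) / N                  # eq. (8)
    R_hat = P @ R @ P
    a_hat = P @ a
    R_inv = np.linalg.pinv(R_hat)      # generalized inverse of R-hat
    return R_inv @ a_hat / (a_hat @ R_inv @ a_hat)

# Fusion result for every sample, z(t) = w^T P x(t):
# z = (P @ w) @ x        # assuming x has shape (K, N) and P is symmetric
```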
If the neural network method is used, both sides of (14) are premultiplied by [[2R̂, â], [−â^T, 0]]^T, so that (14) becomes

[[2R̂, â], [−â^T, 0]]^T [[2R̂, â], [−â^T, 0]] [w; y] = [â; 0]

that is,

[[4R̂^T R̂ + â â^T, 2R̂ â], [2â^T R̂, â^T â]] [w; y] − [â; 0] = [0; 0]    (15)
The continuous recurrent neural network fusion algorithm can then be obtained as follows:
State equation:

d/dt [w; y] = −[W_1 w + W_2 y − â; W_3 w + K_a y]

Output equation:

z(t) = w^T P x(t)    (16)

where y ∈ R, w ∈ R^K, K_a = â^T â, and, from (15), the remaining parameters are W_1 = 4R̂^T R̂ + â â^T, W_2 = 2R̂ â, and W_3 = 2â^T R̂.
Fig. 2(a) shows the state-equation structure of the continuous neural network.
The discrete recurrent neural network algorithm is as follows:
State equation:

[w(k+1); y(k+1)] = [w(k); y(k)] − h [W_1 w(k) + W_2 y(k) − â; W_3 w(k) + K_a y(k)]

Output equation:

z(t) = (w(k+1))^T P x(t)    (18)

where h > 0 is a fixed step size with h < 1/K_a, α is a scale parameter satisfying ||α R̂||_∞ ≤ 1, and the remaining parameters are defined as in the continuous network with R̂ replaced by α R̂.
Fig. 2(b) shows the state-equation structure of the discrete neural network. It can be proved that both the discrete and the continuous neural network yield the solution of the system of equations (13) when they become stable. When there are no abnormal sensor data, the network converges exponentially to the global minimum; when abnormal sensor data have been detected, the network still converges to the global minimum, convergence is fast, and a stable state is generally reached within about 10 iterations. A minimal sketch of the discrete iteration is given below.
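As an illustration under the assumptions stated above (W_1, W_2, W_3 and K_a taken from (15) with R̂ scaled by α), a minimal Python sketch of the discrete iteration follows. The function name lctls_weights_nn, the zero initialization, the stopping tolerance, and the conservative eigenvalue-based step size are choices made for the sketch; the patent itself states a fixed step h < 1/K_a (h = 0.09 for K = 5).

```python
import numpy as np

def lctls_weights_nn(x, a, P, max_iter=1000, tol=1e-8):
    """Discrete recurrent-network iteration for LCTLS (eqs. (14), (15), (18))."""
    K, N = x.shape
    R = (x @ x.T) / N                        # eq. (8): sample covariance
    R_hat = P @ R @ P                        # R-hat = P R P
    a_hat = P @ a                            # a-hat = P a

    # scale the covariance so that ||alpha * R_hat||_inf <= 1
    alpha = 1.0 / max(np.linalg.norm(R_hat, np.inf), 1.0)

    # eq. (14): A [w; y] = [0; -1] with A = [[2 alpha R-hat, a-hat], [-a-hat^T, 0]]
    A = np.zeros((K + 1, K + 1))
    A[:K, :K] = 2.0 * alpha * R_hat
    A[:K, K] = a_hat
    A[K, :K] = -a_hat
    b = np.zeros(K + 1)
    b[K] = -1.0

    M = A.T @ A                              # coefficient matrix of eq. (15)
    c = A.T @ b                              # right-hand side of eq. (15): [a-hat; 0]

    # conservative step size (the patent's bound is h < 1/Ka with Ka = a-hat^T a-hat)
    h = 1.0 / np.linalg.eigvalsh(M).max()

    z = np.zeros(K + 1)                      # network state [w; y]
    for _ in range(max_iter):
        z_new = z - h * (M @ z - c)          # eq. (18)
        if np.linalg.norm(z_new - z) < tol:
            z = z_new
            break
        z = z_new
    return z[:K]                             # optimal weights w

# With the detection matrix P from the earlier sketch, the fusion result per sample
# would be z(t) = w^T P x(t), i.e.  fused = (P @ w) @ x.
```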
Embodiment 1 adopts the discrete neural network structure. The step size h is 0.09 when K = 5. Table 1 compares the performance of LCLS and LCTLS under various conditions; each figure is the mean of 100 Monte Carlo random simulations.
Table 1  Performance comparison of LCLS and LCTLS when K = 5

                                      l = 1                            l = 2
                              f = 0.1        f = 0.3          f = 0.1        f = 0.3
                             p=10   p=20    p=10   p=20      p=10   p=20    p=10   p=20
Fusion MSE          LCTLS   0.0165 0.0166  0.0166 0.0165    0.0208 0.0198  0.0209 0.0201
                    LCLS    0.0177 0.0188  0.0182 0.0181    0.0288 0.0326  0.0292 0.0280
Network iterations  LCTLS   10     10      10     10        10     10      10     —
                    LCLS    34     66      54     76        426    444     308    —
Embodiment 2 is an image fusion example. The Rice image is a 256 × 256 8-bit grayscale image. K = 5, and Gaussian noise with variance 0.02 is added to each copy of the original image. To 2 of the sensors, salt-and-pepper noise with intensity 0.5 is additionally added. These images are fused with the LCTLS and LCLS methods respectively, both realized with the neural network; the steps are identical to Embodiment 1.
Fig. 3(a) is the original image without any noise. Fig. 3(b) is an image with Gaussian noise. Fig. 3(c) is an image with both Gaussian noise and salt-and-pepper noise. Fig. 3(d) is the fusion result of the LCLS method. Fig. 3(e) is the fusion result of the neural-network fusion method based on LCTLS.
MSE_LCTLS = 0.0063, reaching a stable state in just 7 iterations; MSE_LCLS = 0.0078, requiring 964 iterations to reach a stable state.
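A minimal sketch of this image-fusion setup follows, reusing the detection and weight routines sketched earlier (detect_abnormal_sensors, lctls_weights_nn) and treating every pixel position as one sample t. The random stand-in image, the noise helpers, and the seed are illustrative; this only shows the data flow of Embodiment 2, not a reproduction of the reported MSE figures.

```python
import numpy as np

def add_salt_pepper(img, density, rng):
    """Replace a fraction `density` of pixels with 0 or 1 (salt-and-pepper noise)."""
    noisy = img.copy()
    mask = rng.random(img.shape) < density
    noisy[mask] = rng.integers(0, 2, size=int(mask.sum())).astype(float)
    return noisy

rng = np.random.default_rng(0)
K = 5
img = rng.random((256, 256))          # stand-in for the 256 x 256 Rice image, scaled to [0, 1]

# K observations: Gaussian noise (variance 0.02) on all, salt-and-pepper (density 0.5) on 2 sensors
obs = [img + rng.normal(0.0, np.sqrt(0.02), img.shape) for _ in range(K)]
for i in (0, 1):
    obs[i] = add_salt_pepper(obs[i], 0.5, rng)

x = np.stack([o.ravel() for o in obs])   # shape (K, N), one column per pixel position
a = np.ones(K)                           # all sensors observe the same scene

P = detect_abnormal_sensors(x)           # detection matrix, from the earlier sketch
w = lctls_weights_nn(x, a, P)            # LCTLS weights, from the earlier sketch
fused = ((P @ w) @ x).reshape(img.shape) # z(t) = w^T P x(t) for every pixel
```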

Claims (1)

1. A data fusion method based on linearly constrained truncated least squares, characterized in that it comprises the following concrete steps:
1) Initialize a unit diagonal matrix whose dimension equals the number of sensors; first compute the mean-square value of each sensor's data, and judge whether abnormal sensor data exist by comparing the variance of the normalized mean-square values with a threshold, the threshold being set adaptively so as to be inversely proportional to the number of sensors; if abnormal sensor data exist, normalize each mean-square value onto a fixed interval, and if a normalized value exceeds a set threshold, set the corresponding element of the unit diagonal matrix to 0, otherwise leave the unit diagonal matrix unchanged, this threshold also being set adaptively from the mean and standard deviation of the normalized mean-square values;
2) After the detection process finishes, obtain the detection matrix P and establish an initial objective function based on truncated least squares, namely minimizing the expected squared difference between the weighted fusion result of the normal sensor data and the original signal; introduce the linear constraint w^T P a = 1, where w = [w_1, w_2, ..., w_K]^T are the sensor weights, a = [a_1, a_2, ..., a_K]^T are the scale parameters of the sensors, and K is the number of sensors; the initial objective function becomes a linearly constrained optimization problem involving the noise covariance matrix; using the facts that the Gaussian noise has zero expectation and that the product of the detection matrix with the expectation of the abnormal sensor data containing impulsive noise is zero, develop this into a constrained optimization problem involving the measurement-data covariance matrix, which is the final fusion objective function;
3) After the fusion objective function is obtained, derive its corresponding Lagrangian function; then, according to the Kuhn-Tucker conditions, obtain the system of equations for the optimal solution; extract the coefficient matrix of the variables, premultiply both sides of the system by the transpose of the coefficient matrix, and subtract the right-hand side of the system; for the continuous neural network realization, the left-hand side of the resulting system of equations equals the negative time derivative of the network's optimization variables; for the discrete neural network realization, discretize the continuous network and multiply all measurement-data covariance matrices by a coefficient chosen so that the infinity norm of its product with the measurement covariance matrix is less than 1, with the training step size less than 1/K_a, where K_a = â^T â and â = Pa; the value of w when the network stabilizes is the desired optimal weight vector.
CN 03129058 2003-06-05 2003-06-05 Data fusion method based on linearly constrained truncated least squares Expired - Fee Related CN1216338C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 03129058 CN1216338C (en) 2003-06-05 2003-06-05 Data fusion method based on linearly constrained truncated least squares

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN 03129058 CN1216338C (en) 2003-06-05 2003-06-05 Data fusion method based on linearly constrained truncated least squares

Publications (2)

Publication Number Publication Date
CN1472673A true CN1472673A (en) 2004-02-04
CN1216338C CN1216338C (en) 2005-08-24

Family

ID=34153426

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 03129058 Expired - Fee Related CN1216338C (en) 2003-06-05 2003-06-05 Data fusion method based on linearly constrained truncated least squares

Country Status (1)

Country Link
CN (1) CN1216338C (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101815317B (en) * 2009-02-23 2013-04-03 中国科学院计算技术研究所 Method and system for measuring sensor nodes and sensor network
CN101872433A (en) * 2010-05-21 2010-10-27 杭州电子科技大学 Beer flavor prediction method based on neural network technique
CN103606530A (en) * 2013-10-25 2014-02-26 清华大学 Method for fault detection in plasma etching process of fusion function data description
CN103606530B (en) * 2013-10-25 2016-01-06 清华大学 The fault detection method of the plasma etching process that fusion function type data describe
CN103743435A (en) * 2013-12-23 2014-04-23 广西科技大学 Multi-sensor data fusion method
CN104866462A (en) * 2015-05-08 2015-08-26 同济大学 Method for increasing spatial data accuracy based on total least squares with constraints
CN104866462B (en) * 2015-05-08 2017-12-26 同济大学 A kind of topological relation correcting method of Map Generalization adjacent space key element
CN114378812A (en) * 2021-12-13 2022-04-22 扬州大学 Parallel mechanical arm prediction control method based on discrete recurrent neural network model
CN114378812B (en) * 2021-12-13 2023-09-05 扬州大学 Parallel mechanical arm prediction control method based on discrete recurrent neural network model

Also Published As

Publication number Publication date
CN1216338C (en) 2005-08-24

Similar Documents

Publication Publication Date Title
CN103533214B (en) Video real-time denoising method based on kalman filtering and bilateral filtering
CN110599413B (en) Laser facula image denoising method based on deep learning convolutional neural network
CN112837303A (en) Defect detection method, device, equipment and medium for mold monitoring
CN1472954A (en) Circuit and method for improving image quality by fram correlation
Liu et al. A multi-metric fusion approach to visual quality assessment
CN1656824A (en) A method and system for estimating sharpness metrics based on local edge statistical distribution
CN1416651A (en) Reducable and enlargable objective measurement of evaluating automatic video quality
KR102157578B1 (en) Method for measuring significant wave height using artificial neural network in radar type wave gauge system
CN102891966A (en) Focusing method and device for digital imaging device
US11461875B2 (en) Displacement measurement device and displacement measurement method
CN103994062A (en) Hydraulic-pump fault feature signal extraction method
CN110400274A (en) A kind of vehicle mounted infrared pedestrian detection infrared image enhancing method
CN1472673A (en) Data merging method based linear constrainted cut minimum binary multiply
CN102509311B (en) Motion detection method and device
CN104880703B (en) A kind of Reverberation Rejection technology for side-scan sonar target detection
CN112183469B (en) Method for identifying congestion degree of public transportation and self-adaptive adjustment
CN105160679A (en) Local three-dimensional matching algorithm based on combination of adaptive weighting and image segmentation
CN112637104B (en) Abnormal flow detection method and system
CN1545812A (en) Device and process for estimating noise level, noise reduction system and coding system comprising such a device
CN109447952B (en) Semi-reference image quality evaluation method based on Gabor differential box weighting dimension
CN101047780A (en) Recursive 3D super precision method for smoothly changing area
CN115761672A (en) Detection method, detection system and detection device for dirt on vehicle camera
CN106548459B (en) Turbid water quality imaging target detection system and method based on logic stochastic resonance
Seghir et al. Edge-region information with distorted and displaced pixels measure for image quality evaluation
CN114638809A (en) Multi-scale micro-defect detection method based on PA-MLFPN workpiece surface

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C19 Lapse of patent right due to non-payment of the annual fee
CF01 Termination of patent right due to non-payment of annual fee