CN107895145A - Method for estimating finger force based on a convolutional neural network combined with super-Gaussian denoising - Google Patents

Method for estimating finger force based on a convolutional neural network combined with super-Gaussian denoising (Download PDF)

Info

Publication number
CN107895145A
Authority
CN
China
Prior art keywords
finger
gaussian
convolutional neural
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201711044262.8A
Other languages
Chinese (zh)
Inventor
张小瑞
吴韵清
孙伟
宋爱国
牛建伟
蔡青
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Information Science and Technology
Original Assignee
Nanjing University of Information Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Information Science and Technology filed Critical Nanjing University of Information Science and Technology
Priority to CN201711044262.8A priority Critical patent/CN107895145A/en
Publication of CN107895145A publication Critical patent/CN107895145A/en
Pending legal-status Critical Current

Links

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 - Movements or behaviour, e.g. gesture recognition
    • G06V40/28 - Recognition of hand or arm movements, e.g. recognition of deaf sign language
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/20 - Image preprocessing
    • G06V10/30 - Noise filtering
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G06V10/56 - Extraction of image or video features relating to colour

Abstract

The invention discloses a method for estimating finger force based on a convolutional neural network combined with super-Gaussian denoising, characterized by the following steps. Step 1: capture a two-dimensional image of the finger with a fixed imaging source camera. Step 2: denoise the captured image with a super-Gaussian method. Step 3: predict the transformation matrix from the denoised two-dimensional image to a three-dimensional finger model with a convolutional neural network, and build the three-dimensional finger model. Step 4: predict the finger force with a Gaussian process, based on the deformation and color change of the finger and its surrounding skin. The method acquires the image of the finger with a single static camera placed at a distance, so the finger force can be estimated without any physical interference with the hand, and the force estimate is comprehensive and accurate.

Description

Method for estimating finger force based on a convolutional neural network combined with super-Gaussian denoising
Technical field
The present invention relates to the fields of computer vision and deep learning, and more particularly to a method for estimating finger force based on a convolutional neural network combined with super-Gaussian denoising.
Background art
To fully improve the stability of a robot grip and to enrich its manipulation capability, the human hand is often taken as the example, and to estimate the gripping force of human fingers a machine must understand the forces people apply during grasping and manipulation. Analysis of human grasping behavior can be used to develop the manipulation skill and strength of resistance-based robotic hands during grasping. Current research focuses mostly on finger position; these studies provide key information for determining the position of the fingers during grasping, but few of them address the estimation of finger force. Among the small number of existing methods, most use piezoresistive sensors that can only roughly estimate surface force, so the force can only be predicted at a predefined location and the resulting data are very limited; others use unnatural instrumented gloves, which can acquire relatively comprehensive mechanical information, but the results are still often limited.
Summary of the invention
In order to solve the problems existing in the prior art, the present invention provides a method for estimating finger force based on a convolutional neural network combined with super-Gaussian denoising, so as to estimate the finger force comprehensively and accurately.
In order to achieve the above object, the technical scheme proposed by the present invention is a method for estimating finger force based on a convolutional neural network combined with super-Gaussian denoising, comprising the following steps:
Step 1: capture a two-dimensional image of the finger with a fixed imaging source camera;
Step 2: denoise the image captured in step 1 with a super-Gaussian method;
Step 3: build a three-dimensional finger model from the processed two-dimensional image, where the transformation matrix from the two-dimensional image to the three-dimensional finger model is predicted with a convolutional neural network;
Step 4: predict the finger force with a Gaussian process, based on the deformation and color change of the finger and its surrounding skin.
The above technical scheme is further refined as follows: in step 1, the camera captures images with a resolution of 1024*768 pixels at a speed of 15 frames per second.
In step 3, the finger position in the two-dimensional image is represented by the coordinate array a0, b0; in the three-dimensional view the finger position is represented by the coordinate array x, y, z and the orientation by the angles α, β, γ, which denote the deviation angle, the elevation angle and the rotation angle respectively. In the front view of the three-dimensional model, x, y, z, α, β, γ are all initialized to 0. The real coordinate C1 of a point C0 on the finger surface is given by the formula C1 = M·C0, where M is the transformation matrix:

M = [H q; 0 1]

where H is the angle parameter, H = Rotz(α)·Roty(β)·Rotx(γ), with Rotz(α), Roty(β) and Rotx(γ) the rotation matrices for a rotation by angle α about the z-axis, by angle β about the y-axis and by angle γ about the x-axis; q is the position parameter, q = (x y z)T, where x, y, z are the position coordinates of the finger in the three-dimensional view.
In step 3, after the three-dimensional view is obtained, the image is aligned by texture mapping.
In step 4, when the finger force is estimated with the Gaussian process, the image obtained after alignment is reshaped into an n-dimensional vector (d1, d2, d3, ..., dn) and used as the input of the Gaussian-process force estimation, with G = (G1, G2, G3, ..., Gn)T the corresponding targets. The distribution of the Gaussian-process force estimate f(d*) for a test input d* associated with the inputs d is therefore as follows:

E[f(d*)] = k*T·(Kij + λn²·I)^(-1)·G
Var[f(d*)] = k(d*, d*) - k*T·(Kij + λn²·I)^(-1)·k*

where E[f(d*)] is the mathematical expectation of the Gaussian-process force estimate f(d*); Kij is the covariance of di and dj, i.e. Kij = k(di, dj); λn is the matrix formed by the n noise-variance hyperparameters; I is the identity matrix; k denotes the covariance operation and k*T is the transpose of the covariance between the test input and the training inputs; G = (G1, G2, G3, ..., Gn)T is the matrix formed by the forces to be estimated; Var[f(d*)] is the variance of the Gaussian-process force estimate f(d*); and k(d*, d*) is the covariance of the test input d* with itself. The optimal values of the noise-variance hyperparameters are computed as follows:

log p(G|d) = -(1/2)·GT·(Kij + λn²·I)^(-1)·G - (1/2)·log|Kij + λn²·I| - (n/2)·log 2π

where p(G|d) is the probability of G given d (in principle the base of the logarithm can be arbitrary; base 2 is taken for ease of computation), and λn² is the square of the matrix formed by the n noise-variance hyperparameters.
The beneficial effects of the present invention are:
The method reconstructs the grip and the forces at the fingers from pictures of the finger. It uses no equipment on the finger; instead, a single static camera at a certain distance acquires the image of the finger, so the finger force can be estimated without any physical interference with the hand, giving a comprehensive and accurate estimate of the force.
Embodiment
The present invention is described in detail below with reference to a specific embodiment.
The deep-learning technique used in this embodiment can solve the problem of converting a two-dimensional image into a three-dimensional model. Among deep-learning techniques, the deep convolutional neural network is the one best suited to processing visual information; it is a supervised learning method. This embodiment also designs a denoising stage for the image, and finally a relatively mature Gaussian process is used to estimate the finger force from the aligned images.
The specific steps of the method for estimating finger force based on a convolutional neural network combined with super-Gaussian denoising of this embodiment are:
Step 1: Preliminary image capture
A fixed imaging source camera captures video data, shooting images with a resolution of 1024*768 pixels at a speed of 15 frames per second.
Because the post-processing includes torsion and orientation correction, the present invention places no requirement on the shooting distance or angle in this step.
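As an illustration, a minimal capture sketch in Python with OpenCV, assuming the camera is exposed through cv2.VideoCapture (the device index 0, the fixed frame count and the support of these capture properties are assumptions, not part of the patent):

    import cv2

    cap = cv2.VideoCapture(0)                     # hypothetical camera index
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1024)       # 1024*768 resolution, as in step 1
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 768)
    cap.set(cv2.CAP_PROP_FPS, 15)                 # 15 frames per second

    frames = []
    for _ in range(150):                          # e.g. 10 seconds of video
        ok, frame = cap.read()
        if not ok:
            break
        frames.append(frame)
    cap.release()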
Step 2: The image captured in step 1 is denoised with the super-Gaussian method
This embodiment designs a DnCNN (denoising convolutional neural network) for image denoising; using a feed-forward convolutional neural network in which residual learning is combined with batch normalization, the noise is separated from the noisy image.
Training a deep convolutional network model for a particular task mainly involves two steps: designing the network architecture and learning from the training data.
(1) Network architecture design
The method modifies the VGG network so that it is suited to image denoising. The network depth is determined first; here it is set according to the effective patch size used in state-of-the-art denoising methods. The convolution filters are designed to be 3 × 3 and all pooling layers are removed, so the receptive field of a DnCNN of depth D is (2D+1) × (2D+1) (a convolutional network exploits the local spatial characteristics of an image by forcing local connection patterns between adjacent layers; these local regions are called spatially contiguous receptive fields). Increasing the receptive field size allows peripheral information in a larger image region to be exploited. With the noise level fixed at η = 25, the denoising performance of several leading denoising methods is analyzed, and a depth of D = 17 is selected, corresponding to a DnCNN with a 35 × 35 receptive field. For other general image-denoising tasks a larger receptive field can be used, setting the depth D to 20. The input U of the DnCNN is a noisy observation, expressed as U = U0 + V, where U0 is the clean, noise-free image and V is additive white Gaussian noise with a given standard deviation. The hidden layers can separate the image structure from the noisy observation; this mechanism is similar to iterative denoising strategies, but here the model is trained in an end-to-end manner.
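A minimal PyTorch sketch of such a DnCNN (the 64 feature channels, the padding of 1 that keeps the spatial size, and the single-channel input are illustrative assumptions; the patent fixes only the 3 × 3 filters, the absence of pooling layers and the depth D):

    import torch
    import torch.nn as nn

    class DnCNN(nn.Module):
        # feed-forward denoiser that predicts the residual (noise) R(U) ~ V
        def __init__(self, depth=17, channels=64, image_channels=1):
            super().__init__()
            layers = [nn.Conv2d(image_channels, channels, 3, padding=1), nn.ReLU(inplace=True)]
            for _ in range(depth - 2):            # middle layers: Conv + BatchNorm + ReLU
                layers += [nn.Conv2d(channels, channels, 3, padding=1),
                           nn.BatchNorm2d(channels),
                           nn.ReLU(inplace=True)]
            layers.append(nn.Conv2d(channels, image_channels, 3, padding=1))  # output = residual
            self.net = nn.Sequential(*layers)

        def forward(self, u):
            return self.net(u)                    # R(U); the clean image is U0 = U - R(U)

    model = DnCNN(depth=17)
    noisy = torch.randn(1, 1, 64, 64)             # a dummy noisy patch
    denoised = noisy - model(noisy)               # residual learning: U0 = U - R(U)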
(2) Learning from the training data
For model learning, a residual-learning method is used and batch normalization is included, in order to speed up training and improve the denoising performance. The residual mapping R(U) ≈ V is trained with the residual-learning formula, which gives U0 = U - R(U); the residual-learning formula is:
Jt=g (W1f(W2lt+W3jt-1)) (4)
where Jt is the output at time t; W1, W2, W3 are the three weight matrices connecting, respectively, the hidden-layer units to the output-layer units, the input units to the hidden-layer units, and the hidden-layer units to the hidden-layer units; lt is the input at time t and jt-1 is the output of the hidden layer at time t-1;
The averaged mean squared error between the desired residual image and the estimate from the noisy input can be used as the loss function for training the parameters Θ of the DnCNN; the averaged mean squared error between the estimates from the noisy inputs and the ideal residuals is computed as:

ℓ(Θ) = (1/2Q)·Σ_{i=1..Q} ||R(Ui; Θ) - (Ui - U0i)||²

where Q is the number of noisy-clean training image pairs, Ui is the i-th noisy image and U0i is the i-th clean image; the mean squared error is taken over the Q noisy-clean training pairs; the ||·|| symbol denotes a mapping from a linear normed space to the non-negative real numbers, the norm representing the distance between a point in the space and the origin.
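A sketch of this loss on a batch of noisy-clean pairs, assuming the DnCNN sketch above (the 1/(2Q) normalization follows the loss as reconstructed here):

    import torch

    def dncnn_loss(residual_pred, noisy, clean):
        # mean squared error between the predicted residual R(U) and the true residual U - U0
        true_residual = noisy - clean
        q = noisy.shape[0]                        # number of training pairs in the batch
        return ((residual_pred - true_residual) ** 2).sum() / (2 * q)

    # usage with the DnCNN sketch above:
    # loss = dncnn_loss(model(noisy_batch), noisy_batch, clean_batch)
    # loss.backward()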
Step 3: Three-dimensional modeling and image alignment using the convolutional neural network
(1) Three-dimensional modeling (early-stage training of the convolutional neural network)
Because the position and orientation of the finger change over time, a CNN is used to predict and correct the finger position and orientation before the force and torque are estimated, in order to reduce the influence of these changes. To train the CNN it is therefore necessary, before the actual experiment, to select 12-15 finger images from the images of step 1 and then to build a 3D model with the Agisoft PhotoScan 0.9.1 software, which is used to train the convolutional neural network.
(2) Finger detection and tracking
Because the video stream contains not only the finger but also the background, the finger needs to be extracted from the image. The present invention segments the finger with thresholds in the YCbCr color space of the nonlinear RGB signal, which is commonly used for skin detection, and then tracks the finger in the video with a mean-shift algorithm.
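A minimal sketch of this segmentation and tracking step with OpenCV (the particular Cb/Cr thresholds, the initial search window and the video file name are illustrative assumptions; the patent specifies only YCbCr thresholding and mean-shift tracking):

    import cv2
    import numpy as np

    def skin_mask(frame_bgr):
        # threshold in YCbCr (OpenCV's YCrCb ordering) to segment skin/finger pixels
        ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
        lower = np.array([0, 133, 77], dtype=np.uint8)    # commonly used skin range (assumption)
        upper = np.array([255, 173, 127], dtype=np.uint8)
        return cv2.inRange(ycrcb, lower, upper)

    window = (400, 300, 200, 200)                 # illustrative initial search window (x, y, w, h)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
    cap = cv2.VideoCapture("finger_video.avi")    # hypothetical file name
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = skin_mask(frame)
        _, window = cv2.meanShift(mask, window, criteria)   # track the finger window
    cap.release()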
(3) Transformation-matrix estimation based on the convolutional neural network
Once the original two-dimensional image is tilted, data are lost; the transformation matrix is therefore estimated with the CNN.
A. Finger position and coordinate representation
Let the position of the finger in the three-dimensional view be represented by the coordinate array x, y, z and its orientation by the angles α, β, γ, which denote the deviation angle, the elevation angle and the rotation angle respectively. In the front view of the three-dimensional model, x, y, z, α, β, γ are all initialized to 0. The real coordinate C1 of a point C0 on the finger surface is given by the formula C1 = M·C0, where M is the transformation matrix:

M = [H q; 0 1]

where H is the angle parameter, H = Rotz(α)·Roty(β)·Rotx(γ), with Rotz(α), Roty(β) and Rotx(γ) the rotation matrices for a rotation by angle α about the z-axis, by angle β about the y-axis and by angle γ about the x-axis; q is the position parameter, q = (x y z)T, where x, y, z are the position coordinates of the finger in the three-dimensional view.
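A numpy sketch of this homogeneous transformation (the angle conventions and the degree-to-radian handling are assumptions made for illustration):

    import numpy as np

    def rot_x(g):
        c, s = np.cos(g), np.sin(g)
        return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

    def rot_y(b):
        c, s = np.cos(b), np.sin(b)
        return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

    def rot_z(a):
        c, s = np.cos(a), np.sin(a)
        return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

    def transform_matrix(alpha, beta, gamma, x, y, z):
        # M = [[H, q], [0, 1]] with H = Rotz(alpha)·Roty(beta)·Rotx(gamma) and q = (x, y, z)^T
        H = rot_z(alpha) @ rot_y(beta) @ rot_x(gamma)
        q = np.array([[x], [y], [z]])
        return np.block([[H, q], [np.zeros((1, 3)), np.ones((1, 1))]])

    # map a surface point C0 (homogeneous coordinates) to its real coordinate C1 = M·C0
    M = transform_matrix(np.deg2rad(10), np.deg2rad(5), np.deg2rad(-3), 0.1, 0.0, 2.0)
    C0 = np.array([0.02, 0.01, 0.0, 1.0])
    C1 = M @ C0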
To simplify the calibration task and reduce interference from environmental factors such as light and reflections, the present invention places a marker on the finger. The method is insensitive to the position of the marker, as long as the marker is placed in the same place during training and testing. The marker defines four lines that identify the orientation of the nail, and the coordinates of the nail can be estimated by locating the red dot in the two-dimensional image. The black lines on the left can be clearly distinguished from one another.
Using the created three-dimensional model, 40000 two-dimensional training images are generated with different positions z and angles α, β, γ: z ∈ [-2, 3], at intervals of 0.5 pseudo-distance in model coordinates; α ∈ [-25, 30], β ∈ [-15, 35], γ ∈ [-23, 37], at intervals of 3.5 degrees. The pseudo-distance is the distance in the z direction relative to the 3D model; its unit depends on the size ratio between the human finger and the three-dimensional finger model.
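A small sketch of generating such a grid of rendering poses (how a pose is turned into a rendered image is outside this sketch; only the parameter grid described in the text is enumerated):

    import itertools
    import numpy as np

    z_values     = np.arange(-2.0, 3.0 + 1e-9, 0.5)    # pseudo-distance, step 0.5
    alpha_values = np.arange(-25.0, 30.0 + 1e-9, 3.5)  # degrees, step 3.5
    beta_values  = np.arange(-15.0, 35.0 + 1e-9, 3.5)
    gamma_values = np.arange(-23.0, 37.0 + 1e-9, 3.5)

    poses = list(itertools.product(z_values, alpha_values, beta_values, gamma_values))
    print(len(poses), "candidate (z, alpha, beta, gamma) poses")
    # the 40000 training images described above would be rendered from such poses
    # using the 3D finger model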
B. The CNN estimates the transformation matrix
The CNN is a neural-network architecture for regression and classification; it can detect errors and performs the conversion relatively robustly. The combined-output architecture of the network proposed by the present invention comprises six layers: the first convolutional layer is followed by the first max-pooling layer, and another convolutional layer by the second max-pooling layer and two fully connected layers. The experiments use 5*5 filters; the first convolutional layer uses 8 kernels and the second convolutional layer uses 25 kernels. In general, the convolutional neural network is designed around the convolution and max-pooling of the subsequent stages. The two-dimensional convolution producing the feature map r(a, b) is computed as:

r(a, b) = Σ_{u=1..l1} Σ_{v=1..l2} s(u, v)·g(a + u, b + v) + bm

where (a, b) is the pixel position on the feature map; l1*l2 is the size of the filter; s is the kernel weight; u is the row and v the column of the weight matrix; g is the input map; and bm is the bias;
The max-pooling activation is computed as:

L(a, b) = max_{0 ≤ u < l3, 0 ≤ v < l4} r(a·l3 + u, b·l4 + v)

where L is the feature map of the max-pooling layer and l3*l4 is the pooling size;
Max pooling is a nonlinear down-sampling method that reduces computational complexity. These layers take the output of the convolutional layers as input and reduce the resolution of the input. The fully connected MLP (multilayer neural network) contains 50 hidden units. The direction θ ∈ {α, β, γ} is computed from the outputs of the final linear layer, where O1, O2, ..., O7 are the 7 outputs of the last linear layer, written A = (O1, O2, ..., O7)T.
The first output is defined as Ofirst = O1, and the remaining 6 output values contribute to the three directions. It is worth noting that, because this form of angle encoding is differentiable, the chain rule can be used to back-propagate the error gradient through the network.
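A minimal PyTorch sketch of a combined-output network of this shape (the 32*32 single-channel input and the 2*2 pooling are assumptions chosen so that the layer sizes work out; the patent specifies only the 5*5 filters, the 8 and 25 kernels, the 50 hidden units and the 7 outputs):

    import torch
    import torch.nn as nn

    class TransformCNN(nn.Module):
        # Conv(8, 5x5) -> maxpool -> Conv(25, 5x5) -> maxpool -> FC(50) -> 7 outputs
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 8, kernel_size=5), nn.ReLU(inplace=True), nn.MaxPool2d(2),
                nn.Conv2d(8, 25, kernel_size=5), nn.ReLU(inplace=True), nn.MaxPool2d(2),
            )
            self.mlp = nn.Sequential(
                nn.Flatten(),
                nn.Linear(25 * 5 * 5, 50), nn.ReLU(inplace=True),   # 50 hidden units
                nn.Linear(50, 7),                                   # O1..O7: Ofirst + 6 angle outputs
            )

        def forward(self, img):
            return self.mlp(self.features(img))

    net = TransformCNN()
    out = net(torch.randn(1, 1, 32, 32))          # shape (1, 7)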
Assuming that the output A and the input B are linearly related, the linear regression model is expressed with a conditional probability density as follows:
P(A|B, ξ) = N(A|μ(B), σ²)    (9)
where A is the output of the linear regression; B is the input of the linear regression and the output of the MLP; ξ = (W, σ²) are the parameters, W being the weight matrix and σ² the variance; N(A|μ(B), σ²) denotes the normal distribution they follow, with μ(B) the expectation given B; the expected output is μ(WT·B);
Assuming that the training data are independent and identically distributed, the NLL (negative log-likelihood) should be reduced as far as possible in order to determine the optimal value of the weights W; the negative log-likelihood is computed as:

NLL(ξ) = (ALL/2)·log(2πσ²) + (1/(2σ²))·Σ_{i=1..ALL} (Ai - WT·Bi)²

and the sum of squared errors is defined as:

Σ_{i=1..ALL} (Ai - WT·Bi)²

where ALL is the number of data points to be optimized; Ai is the i-th output and Bi the i-th input; ξ are the parameters;

Because the directions are periodic, a von Mises distribution is selected to compute the log-likelihood function. The present invention uses a simple L2 norm for the cost function of the angles, computed over the TEST samples used for training.

The weights of the convolutional neural network are updated by minimizing the cost functions of α, β, γ and the negative log-likelihood of Ofirst.
Observing the training process of the combined-output convolutional neural network, the position and orientation variables converge separately at the beginning; after α, β, γ have stabilized, Ofirst converges. With its unique connections, and with position and orientation trained individually, the method sets the output orientations as the bias of the linear-regression layer. The expected output of Ofirst is therefore:

μ(d) = Wα·α + Wβ·β + Wγ·γ + W1·d1 + ... + Wn·dTEST    (13)
C. Texture mapping
Using the transformation matrix estimated by the convolutional neural network, the image can be aligned by texture mapping. This is an effective way of creating an appearance from a source image without cumbersome processes such as modeling or drawing three-dimensional curves for every detail. It allows the source two-dimensional frame to be "glued" onto the 3D finger surface at the estimated position and orientation. The mapped three-dimensional finger model is then drawn into the target image through a perspective projection referenced to that position and orientation. The source image is labeled a0, b0 in texture space, x, y, z in the three-dimensional model, and the aligned image is labeled (xp, yp) in screen space.
D. Nail and skin extraction
Because different images may have different occluded regions during the motion, the edges of the aligned finger images may differ. The common visible region, however, has the same appearance in the aligned images, and these images contain the pressure information in the form of color changes. To reduce the noise caused by the finger edges and the environment, the intersection of all visible regions over the whole video stream is extracted.
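A small numpy sketch of taking this intersection of visible regions over all aligned frames (the masks are assumed to be boolean arrays marking the visible finger pixels of each aligned frame):

    import numpy as np

    def common_visible_region(masks):
        # intersection (logical AND) of the visible-pixel masks of all aligned frames
        common = masks[0].copy()
        for m in masks[1:]:
            common &= m
        return common

    # illustrative use with random masks standing in for per-frame visibility
    masks = [np.random.rand(768, 1024) > 0.1 for _ in range(30)]
    roi = common_visible_region(masks)            # pixels visible in every aligned frame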
Step 4: The Gaussian process estimates the force
The aligned images obtained through the three steps above are divided into a training set and a test set, with about 82% of the image data used for training. A Gaussian random process (GP) is defined as follows:
m(d) = E[f(d)]    (14)
k(d, dT) = E[(f(d) - m(d))·(f(dT) - m(dT))]    (15)
where m(d) is the mean of the input training set and k(d, dT) is the covariance of the input training set;
Assuming that the GP has a zero mean function, the squared-exponential (SE) covariance is derived as:

k(d, dT) = λf²·exp(-||d - dT||²/(2·l²))

where l is the length scale, λf² is the signal-variance hyperparameter, and the ||·|| symbol denotes a mapping from a linear normed space to the non-negative real numbers, the norm representing the distance between a point in the space and the origin.
Points whose distance is less than l can be considered to have similar values. The training inputs are (d1, d2, d3, ..., dn), where di (i = 1, 2, 3, ..., n) is an aligned image reshaped into an n-dimensional vector. In addition, the estimated forces G = (G1, G2, G3, ..., Gn)T are the corresponding targets. The distribution of the predicted value f(d*) for a test input d* associated with the inputs d is therefore as follows:

E[f(d*)] = k*T·(Kij + λn²·I)^(-1)·G
Var[f(d*)] = k(d*, d*) - k*T·(Kij + λn²·I)^(-1)·k*

where E[f(d*)] is the mathematical expectation of the Gaussian-process force estimate f(d*); Kij is the covariance of di and dj, i.e. Kij = k(di, dj); λn is the matrix formed by the n noise-variance hyperparameters; I is the identity matrix; k denotes the covariance operation and k*T is the transpose of the covariance between the test input and the training inputs; G = (G1, G2, G3, ..., Gn)T is the matrix formed by the forces to be estimated; Var[f(d*)] is the variance of the Gaussian-process force estimate f(d*); and k(d*, d*) is the covariance of the test input d* with itself.
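A numpy sketch of this GP prediction with the SE covariance (the hyperparameter values here are placeholders; in the method they are fitted on the training set as described below):

    import numpy as np

    def se_kernel(X1, X2, length_scale=1.0, signal_var=1.0):
        # squared-exponential covariance k(d, d') = signal_var * exp(-||d - d'||^2 / (2 l^2))
        sq_dists = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
        return signal_var * np.exp(-sq_dists / (2.0 * length_scale ** 2))

    def gp_predict(X_train, G_train, x_star, length_scale=1.0, signal_var=1.0, noise_var=0.1):
        # posterior mean E[f(d*)] and variance Var[f(d*)] of the force at a test input x_star
        K = se_kernel(X_train, X_train, length_scale, signal_var)              # Kij
        k_star = se_kernel(X_train, x_star[None, :], length_scale, signal_var) # k*
        A = K + noise_var * np.eye(len(X_train))                               # Kij + noise-variance * I
        mean = k_star.T @ np.linalg.solve(A, G_train)
        var = se_kernel(x_star[None, :], x_star[None, :], length_scale, signal_var) \
              - k_star.T @ np.linalg.solve(A, k_star)
        return mean.item(), var.item()

    # toy usage: 20 training "images" flattened to 16-dimensional vectors, with forces G
    X = np.random.rand(20, 16)
    G = np.random.rand(20)
    mu, sigma2 = gp_predict(X, G, np.random.rand(16))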
The log-likelihood function to be maximized is computed as:

log p(G|d) = -(1/2)·GT·(Kij + λn²·I)^(-1)·G - (1/2)·log|Kij + λn²·I| - (n/2)·log 2π

where p(G|d) is the probability of G given d (in principle the base of the logarithm can be arbitrary; base 2 is taken for ease of computation), and λn² is the square of the matrix formed by the n noise-variance hyperparameters.
The optimal values of the three hyperparameters are obtained with the training set, so as to obtain an accurate estimate of the force.
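A sketch of fitting the three hyperparameters (length scale, signal variance, noise variance) by maximizing this log-likelihood with scipy (the log-parameterization and the choice of optimizer are assumptions made for illustration):

    import numpy as np
    from scipy.optimize import minimize

    def se_kernel(X1, X2, length_scale, signal_var):
        sq_dists = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
        return signal_var * np.exp(-sq_dists / (2.0 * length_scale ** 2))

    def negative_log_marginal_likelihood(log_params, X, G):
        # -log p(G|d); the hyperparameters are passed as logs so that they stay positive
        length_scale, signal_var, noise_var = np.exp(log_params)
        K = se_kernel(X, X, length_scale, signal_var) + noise_var * np.eye(len(X))
        _, logdet = np.linalg.slogdet(K)
        return 0.5 * G @ np.linalg.solve(K, G) + 0.5 * logdet + 0.5 * len(X) * np.log(2 * np.pi)

    # X: reshaped aligned training images, G: measured forces (toy data here)
    X, G = np.random.rand(20, 16), np.random.rand(20)
    result = minimize(negative_log_marginal_likelihood, x0=np.zeros(3), args=(X, G))
    best_length_scale, best_signal_var, best_noise_var = np.exp(result.x)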
The method for estimating finger force based on a convolutional neural network combined with super-Gaussian denoising according to the present invention is not limited to the embodiments described above; all technical schemes obtained by way of equivalent substitution fall within the scope of protection of the present invention.

Claims (7)

1. A method for estimating finger force based on a convolutional neural network combined with super-Gaussian denoising, characterized by comprising the following steps:
Step 1: capture a two-dimensional image of the finger with a fixed imaging source camera;
Step 2: denoise the image captured in step 1 with a super-Gaussian method;
Step 3: build a three-dimensional finger model from the processed two-dimensional image, where the transformation matrix from the two-dimensional image to the three-dimensional finger model is predicted with a convolutional neural network;
Step 4: predict the finger force with a Gaussian process, based on the deformation and color change of the finger and its surrounding skin.
2. The method for estimating finger force based on a convolutional neural network combined with super-Gaussian denoising according to claim 1, characterized in that: in step 1, the camera captures images with a resolution of 1024*768 pixels at a speed of 15 frames per second.
3. The method for estimating finger force based on a convolutional neural network combined with super-Gaussian denoising according to claim 2, characterized in that: in step 3, the finger position in the two-dimensional image is represented by the coordinate array a0, b0; in the three-dimensional view the finger position is represented by the coordinate array x, y, z and the orientation by the angles α, β, γ, which denote the deviation angle, the elevation angle and the rotation angle respectively; in the front view of the three-dimensional model, x, y, z, α, β, γ are all initialized to 0; the real coordinate C1 of a point C0 on the finger surface is given by the formula C1 = M·C0, where M is the transformation matrix:

M = [H q; 0 1]

where H is the angle parameter, H = Rotz(α)·Roty(β)·Rotx(γ), with Rotz(α), Roty(β) and Rotx(γ) the rotation matrices for a rotation by angle α about the z-axis, by angle β about the y-axis and by angle γ about the x-axis; q is the position parameter, q = (x y z)T, where x, y, z are the position coordinates of the finger in the three-dimensional view.
4. The method for estimating finger force based on a convolutional neural network combined with super-Gaussian denoising according to claim 3, characterized in that: in step 3, after the three-dimensional view is obtained, the image is aligned by texture mapping.
5. The method for estimating finger force based on a convolutional neural network combined with super-Gaussian denoising according to claim 4, characterized in that: when the finger force is estimated with the Gaussian process in step 4, the image obtained after the coordinates of the obtained three-dimensional view have been aligned is reshaped into an n-dimensional coordinate vector (d1, d2, d3, ..., dn) and used as the input of the Gaussian-process force estimation.
6. The method for estimating finger force based on a convolutional neural network combined with super-Gaussian denoising according to claim 5, characterized in that: the distribution of the Gaussian-process force estimate f(d*) for a test input d* associated with the inputs d is as follows:

E[f(d*)] = k*T·(Kij + λn²·I)^(-1)·G
Var[f(d*)] = k(d*, d*) - k*T·(Kij + λn²·I)^(-1)·k*

where E[f(d*)] is the mathematical expectation of the Gaussian-process force estimate f(d*); Kij is the covariance of di and dj, i.e. Kij = k(di, dj); λn is the matrix formed by the n noise-variance hyperparameters; I is the identity matrix; k denotes the covariance operation and k*T is the transpose of the covariance between the test input and the training inputs; G = (G1, G2, G3, ..., Gn)T is the matrix formed by the forces to be estimated; Var[f(d*)] is the variance of the Gaussian-process force estimate f(d*); and k(d*, d*) is the covariance of the test input d* with itself.
7. The method for estimating finger force based on a convolutional neural network combined with super-Gaussian denoising according to claim 6, characterized in that: the optimal values of the noise-variance hyperparameters are computed as:

log p(G|d) = -(1/2)·GT·(Kij + λn²·I)^(-1)·G - (1/2)·log|Kij + λn²·I| - (n/2)·log 2π

where p(G|d) is the probability of G given d, and λn² is the square of the matrix formed by the n noise-variance hyperparameters.
CN201711044262.8A 2017-10-31 2017-10-31 Method for estimating finger force based on a convolutional neural network combined with super-Gaussian denoising Pending CN107895145A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711044262.8A CN107895145A (en) 2017-10-31 2017-10-31 Method for estimating finger force based on a convolutional neural network combined with super-Gaussian denoising

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711044262.8A CN107895145A (en) 2017-10-31 2017-10-31 Method for estimating finger force based on a convolutional neural network combined with super-Gaussian denoising

Publications (1)

Publication Number Publication Date
CN107895145A true CN107895145A (en) 2018-04-10

Family

ID=61802528

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711044262.8A Pending CN107895145A (en) 2017-10-31 2017-10-31 Method based on convolutional neural networks combination super-Gaussian denoising estimation finger stress

Country Status (1)

Country Link
CN (1) CN107895145A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2218426A1 (en) * 2007-11-07 2010-08-18 Activelink Co., Ltd. Operation assist device
CN103886115A (en) * 2012-12-20 2014-06-25 上海工程技术大学 Building and calling method of three-dimension virtual body form based on different body types
CN103745058A (en) * 2014-01-09 2014-04-23 南京信息工程大学 Method for simulating tension/deformation on soft tissue epidermis of any shape
CN106067190A (en) * 2016-05-27 2016-11-02 俞怡斐 A kind of fast face threedimensional model based on single image generates and alternative approach
CN107248144A (en) * 2017-04-27 2017-10-13 东南大学 A kind of image de-noising method based on compression-type convolutional neural networks

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
NUTAN CHEN et al.: "Estimating finger grip force from an image of the hand using Convolutional Neural Networks and Gaussian Processes", 2014 IEEE International Conference on Robotics and Automation *
SEBASTIAN URBAN et al.: "Computing grip force and torque from finger nail images using Gaussian processes", 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems *
王婷婷: "Research on tactile feature modeling and extraction for two-dimensional images", China Master's Theses Full-text Database, Information Science and Technology *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110838088A (en) * 2018-08-15 2020-02-25 Tcl集团股份有限公司 Multi-frame noise reduction method and device based on deep learning and terminal equipment
CN110838088B (en) * 2018-08-15 2023-06-02 Tcl科技集团股份有限公司 Multi-frame noise reduction method and device based on deep learning and terminal equipment
CN109584507A (en) * 2018-11-12 2019-04-05 深圳佑驾创新科技有限公司 Driver behavior modeling method, apparatus, system, the vehicles and storage medium
CN110033419A (en) * 2019-04-17 2019-07-19 山东超越数控电子股份有限公司 A kind of processing method being adapted to warship basic image defogging
CN111207875A (en) * 2020-02-25 2020-05-29 青岛理工大学 Electromyographic signal-torque matching method based on multi-granularity parallel CNN model
CN111207875B (en) * 2020-02-25 2021-06-25 青岛理工大学 Electromyographic signal-torque matching method based on multi-granularity parallel CNN model

Similar Documents

Publication Publication Date Title
CN113065558B (en) Lightweight small target detection method combined with attention mechanism
CN111156984B (en) Monocular vision inertia SLAM method oriented to dynamic scene
CN106204638B (en) It is a kind of based on dimension self-adaption and the method for tracking target of taking photo by plane for blocking processing
CN108010078B (en) Object grabbing detection method based on three-level convolutional neural network
CN111311666B (en) Monocular vision odometer method integrating edge features and deep learning
Chen et al. Underwater image enhancement based on deep learning and image formation model
Yan et al. A factorization-based approach for articulated nonrigid shape, motion and kinematic chain recovery from video
CN107895145A (en) Method based on convolutional neural networks combination super-Gaussian denoising estimation finger stress
CN105046664B (en) A kind of image de-noising method based on adaptive EPLL algorithms
WO2019227479A1 (en) Method and apparatus for generating face rotation image
CN107705322A (en) Motion estimate tracking and system
CN104299245B (en) Augmented reality tracking based on neutral net
CN110879982B (en) Crowd counting system and method
CN105469041A (en) Facial point detection system based on multi-task regularization and layer-by-layer supervision neural networ
CN111563878A (en) Space target positioning method
CN110781736A (en) Pedestrian re-identification method combining posture and attention based on double-current network
CN107085733A (en) Offshore infrared ship recognition methods based on CNN deep learnings
CN113610046B (en) Behavior recognition method based on depth video linkage characteristics
CN107862680A (en) A kind of target following optimization method based on correlation filter
CN108537822A (en) Motion target tracking method based on weighting reliability estimating
CN112016454A (en) Face alignment detection method
Zhou et al. Faster R-CNN for marine organism detection and recognition using data augmentation
Hirner et al. FC-DCNN: A densely connected neural network for stereo estimation
CN110490165B (en) Dynamic gesture tracking method based on convolutional neural network
CN115205750B (en) Motion real-time counting method and system based on deep learning model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20180410