CN107633873A - An internet-based extracorporeal lithotripter control method for urological surgery - Google Patents
- Publication number: CN107633873A
- Authority: CN (China)
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Abstract
The invention belongs to the technical field of calculus (stone) therapy and discloses an internet-based extracorporeal lithotripter control method for urological surgery. The human body is scanned with a B-mode ultrasound probe, and the image is presented on a computer connected to the internet. From the position of the calculus on the scan section and the position of the lithotripter's wave-source focus, the CPU computes the spatial distance between them and a superposed motion trajectory; driving the two into relative motion along that trajectory yields the accurate position of the calculus. An acoustic waveform and an ultrasonic waveform are then transferred via a waveguide axis to that position to crush the calculus. The invention improves crushing efficiency and crushing effect, quickly and accurately positions the calculus at the wave-source focus in the shock-wave source cup, and reduces the difficulty of operating an extracorporeal shock-wave lithotripter.
Description
Technical field
The invention belongs to the technical field of calculus therapy, and more particularly to an internet-based extracorporeal lithotripter control method for urological surgery.
Background art
A calculus is a solid mass formed in a body duct or in the cavity of a hollow organ (such as the kidney, ureter, gall bladder or bladder). It occurs mainly in the gall bladder, bladder and renal pelvis, and is also seen in the pancreatic duct, salivary ducts and similar lumina. A calculus is composed of inorganic salts or organic matter; it typically has a core formed from shed epithelial cells, bacterial aggregates, parasite eggs or bodies, faecal lumps or foreign matter, on which inorganic salts or organic matter are deposited layer by layer. Depending on the affected organ, calculi differ in formation mechanism, composition, shape, texture and effect on the body. In general, a calculus can block the lumen, obstruct the drainage of the affected organ, and produce symptoms such as pain, bleeding or secondary infection. In the prior art, however, locating a calculus is in practice a repetitive cycle of searching, positioning, searching again and repositioning; it is not only time-consuming but also exhausting for the physician. Existing crushing methods are likewise inefficient, and the crushing effect is poor.
In summary, the problems in the prior art are: locating a calculus is a repetitive search-and-reposition process with long positioning times that wears down the physician, and existing crushing methods are inefficient with poor crushing effect.
Summary of the invention
In view of the problems in the prior art, the invention provides an internet-based extracorporeal lithotripter control method for urological surgery.
The invention is achieved as follows. The internet-based extracorporeal lithotripter control method for urological surgery comprises the following steps:
Step 1: scan the human body with a B-mode ultrasound probe, collect the section-detection image data of the probe, and present the image on a computer connected to the internet.
The method for analysing interference relationships between the computer's signals comprises the following steps:
(1) determine the characteristic parameters (CPs) of the interference signal in the radio-signal field, form the corresponding interference-space model from these parameters and, based on the established model, determine the interference-signal feature vector V_I to be analysed and the reference-signal feature vector V_S;
(2) based on the interference-space model, define for the interference-signal feature vector V_I its displacement vector towards the reference-signal feature vector V_S;
(3) define the projection of the displacement vector onto a coordinate axis of the interference space as the distance between V_I and V_S in that CP dimension, that is:
d_{CP_i,(I,S)} = PRJ_{CP_i}(ΔV_{I→S});
where the operator PRJ(·) denotes projection onto a given CP dimension;
(4) define the disturbance state of the interference signal with respect to the reference signal as S, representing the interference relationship between them;
(5) once interference has formed, first choose and determine the interference-effect parameter EP; for an interference signal this parameter is usually the signal power p or energy e;
(6) define the degree of interference of the interference signal with respect to the reference signal as G, to measure how strongly the interference signal affects the reference signal.
The method further comprises: for the multi-mode case in which the interference signal and the reference signal each contain several feature vectors, the disturbance state S(V_I, V_S) is computed from the disturbance-state matrix S[V_I, V_S]_{M×N}, whose element s_{k,l} represents the disturbance state between the k-th feature vector of V_I and the l-th feature vector of V_S. Only when no element of the two feature-vector sets interferes does S(V_I, V_S) = 0, meaning the interference signal forms no disturbance to the reference signal; conversely, when S(V_I, V_S) > 0 the interference signal forms a disturbance to the reference signal.
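The multi-mode rule above can be sketched in Python. This is an illustrative sketch only: the function names, the per-CP distance inputs and the thresholds Δ_{CP_i} are assumptions added for demonstration, not part of the disclosed method.

```python
import numpy as np

def disturbance_state(d, thresholds):
    """Single-pair disturbance state: 0 if the distance in any CP dimension
    reaches its threshold (signals separated, no interference), 1 if every
    CP distance falls below its threshold (interference)."""
    d = np.asarray(d, dtype=float)
    thresholds = np.asarray(thresholds, dtype=float)
    return 0 if np.any(d >= thresholds) else 1

def disturbance_state_matrix(dists, thresholds):
    """dists[k][l] holds the per-CP distances between the k-th interference
    feature vector and the l-th reference feature vector; returns the
    M x N disturbance-state matrix S[V_I, V_S]."""
    M, N = len(dists), len(dists[0])
    S = np.zeros((M, N), dtype=int)
    for k in range(M):
        for l in range(N):
            S[k, l] = disturbance_state(dists[k][l], thresholds)
    return S
```

A zero matrix then corresponds to S(V_I, V_S) = 0 (no disturbance); any positive entry corresponds to S(V_I, V_S) > 0.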
During image acquisition by the B-mode ultrasound probe, the projection data are used to compute the target image through an iterative model whose formula is

X^i = X^{i-1} + λ · (G_{i-1} − M_{i-1} · X^{i-1}) · M_{i-1}^T / (M_{i-1} · M_{i-1});

where X is the target image, M is the system matrix, G is the projection data, i is the iteration count, X^i is the result obtained after the i-th iteration, λ is the convergence coefficient with λ ∈ (0, 1), and M^T is the transpose of M. The initial value of the target image is set, and each pixel of the target image is iteratively updated with this model for a preset number of iterations to obtain the target image; in the iterative model the current grey value of a pixel converges uniformly towards the grey value of the previous iteration, and pixels of the target image whose grey value is below 0 are set to zero.
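The update can be sketched as a Landweber-style loop in Python. This is an illustrative sketch only: reading the normalisation term (M · M) as a scalar sum of squared matrix entries, and the function name, are assumptions.

```python
import numpy as np

def reconstruct(M, g, lam=0.5, iters=50):
    """Sketch of the iterative update
    X^i = X^{i-1} + lam * (G - M X^{i-1}) M^T / (M . M),
    zeroing pixels whose grey value drops below 0 after each step."""
    M = np.asarray(M, dtype=float)
    g = np.asarray(g, dtype=float)
    x = np.zeros(M.shape[1])        # initial value of the target image
    scale = np.sum(M * M)           # assumed reading of the (M . M) term
    for _ in range(iters):
        x = x + lam * (M.T @ (g - M @ x)) / scale
        x[x < 0] = 0.0              # set negative grey values to zero
    return x
```

With lam in (0, 1) and a well-conditioned system matrix, the iterates approach the image consistent with the projection data.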
Step 2: when the scan shows the calculus at a certain position on the human body on the display unit, input the position of the calculus on the scan section to the CPU through the input unit and lock it.
The display unit hides an image using multiple mixing parameters and several carrier images: through multiple mixing-embedding of images, the image information is embedded into the time-varying parameters of a digital image system. A mathematical model of the digital image system is established, and the key property that iterative learning identification can completely estimate a time-varying parameter within a finite time interval is used to achieve complete reconstruction of the system's image information. Experimental results show that the hidden image can be recovered completely with the iterative learning identification method, and extensive tests show that the proposed method resists JPEG compression, shearing, noise and median-filtering geometric attacks.
Denote the original image G by the sequence θ(t), the encrypted image G′ by the sequence x(t), the carrier image group F_i (i = 1, 2, …, n) by the sequences w_i(t), and the mixed image S_n by y(t); the system is then expressed as

x(t+1) = f(x(t), θ(t), t),
y^1(t) = g(x(t), t),
y(t) = h(y^1(t), t),

with t ∈ {0, 1, 2, …, N}, x(t) ∈ R^n, θ(t) ∈ R^1, y^1(t) ∈ R^1 and y(t) ∈ R^1. The nonlinear function f(x(t), θ(t), t) encrypts the original image, the nonlinear function g(x(t), t) is the one-step iterative mixing function of the encrypted image with the carrier image, and h(y^1(t), t) is the n-fold superposed mixing function. When the true parameter value is θ*(t), the system is written as

x*(t+1) = f(x*(t), θ*(t), t),
y^1(t) = g(x*(t), t),
y*(t) = h(y^1(t), t).
The iterative learning identification system for estimating θ*(t) is

x_k(t+1) = f(x_k(t), θ_k(t), t),
y_k^1(t) = g(x_k(t), t),
y_k(t) = h(y_k^1(t), t),

where k is the iteration count and the initial value is the same at each iteration. Assume that the partial derivatives of f with respect to x and θ, of g with respect to x, and of h with respect to g exist; denote them A_k(t), B_k(t), C_k(t) and D_k(t) respectively, and denote their bounds C_A, C_B, C_C and C_D.
Suppose the learning gain γ_k(t) is chosen such that

||1 − γ_k(t) · D_k(t+1) · C_k(t+1) · B_k(t)|| ≤ ρ < 1,
||γ_k · D_k(t) · C_k(t) − γ_k · D_k(t+1) · C_k(t+1) · A_k(t)|| ≤ C_{M'};

then, as k → ∞, θ_k(t) converges to θ*(t) on the interval {0, 1, …, N}.
Proof: by the differential mean-value theorem, denote M'_k = γ_k · D_k(t) · C_k(t) − γ_k · D_k(t+1) · C_k(t+1) · A_k(t); taking λ-norms on both sides and noting ||M'_k||_λ ≤ C_{M'}, substitution of (16) into (30) yields an inequality which, since 0 ≤ ρ < 1 and λ may be taken sufficiently large, establishes the convergence.
The CPU obtains a group of image sequences from the time-varying parameter θ_k(t) through the iterative learning identification method, and then reconstructs the hidden image according to the pixel rate of the original image. For the recovered hidden image and the mixed image, the root-mean-square error still reflects their difference, and the peak signal-to-noise ratio measures their fidelity. The root-mean-square error of the carrier image F and the mixed image S is computed over the images' grey values: the smaller the root-mean-square error, the more similar the two images, where the carrier image F has size M × M and the image S has size N × N.
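The two quality measures used here can be sketched in Python. This is an illustrative sketch; the peak grey value of 255 assumed for PSNR is a convention for 8-bit images, not stated in the source.

```python
import numpy as np

def rmse(a, b):
    """Root-mean-square error between two equal-sized grey images;
    smaller values mean the images are more similar."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return float(np.sqrt(np.mean((a - b) ** 2)))

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB; larger values mean higher
    fidelity of the image mixing."""
    e = rmse(a, b)
    return float('inf') if e == 0 else float(20.0 * np.log10(peak / e))
```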
The peak signal-to-noise ratio PSNR of the image F and the mixed image serves as the criterion of image fidelity: the larger its value, the higher the fidelity of the image mixing. Three mixed images and three mixing parameters realise the hiding of the original image, and the state expression of the digital image recovery system follows accordingly.
In the experiment, λ = 3.65 and the initial value x_k(0) = 0.47 are taken; the original image θ(t) is a grey-level image, the carrier images w_1(t), w_2(t) and w_3(t) are distinct grey-level images, and the mixed image y*(t) is obtained after four mixing-hiding steps.
According to the formula α_{i+1} = μ′ · α_i · (1 − α_i), with parameter μ′ = 3.82 and initial value α_1 = 0.75, the chaotic sequence {α_i} is generated by iterating the logistic map, and the experimental parameter sequence is chosen from it.
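The chaotic sequence can be generated directly from the logistic-map formula. An illustrative Python sketch; the sequence length and the derived values β_i = 1 − α_i are shown only for demonstration.

```python
def logistic_sequence(mu=3.82, alpha1=0.75, n=10):
    """Chaos sequence from the logistic map a_{i+1} = mu * a_i * (1 - a_i),
    with the parameter values used in the experiment as defaults."""
    seq = [alpha1]
    for _ in range(n - 1):
        a = seq[-1]
        seq.append(mu * a * (1.0 - a))
    return seq

# Learning-gain coefficients beta_i = 1 - alpha_i derived from the sequence
betas = [1.0 - a for a in logistic_sequence(n=2)]
```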
The learning gains determined by the sufficient condition for convergence satisfy β_i = 1 − α_i, i = 1, 2; to check the algorithm's performance, an objective function is defined.
Step 3: the CPU computes, from the position of the calculus on the scan section and the position of the lithotripter's wave-source focus, the spatial distance between them and the superposed motion trajectory.
Step 4: according to that spatial distance and superposed motion trajectory, the CPU drives the power device so that the two do relative motion along the trajectory, obtaining the accurate position of the calculus.
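Since both positions are expressed in the probe's common reference frame, Steps 3 and 4 reduce to computing the offset between two points. An illustrative Python sketch; the coordinate representation and function name are assumptions added for demonstration.

```python
import math

def focus_offset(calculus_xyz, focus_xyz):
    """Vector from the wave-source focus to the calculus, both given in
    the shared B-probe coordinate frame, and the Euclidean distance the
    positioning drive must close to bring the calculus onto the focus."""
    dx = [c - f for c, f in zip(calculus_xyz, focus_xyz)]
    return dx, math.sqrt(sum(d * d for d in dx))
```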
Step 5: determine the size of the calculus, the type of the calculus, or both; select the amplitude of the audio frequency used to produce the acoustic waveform, the amplitude being chosen on the basis of the size of the calculus.
Step 6: produce the acoustic waveform with the acoustic-wave actuator, and produce the ultrasonic waveform with ultrasonic frequency with the ultrasonic driver.
Step 7: transfer the acoustic waveform and the ultrasonic waveform via the waveguide axis to the accurate position of the calculus to crush it.
Further, the input unit may take input with a mouse, with a keyboard, or by touch on a touch screen.
Further, the position of the calculus on the scan section and the position of the lithotripter's wave-source focus share one common positional reference: the position of the B-mode ultrasound probe.
Further, in one superposed motion trajectory the calculus on the human body remains stationary and the wave-source focus moves relative to it, the motion ending when the calculus is located at the wave-source focus.
Further, in another superposed motion trajectory the wave-source focus remains stationary and the calculus on the human body moves relative to it, the motion ending when the calculus is located at the wave-source focus.
By using the ultrasonic driver to produce an ultrasonic waveform with ultrasonic frequency, the invention improves the crushing efficiency and crushing effect; at the same time, by using the B-mode ultrasound probe, the calculus can be automatically positioned quickly and accurately at the wave-source focus in the shock-wave source cup, reducing the difficulty of operating an extracorporeal shock-wave lithotripter.
Brief description of the drawings
Fig. 1 is a flow chart of the internet-based extracorporeal lithotripter control method for urological surgery provided by an embodiment of the invention.
Embodiment
To make the purpose, technical scheme and advantages of the invention clearer, the invention is further elaborated below with reference to embodiments. It should be understood that the specific embodiments described here merely illustrate the invention and are not intended to limit it.
The application principle of the invention is further described below with reference to the accompanying drawings.
As shown in Fig. 1, the internet-based extracorporeal lithotripter control method for urological surgery provided by an embodiment of the invention comprises the following steps:
S101: scan the human body with a B-mode ultrasound probe, collect the section-detection image data of the probe, and present the image on a computer connected to the internet.
S102: when the scan shows the calculus at a certain position on the human body on the display unit, input the position of the calculus on the scan section to the CPU through the input unit and lock it.
S103: the CPU computes, from the position of the calculus on the scan section and the position of the lithotripter's wave-source focus, the spatial distance between them and the superposed motion trajectory.
S104: according to that spatial distance and superposed motion trajectory, the CPU drives the power device so that the two do relative motion along the trajectory, obtaining the accurate position of the calculus.
S105: determine the size of the calculus, the type of the calculus, or both; select the amplitude of the audio frequency used to produce the acoustic waveform, the amplitude being chosen on the basis of the size of the calculus.
S106: produce the acoustic waveform with the acoustic-wave actuator, and produce the ultrasonic waveform with ultrasonic frequency with the ultrasonic driver.
S107: transfer the acoustic waveform and the ultrasonic waveform via the waveguide axis to the accurate position of the calculus to crush it.
The input unit provided by the invention may take input with a mouse, with a keyboard, or by touch on a touch screen.
The position of the calculus on the scan section and the position of the lithotripter's wave-source focus share one common positional reference: the position of the B-mode ultrasound probe.
In one superposed motion trajectory provided by the invention, the calculus on the human body remains stationary and the wave-source focus moves relative to it, the motion ending when the calculus is located at the wave-source focus.
In another superposed motion trajectory provided by the invention, the wave-source focus remains stationary and the calculus on the human body moves relative to it, the motion ending when the calculus is located at the wave-source focus.
In a preferred embodiment of the invention, the method for analysing interference relationships between the computer's signals comprises the following steps:
(1) determine the characteristic parameters (CPs) of the interference signal in the radio-signal field, form the corresponding interference-space model from these parameters and, based on the established model, determine the interference-signal feature vector V_I to be analysed and the reference-signal feature vector V_S;
(2) based on the interference-space model, define for the interference-signal feature vector V_I its displacement vector towards the reference-signal feature vector V_S;
(3) define the projection of the displacement vector onto a coordinate axis of the interference space as the distance between V_I and V_S in that CP dimension, that is:
d_{CP_i,(I,S)} = PRJ_{CP_i}(ΔV_{I→S});
where the operator PRJ(·) denotes projection onto a given CP dimension;
(4) define the disturbance state of the interference signal with respect to the reference signal as S, representing the interference relationship between them;
(5) once interference has formed, first choose and determine the interference-effect parameter EP; for an interference signal this parameter is usually the signal power p or energy e;
(6) define the degree of interference of the interference signal with respect to the reference signal as G, to measure how strongly the interference signal affects the reference signal.
The method further comprises: for the multi-mode case in which the interference signal and the reference signal each contain several feature vectors, the disturbance state S(V_I, V_S) is computed from the disturbance-state matrix S[V_I, V_S]_{M×N}, whose element s_{k,l} represents the disturbance state between the k-th feature vector of V_I and the l-th feature vector of V_S. Only when no element of the two feature-vector sets interferes does S(V_I, V_S) = 0, meaning the interference signal forms no disturbance to the reference signal; conversely, when S(V_I, V_S) > 0 the interference signal forms a disturbance to the reference signal.
In a preferred embodiment of the invention, during image acquisition by the B-mode ultrasound probe the projection data are used to compute the target image through an iterative model whose formula is

X^i = X^{i-1} + λ · (G_{i-1} − M_{i-1} · X^{i-1}) · M_{i-1}^T / (M_{i-1} · M_{i-1});

where X is the target image, M is the system matrix, G is the projection data, i is the iteration count, X^i is the result obtained after the i-th iteration, λ is the convergence coefficient with λ ∈ (0, 1), and M^T is the transpose of M. The initial value of the target image is set, and each pixel of the target image is iteratively updated with this model for a preset number of iterations to obtain the target image; in the iterative model the current grey value of a pixel converges uniformly towards the grey value of the previous iteration, and pixels of the target image whose grey value is below 0 are set to zero.
In a preferred embodiment of the invention, the display unit hides an image using multiple mixing parameters and several carrier images: through multiple mixing-embedding of images, the image information is embedded into the time-varying parameters of a digital image system. A mathematical model of the digital image system is established, and the key property that iterative learning identification can completely estimate a time-varying parameter within a finite time interval is used to achieve complete reconstruction of the system's image information. Experimental results show that the hidden image can be recovered completely with the iterative learning identification method, and extensive tests show that the proposed method resists JPEG compression, shearing, noise and median-filtering geometric attacks.
Denote the original image G by the sequence θ(t), the encrypted image G′ by the sequence x(t), the carrier image group F_i (i = 1, 2, …, n) by the sequences w_i(t), and the mixed image S_n by y(t); the system is then expressed as

x(t+1) = f(x(t), θ(t), t),
y^1(t) = g(x(t), t),
y(t) = h(y^1(t), t),

with t ∈ {0, 1, 2, …, N}, x(t) ∈ R^n, θ(t) ∈ R^1, y^1(t) ∈ R^1 and y(t) ∈ R^1. The nonlinear function f(x(t), θ(t), t) encrypts the original image, the nonlinear function g(x(t), t) is the one-step iterative mixing function of the encrypted image with the carrier image, and h(y^1(t), t) is the n-fold superposed mixing function. When the true parameter value is θ*(t), the system is written as

x*(t+1) = f(x*(t), θ*(t), t),
y^1(t) = g(x*(t), t),
y*(t) = h(y^1(t), t).

The iterative learning identification system for estimating θ*(t) is

x_k(t+1) = f(x_k(t), θ_k(t), t),
y_k^1(t) = g(x_k(t), t),
y_k(t) = h(y_k^1(t), t),

where k is the iteration count and the initial value is the same at each iteration. Assume that the partial derivatives of f with respect to x and θ, of g with respect to x, and of h with respect to g exist; denote them A_k(t), B_k(t), C_k(t) and D_k(t) respectively, and denote their bounds C_A, C_B, C_C and C_D. Suppose the learning gain γ_k(t) is chosen such that

||1 − γ_k(t) · D_k(t+1) · C_k(t+1) · B_k(t)|| ≤ ρ < 1,
||γ_k · D_k(t) · C_k(t) − γ_k · D_k(t+1) · C_k(t+1) · A_k(t)|| ≤ C_{M'};

then, as k → ∞, θ_k(t) converges to θ*(t) on the interval {0, 1, …, N}.
Proof: by the differential mean-value theorem, denote M'_k = γ_k · D_k(t) · C_k(t) − γ_k · D_k(t+1) · C_k(t+1) · A_k(t); taking λ-norms on both sides and noting ||M'_k||_λ ≤ C_{M'}, substitution of (16) into (30) yields an inequality which, since 0 ≤ ρ < 1 and λ may be taken sufficiently large, establishes the convergence.
The CPU obtains a group of image sequences from the time-varying parameter θ_k(t) through the iterative learning identification method, and then reconstructs the hidden image according to the pixel rate of the original image. For the recovered hidden image and the mixed image, the root-mean-square error still reflects their difference, and the peak signal-to-noise ratio measures their fidelity. The root-mean-square error of the carrier image F and the mixed image S is computed over the images' grey values: the smaller the root-mean-square error, the more similar the two images, where the carrier image F has size M × M and the image S has size N × N.
The peak signal-to-noise ratio PSNR of the image F and the mixed image serves as the criterion of image fidelity: the larger its value, the higher the fidelity of the image mixing. Three mixed images and three mixing parameters realise the hiding of the original image, and the state expression of the digital image recovery system follows accordingly.
In the experiment, λ = 3.65 and the initial value x_k(0) = 0.47 are taken; the original image θ(t) is a grey-level image, the carrier images w_1(t), w_2(t) and w_3(t) are distinct grey-level images, and the mixed image y*(t) is obtained after four mixing-hiding steps.
According to the formula α_{i+1} = μ′ · α_i · (1 − α_i), with parameter μ′ = 3.82 and initial value α_1 = 0.75, the chaotic sequence {α_i} is generated by iterating the logistic map, and the experimental parameter sequence is chosen from it.
The learning gains determined by the sufficient condition for convergence satisfy β_i = 1 − α_i, i = 1, 2; to check the algorithm's performance, an objective function is defined.
The foregoing is merely a preferred embodiment of the invention and is not intended to limit it; any modification, equivalent replacement and improvement made within the spirit and principles of the invention shall fall within the scope of protection of the invention.
Claims (5)
1. An internet-based extracorporeal lithotripter control method for urological surgery, characterised in that the method comprises the following steps:
Step 1: scan the human body with a B-mode ultrasound probe, collect the section-detection image data of the probe, and present the image on a computer connected to the internet;
the method for analysing interference relationships between the computer's signals comprises the following steps:
(1) determine the characteristic parameters (CPs) of the interference signal in the radio-signal field, form the corresponding interference-space model from these parameters and, based on the established model, determine the interference-signal feature vector V_I to be analysed and the reference-signal feature vector V_S;
(2) based on the interference-space model, define for the interference-signal feature vector V_I its displacement vector towards the reference-signal feature vector V_S;
(3) define the projection of the displacement vector onto a coordinate axis of the interference space as the distance between V_I and V_S in that CP dimension, that is:
d_{CP_i,(I,S)} = PRJ_{CP_i}(ΔV_{I→S});
where the operator PRJ(·) denotes projection onto a given CP dimension;
(4) define the disturbance state of the interference signal with respect to the reference signal as S, representing the interference relationship between them;
S(V_I, V_S) = { 0, if there exists CP_i such that d_{CP_i,(I,S)} ≥ Δ_{CP_i};
                1, if for all CP_i, d_{CP_i,(I,S)} < Δ_{CP_i} };
(5) once interference has formed, first choose and determine the interference-effect parameter EP; for an interference signal this parameter is usually the signal power p or energy e;
(6) define the degree of interference of the interference signal with respect to the reference signal as G, to measure how strongly the interference signal affects the reference signal;
the method further comprises: for the multi-mode case in which the interference signal and the reference signal each contain several feature vectors, the disturbance state S(V_I, V_S) is computed from the disturbance-state matrix S[V_I, V_S]_{M×N}, whose element s_{k,l} represents the disturbance state between the k-th feature vector of V_I and the l-th feature vector of V_S; only when no element of the two feature-vector sets interferes does S(V_I, V_S) = 0 and the interference signal forms no disturbance to the reference signal; conversely, when S(V_I, V_S) > 0 the interference signal forms a disturbance to the reference signal;
during image acquisition by the B-mode ultrasound probe, the projection data are used to compute the target image through an iterative model whose formula is expressed as:
X^i = X^{i-1} + λ · (G_{i-1} − M_{i-1} · X^{i-1}) · M_{i-1}^T / (M_{i-1} · M_{i-1});
where X is the target image, M is the system matrix, G is the projection data, i is the iteration count, X^i is the result obtained after the i-th iteration, λ is the convergence coefficient with λ ∈ (0, 1), and M^T is the transpose of M; the initial value of the target image is set, and each pixel of the target image is iteratively updated with the iterative model for a preset number of iterations to obtain the target image; in the iterative model the current grey value of a pixel converges uniformly towards the grey value of the previous iteration, and pixels of the target image whose grey value is below 0 are set to zero;
Step 2: when the scan shows the calculus at a certain position on the human body on the display unit, input the position of the calculus on the scan section to the CPU through the input unit and lock it;
the display unit hides an image using multiple mixing parameters and several carrier images: through multiple mixing-embedding of images, the image information is embedded into the time-varying parameters of a digital image system; a mathematical model of the digital image system is established, and the key property that iterative learning identification can completely estimate a time-varying parameter within a finite time interval is used to achieve complete reconstruction of the system's image information; experimental results show that the hidden image can be recovered completely with the iterative learning identification method, and extensive tests show that the proposed method resists JPEG compression, shearing, noise and median-filtering geometric attacks;
denote the original image G by the sequence θ(t), the encrypted image G′ by the sequence x(t), the carrier image group F_i (i = 1, 2, …, n) by the sequences w_i(t), and the mixed image S_n by y(t); the system is then expressed as:
x(t+1) = f(x(t), θ(t), t),
y^1(t) = g(x(t), t),
y(t) = h(y^1(t), t);
t ∈ {0, 1, 2, …, N}, x(t) ∈ R^n, θ(t) ∈ R^1, y^1(t) ∈ R^1, y(t) ∈ R^1; the nonlinear function f(x(t), θ(t), t) encrypts the original image, the nonlinear function g(x(t), t) is the one-step iterative mixing function of the encrypted image with the carrier image, and h(y^1(t), t) is the n-fold superposed mixing function; when the true parameter value is θ*(t), the system is written as:
x*(t+1) = f(x*(t), θ*(t), t)
y^1(t) = g(x*(t), t)
y*(t) = h(y^1(t), t);
The iterative learning identification system for estimating θ*(t) is:
x_k(t+1) = f(x_k(t), θ_k(t), t)
y_k^1(t) = g(x_k(t), t)
y_k(t) = h(y_k^1(t), t);
where k is the iteration number, and the initial value is the same at each iteration. Assume that the partial derivatives of f with respect to x and θ, the partial derivative of g with respect to x, and the partial derivative of h with respect to g all exist, and denote:
D_k(t) = ∂h(y_k^1(t), t) / ∂g(x_k(t), t), evaluated at g(x_k(t), t) = ξ_k(t), where ξ_k(t) = (1 − σ_1)g(x*(t), t) + σ_1 g(x_k(t), t), 0 < σ_1 < 1;
C_k(t) = ∂g(x_k(t), t) / ∂x_k(t), evaluated at x_k(t) = ξ_k(t), where ξ_k(t) = (1 − σ_2)x*(t) + σ_2 x_k(t), 0 < σ_2 < 1;
A_k(t) = ∂f(x_k(t), θ_k(t), t) / ∂x_k(t), evaluated at x_k(t) = ζ_k(t), where ζ_k(t) = (1 − σ_3)x*(t) + σ_3 x_k(t), 0 < σ_3 < 1;
Similarly, B_k(t) denotes the partial derivative of f with respect to θ_k(t), evaluated at θ_k(t) = η_k(t), where η_k(t) = (1 − σ_4)θ*(t) + σ_4 θ_k(t), 0 < σ_4 < 1; and denote the bounds of D_k(t), C_k(t), A_k(t), B_k(t) by C_D, C_C, C_A, C_B respectively;
If:
−1 ≤ ρ + (C_M′ · C_B) / (C_A^λ − C_A) < 1;
where the value of ρ satisfies:
||1 − γ_k(t)D_k(t+1)C_k(t+1)B_k(t)|| ≤ ρ < 1;
||γ_k D_k(t)C_k(t) − γ_k D_k(t+1)C_k(t+1)A_k(t)|| ≤ C_M′;
then, as k → ∞, θ_k(t) converges to θ*(t) on the interval {0, 1, ..., N};
Proof:
By the first-order differential mean value theorem, writing θ̃_k(t) = θ_d(t) − θ_k(t) and x̃_k(t) = x_d(t) − x_k(t) for the parameter and state errors:
θ̃_(k+1)(t) = θ̃_k(t) − γ_k { D_k(t+1)[g(x_d(t+1), t+1) − g(x_k(t+1), t+1)] − D_k(t)[g(x_d(t), t) − g(x_k(t), t)] }
= θ̃_k(t) − γ_k { D_k(t+1)C_k(t+1)[f(x_d(t), θ_d(t), t) − f(x_k(t), θ_k(t), t)] − D_k(t)C_k(t)[x_d(t) − x_k(t)] }
= θ̃_k(t) − γ_k { D_k(t+1)C_k(t+1){ A_k(t)[x_d(t) − x_k(t)] + B_k(t)[θ_d(t) − θ_k(t)] } − D_k(t)C_k(t)[x_d(t) − x_k(t)] }
= θ̃_k(t) − γ_k { D_k(t+1)C_k(t+1){ A_k(t)x̃_k(t) + B_k(t)θ̃_k(t) } − D_k(t)C_k(t)x̃_k(t) }
= θ̃_k(t) − γ_k D_k(t+1)C_k(t+1)B_k(t)θ̃_k(t) − γ_k D_k(t+1)C_k(t+1)A_k(t)x̃_k(t) + γ_k D_k(t)C_k(t)x̃_k(t)
= [1 − γ_k D_k(t+1)C_k(t+1)B_k(t)]θ̃_k(t) − γ_k D_k(t+1)C_k(t+1)A_k(t)x̃_k(t) + γ_k D_k(t)C_k(t)x̃_k(t)
= ρ θ̃_k(t) + [γ_k D_k(t)C_k(t) − γ_k D_k(t+1)C_k(t+1)A_k(t)]x̃_k(t)
Denote M′_k = γ_k D_k(t)C_k(t) − γ_k D_k(t+1)C_k(t+1)A_k(t);
Taking the λ-norm of both sides, noting that ||M′_k||_λ ≤ C_M′, and substituting equation (16) into equation (30), the inequality can be written as:
||θ̃_(k+1)(t)||_λ ≤ ρ ||θ̃_k(t)||_λ + (C_M′ · C_B) / (C_A^λ − C_A) · ||θ̃_k(t)||_λ;
Since 0 ≤ ρ < 1, taking λ sufficiently large gives:
lim_(k→∞) ||θ̃_k(t)||_λ = 0;
From the time-varying parameter θ_k(t) identified by the iterative learning identification method, the CPU obtains a group of image sequences and then reconstructs the hidden image according to the pixel rate of the original image. For the recovered hidden image and the mixed image, the root-mean-square error is used to reflect their error and the peak signal-to-noise ratio is used to measure their objective fidelity. The root-mean-square error between the carrier image F and the mixed image S is:
RMSE = [ (1/(MN)) Σ_(i=1..M) Σ_(j=1..N) [F(i, j) − S(i, j)]^2 ]^(1/2);
The smaller the root-mean-square error, the more similar the two images are; here both the carrier image F and the mixed image S are of size M × N;
The peak signal-to-noise ratio PSNR between the image F and the mixed image is:
PSNR = 10 lg( (M × N × 255^2) / ( Σ_(i=1..M) Σ_(j=1..N) (F(i, j) − S(i, j))^2 ) );
The peak signal-to-noise ratio PSNR serves as the criterion for measuring the objective fidelity of an image: the larger its value, the higher the fidelity of the image mixing. Three mixed images and three mixing parameters are chosen to realize the hiding of the original image; the state expression of the digital image recovery system is:
x(t+1) = (λ + θ(t)) x(t) (1 − x(t))
y_1(t) = α_1 w_1(t) + (1 − α_1) x(t)
y_2(t) = α_2 w_2(t) + (1 − α_2) y_1(t)
y_3(t) = α_3 w_3(t) + (1 − α_3) y_2(t)
y(t) = y_3(t);
In the experiment, λ = 3.65 and the initial value x_k(0) = 0.47 are taken; the original image θ(t) is a grayscale image, and the carrier images w_1(t), w_2(t), w_3(t) are three different grayscale images. After the four hiding and mixing operations, the mixed image y*(t) is obtained;
According to the formula α_(i+1) = μ′α_i(1 − α_i), with parameter μ′ = 3.82 and initial value α_1 = 0.75, the chaotic sequence {α_i} generated by iterating the Logistic map is produced, and the experimental parameter sequence is chosen from it;
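The forward hiding process under the concrete parameter choices above (λ = 3.65, x(0) = 0.47, μ′ = 3.82, α_1 = 0.75) can be sketched as follows. The pixel sequences θ(t) and w_i(t) here are synthetic stand-ins for the real grayscale images, and taking the first three iterates of the Logistic map as the mixing parameters is an assumption (the patent only says the parameters are chosen from the chaotic sequence):

```python
import numpy as np

LAM, MU = 3.65, 3.82

def alpha_sequence(n, a1=0.75):
    """Chaotic mixing parameters from the Logistic map a_{i+1} = MU*a_i*(1 - a_i)."""
    a = [a1]
    for _ in range(n - 1):
        a.append(MU * a[-1] * (1.0 - a[-1]))
    return a

def hide(theta, w1, w2, w3, x0=0.47):
    """Forward hiding: encrypt theta into x, then mix with three carriers."""
    T = len(theta)
    a1, a2, a3 = alpha_sequence(3)
    x = np.empty(T + 1)
    x[0] = x0
    y = np.empty(T)
    for t in range(T):
        x[t + 1] = (LAM + theta[t]) * x[t] * (1.0 - x[t])  # logistic-map encryption
        y1 = a1 * w1[t] + (1.0 - a1) * x[t]                # first mixing step
        y2 = a2 * w2[t] + (1.0 - a2) * y1                  # second mixing step
        y[t] = a3 * w3[t] + (1.0 - a3) * y2                # third mixing step -> output
    return y

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    T = 32
    theta = 0.2 * rng.random(T)             # normalized "original image" sequence
    w = [rng.random(T) for _ in range(3)]   # three synthetic carrier sequences
    print(hide(theta, *w)[:4])
```

Keeping θ(t) small (here at most 0.2) keeps λ + θ(t) below 4, so the logistic state x(t) remains inside (0, 1) and the mixed output stays a valid normalized pixel sequence.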
The learning gain determined by the convergence sufficient condition is:
γ_k(t) = 1 / [ (β_1 β_2 β_3) x_k(t) (1 − x_k(t)) ];
where β_i = 1 − α_i, i = 1, 2, 3; to verify the performance of the algorithm, an objective function is defined.
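This excerpt does not reproduce the parameter update law explicitly; the error recursion in the proof is consistent with the ILC update θ_(k+1)(t) = θ_k(t) + γ_k(t)[e_k(t+1) − e_k(t)], e_k(t) = y*(t) − y_k(t), which is the form the sketch below assumes. The hidden sequence θ(t) is synthetic, and the projection (clip) step is an added numerical safeguard, not part of the patent:

```python
import numpy as np

LAM, MU = 3.65, 3.82

def logistic_alphas(n, a1=0.75):
    a = [a1]
    for _ in range(n - 1):
        a.append(MU * a[-1] * (1.0 - a[-1]))
    return a

def simulate(theta, x0, b):
    """One trial of the hiding system; for fixed carriers y(t) = c(t) + b*x(t),
    so the carrier terms cancel in y* - y_k and are dropped here."""
    T = len(theta)
    x = np.empty(T + 1)
    x[0] = x0
    for t in range(T):
        x[t + 1] = (LAM + theta[t]) * x[t] * (1.0 - x[t])
    return x, b * x

T, x0 = 16, 0.47
betas = [1.0 - a for a in logistic_alphas(3)]
b = betas[0] * betas[1] * betas[2]
rng = np.random.default_rng(2)
theta_true = 0.1 * rng.random(T)            # hidden "image" sequence (synthetic)
_, y_true = simulate(theta_true, x0, b)

theta = np.zeros(T)                         # initial parameter estimate
for _ in range(T + 1):                      # iterative learning identification
    xk, yk = simulate(theta, x0, b)
    gamma = 1.0 / (b * xk[:-1] * (1.0 - xk[:-1]))    # learning gain from the text
    e = y_true - yk                                  # output error along the trial
    theta = np.clip(theta + gamma * (e[1:] - e[:-1]), 0.0, 0.2)  # projected update

print(np.max(np.abs(theta - theta_true)))
```

For this scalar logistic system the correction at time t becomes exact once all earlier θ(τ) are identified, so the estimate converges along the interval after at most T iterations, illustrating the theorem's conclusion θ_k(t) → θ*(t).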
Step 3, the CPU calculates the spatial distance between the calculus and the wave-source focus, and the superposed motion track between them, according to the position information of the calculus on the scanned section of the human body and the wave-source focal position information of the stone crusher;
Step 4, according to the spatial distance between the calculus on the human body and the wave-source focus of the stone crusher and the superposed motion track, the CPU drives the power device so that the two move relative to each other along the superposed motion track, thereby obtaining the accurate position of the calculus;
Step 5, determining the size of the calculus, the type of the calculus, or both; selecting the amplitude of the audio frequency used to produce the acoustic waveform, the amplitude of the audio frequency being selected based on the size of the calculus;
Step 6, producing the acoustic waveform with a sound-wave actuator, and producing the ultrasonic waveform at an ultrasonic frequency with an ultrasonic driver;
Step 7, transferring the acoustic waveform and the ultrasonic waveform via the waveguide axis to the accurate position of the calculus to crush it.
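Steps 3 and 4 amount to computing the vector from the wave-source focus to the stone, both expressed in the B-probe's common coordinate frame (claim 3), and driving the two together along that vector until the stone lies at the focus. A geometric sketch of that computation; the coordinate values and step size are hypothetical:

```python
import math

def spatial_distance(stone, focus):
    """Euclidean distance between the stone position and the wave-source focus (mm)."""
    return math.dist(stone, focus)

def motion_track(stone, focus, step_mm=1.0):
    """Superposed motion track: move the focus toward the stationary stone
    in straight-line steps (the claim-4 variant; step size is an assumption)."""
    track = [tuple(focus)]
    pos = list(focus)
    while spatial_distance(stone, pos) > step_mm:
        d = spatial_distance(stone, pos)
        pos = [p + step_mm * (s - p) / d for p, s in zip(pos, stone)]
        track.append(tuple(pos))
    track.append(tuple(stone))  # final alignment: the stone lies at the focus
    return track

if __name__ == "__main__":
    stone = (12.0, -3.5, 40.0)   # stone position on the scanned section (mm)
    focus = (0.0, 0.0, 0.0)      # wave-source focal position (mm)
    print(spatial_distance(stone, focus))
    print(len(motion_track(stone, focus)))
```

The claim-5 variant is the same computation with the roles reversed: the focus stays fixed and the stone (i.e. the patient support) is stepped along the track.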
2. The internet-based urology extracorporeal lithotripter control method as claimed in claim 1, characterized in that the input unit accepts input by mouse, by keyboard, or by touch on a touch screen.
3. The internet-based urology extracorporeal lithotripter control method as claimed in claim 1, characterized in that the position information of the calculus on the scanned section and the wave-source focal position information of the stone crusher share a common location reference, namely the position information of the B-ultrasound probe.
4. The internet-based urology extracorporeal lithotripter control method as claimed in claim 1, characterized in that, in the superposed motion track, the calculus on the human body remains stationary and the wave-source focus moves relative to it, the motion ending when the calculus is located at the wave-source focus.
5. The internet-based urology extracorporeal lithotripter control method as claimed in claim 1, characterized in that, in the superposed motion track, the wave-source focus remains stationary and the calculus on the human body moves relative to it, the motion ending when the calculus is located at the wave-source focus.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710943688.0A CN107633873A (en) | 2017-10-11 | 2017-10-11 | A kind of Urology Surgery extracorporeal lithotiptor control method based on internet |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107633873A true CN107633873A (en) | 2018-01-26 |
Family
ID=61103947
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710943688.0A Pending CN107633873A (en) | 2017-10-11 | 2017-10-11 | A kind of Urology Surgery extracorporeal lithotiptor control method based on internet |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107633873A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113343179A (en) * | 2021-06-02 | 2021-09-03 | 江苏邦鼎科技有限公司 | Striking and crushing method and system based on oblique shearing |
CN114550943A (en) * | 2022-04-21 | 2022-05-27 | 武汉烽火凯卓科技有限公司 | Shock wave incident point simulation planning method and system based on medical image |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103156643A (en) * | 2013-03-12 | 2013-06-19 | 深圳市海德医疗设备有限公司 | Method and device for extracorporeal shock wave lithotripter to locate stone automatically by using B ultrasound |
CN105101894A (en) * | 2013-05-09 | 2015-11-25 | 美国奥林匹斯外科技术吉鲁斯阿克米公司 | Multi-mode oscillating lithotripter |
CN105608717A (en) * | 2015-12-22 | 2016-05-25 | 肖古华 | CT system and CT image reconstruction method |
CN105049141B (en) * | 2015-05-26 | 2017-04-05 | 西安电子科技大学 | A kind of inter-signal interference relationship analysis method based on multidimensional interference space model |
CN106823150A (en) * | 2016-07-18 | 2017-06-13 | 山东省肿瘤防治研究院 | It is a kind of to facilitate breast tumor radiotherapy combined type locating frame device |
US20170221202A1 (en) * | 2016-01-29 | 2017-08-03 | Toshiba Medical Systems Corporation | Ultrasonic diagnostic apparatus and medical image processing apparatus |
WO2017142281A1 (en) * | 2016-02-15 | 2017-08-24 | Samsung Electronics Co., Ltd. | Image processing apparatus, image processing method and recording medium thereof |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN102844789B (en) | System and method for correcting data for deformations during image-guided procedures | |
US20190050999A1 (en) | Dilated Fully Convolutional Network for Multi-Agent 2D/3D Medical Image Registration | |
Hu et al. | Modelling prostate motion for data fusion during image-guided interventions | |
CN110074813A (en) | A kind of ultrasonic image reconstruction method and system | |
CN107633873A (en) | A kind of Urology Surgery extracorporeal lithotiptor control method based on internet | |
CN104603836A (en) | Enhanced method for correcting data for deformations during image guided procedures | |
Mendizabal et al. | Physics-based deep neural network for real-time lesion tracking in ultrasound-guided breast biopsy | |
Koutsourelakis | A novel Bayesian strategy for the identification of spatially varying material properties and model validation: an application to static elastography | |
CN104574329A (en) | Ultrasonic fusion imaging method and ultrasonic fusion imaging navigation system | |
CN107592802A (en) | System and method for enhancing guidance of abdominal laparoscopic surgical procedures using an anatomical model | |
CN106170784A (en) | Method and system for analyzing, storing and regenerating information | |
Liang et al. | Synthesis and edition of ultrasound images via sketch guided progressive growing GANS | |
Qin et al. | Reconstructing the full tongue contour from EMA/X-ray microbeam | |
Sedeh et al. | Modeling, simulation, and optimal initiation planning for needle insertion into the liver | |
CN106388774B (en) | Portable induction-type magnetoacoustic two-dimensional conductivity imaging device | |
Van Reeth et al. | The use of inexact helium wavefunctions in positron-helium scattering | |
Li et al. | A framework for correcting brain retraction based on an eXtended Finite Element Method using a laser range scanner | |
CN103680279A (en) | Cystoscope surgery simulated training method and system | |
Ye et al. | Filling model based soft tissue deformation model | |
Zayed et al. | Automatic frame selection using MLP neural network in ultrasound elastography | |
CN113842164A (en) | System and method for detecting BRAF-V600E mutation | |
CN104220893A (en) | Coordinate transformation of graphical objects registered to magnetic resonance image | |
Azampour et al. | Anatomy‐aware computed tomography‐to‐ultrasound spine registration | |
CN107049315A (en) | Injected-current thermoacoustic resistivity image reconstruction method based on an optimized iterative method | |
Orkisz et al. | Real-time target tracking applied to improve fragmentation of renal stones in extra-corporeal lithotripsy |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | | Application publication date: 20180126 |