CN117494058B - Respiratory motion prediction method, equipment and medium for assisting surgical robot puncture - Google Patents

Respiratory motion prediction method, equipment and medium for assisting surgical robot puncture

Info

Publication number
CN117494058B
CN117494058B CN202410002323.8A
Authority
CN
China
Prior art keywords
respiratory
data
signals
signal
time
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202410002323.8A
Other languages
Chinese (zh)
Other versions
CN117494058A (en)
Inventor
刘斌
史文青
沙连森
张文彬
黄锟
姚兴亮
邹学坤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Institute of Biomedical Engineering and Technology of CAS
Original Assignee
Suzhou Institute of Biomedical Engineering and Technology of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Institute of Biomedical Engineering and Technology of CAS filed Critical Suzhou Institute of Biomedical Engineering and Technology of CAS
Priority to CN202410002323.8A priority Critical patent/CN117494058B/en
Publication of CN117494058A publication Critical patent/CN117494058A/en
Application granted granted Critical
Publication of CN117494058B publication Critical patent/CN117494058B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B17/00Surgical instruments, devices or methods, e.g. tourniquets
    • A61B17/34Trocars; Puncturing needles
    • A61B17/3403Needle locating or guiding means
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/30Surgical robots
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/30Surgical robots
    • A61B34/32Surgical robots operating autonomously
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/103Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B5/11Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb
    • A61B5/113Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb occurring during breathing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/10Pre-processing; Data cleansing
    • G06F18/15Statistical pre-processing, e.g. techniques for normalisation or restoring missing data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • G06N3/0455Auto-encoder networks; Encoder-decoder networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/02Preprocessing

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Surgery (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Animal Behavior & Ethology (AREA)
  • Medical Informatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Biophysics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Pathology (AREA)
  • Computational Linguistics (AREA)
  • Robotics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Dentistry (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Physiology (AREA)
  • Probability & Statistics with Applications (AREA)
  • Measurement And Recording Of Electrical Phenomena And Electrical Characteristics Of The Living Body (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The invention provides a respiratory motion prediction method, equipment and medium for assisting surgical robot puncture. The method comprises the following steps: acquiring collected human physiological signals; preprocessing the human physiological signals to produce a multi-source data set; performing respiratory motion prediction on the preprocessed human physiological signals with a FEDformer model to obtain a prediction of the target's position change; and controlling the mechanical arm of the surgical robot to follow the position change of the target according to that prediction. To address the poor accuracy of respiratory motion prediction algorithms over long prediction horizons in robot-assisted puncture, the invention builds a multi-source respiratory-signal acquisition platform to collect richer respiratory data, adopts the frequency-domain-enhanced FEDformer model for respiratory motion prediction with multi-source information fusion, and improves the model's prediction performance through a frequency domain enhancement module and a frequency domain attention mechanism.

Description

Respiratory motion prediction method, equipment and medium for assisting surgical robot puncture
Technical Field
The invention relates to the technical field of deep learning, in particular to a respiratory motion prediction method, equipment and medium for assisting surgical robot puncture.
Background
Puncture surgical robots are widely used in cancer diagnosis and treatment and offer many advantages over conventional manual puncture: high-precision positioning and motion control, long-term stability, and reduced patient trauma. However, the patient's respiratory motion shifts the position of tissues and organs in the chest and abdomen relative to external fixed reference points, which significantly degrades the accuracy of the puncture procedure.
The respiratory motion signal is a quasi-periodic, non-stationary signal whose amplitude and period vary over time and from patient to patient. Real-time imaging guidance means using ultrasound or other real-time imaging techniques so that the physician can observe the target position during respiratory motion and puncture at an appropriate moment; this helps the physician time the puncture better, but it is limited by the delays of the robot controller transmitting position data and of the robot end effector. The system delay of a typical robot is about 300 ms or more, while the tumor position keeps changing with respiratory motion. Therefore, to achieve an accurate puncture under real-time imaging guidance, long-horizon respiratory motion prediction is needed to anticipate the change in tumor position, so that the mechanical arm can be adjusted in time according to the predicted data to compensate for the robot system delay and improve the accuracy of the puncture procedure.
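As a rough illustration of the horizon this implies (assuming the 50 Hz common sampling rate, i.e. 20 ms per time step, adopted in the embodiments below), compensating a system delay of about 300 ms requires predicting at least ⌈300 ms / 20 ms⌉ = 15 time steps ahead; longer horizons leave additional margin for trajectory planning of the mechanical arm.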
Respiratory motion prediction is a typical time-series prediction task, and deep neural networks are currently an effective way to solve such tasks, for example the adaptive-boosting multilayer perceptron neural network (ADMLP-NN), recurrent neural networks (RNN), and Transformer models. ADMLP-NN places high demands on the data set, requires the input data to be a stationary sequence, and its prediction accuracy in respiratory prediction tasks is poor. The basic idea of the recurrent neural network (RNN) is to introduce timing information into network training so that time-series data can be processed more accurately, but RNNs are constrained by their structural consistency and weak long-term dependencies: as the sequence length grows, the prediction error increases and the prediction speed decreases. The Transformer model can capture rich features in respiratory signals through a multi-head attention mechanism with residual connections, but prediction in the time domain is very expensive: its space complexity grows quadratically with the input window, the computational load becomes excessive, and accuracy drops. Many signals, however, are sparse in the frequency domain, and introducing frequency-domain information can effectively improve the performance of the Transformer model.
The data set is an important factor affecting a model's prediction performance. Predicting from single-dimensional data easily leads to overfitting, whereas multidimensional data supports more comprehensive and accurate prediction. Respiratory motion causes changes in the chest and abdominal cavity, and respiratory and electrocardiographic signals interact physiologically. A method based on multi-source signal fusion, which fully accounts for the complex correlations between respiratory motion and other physiological parameters, is therefore an effective way to improve the robustness and accuracy of respiratory motion prediction. Based on the above analysis, in order to meet the system-delay requirements of the puncture surgical robot and improve prediction accuracy for longer-horizon respiratory prediction tasks, a respiratory motion prediction method with multi-source information fusion is urgently needed.
Disclosure of Invention
To achieve the above and other advantages and in accordance with the purpose of the present invention, a first object of the present invention is to provide a respiratory motion prediction method for assisting a surgical robot puncture, comprising the steps of:
acquiring an acquired physiological signal of a human body; wherein the human physiological signals comprise electrocardiosignals, chest respiratory strain signals and abdomen respiratory signals;
preprocessing the human physiological signals to prepare a multi-source data set;
carrying out respiratory motion prediction through the preprocessed human physiological signals based on the FEDformer model to obtain a position change prediction result of the target;
and controlling the mechanical arm of the surgical robot to follow the position change of the target according to the position change prediction result of the target.
Further, the method also comprises the following steps:
and setting a corresponding puncture threshold value according to the position change prediction result of the target to obtain the optimal puncture time.
Further, training of the FEDformer model includes partitioning data of a training set with sliding windows.
Further, dividing the data of the training set with the sliding window comprises sliding a window of length n+Noutput along the time axis at an interval of 1 time step, acquiring one sample per slide, and combining the samples to obtain a training-set matrix for model training; where n is the number of time steps of the input data and Noutput is the number of time steps of the output data.
Further, the training of the FEDformer model comprises the following steps:
preprocessing the collected human physiological signals to prepare a multi-source data set;
the position column of the mark point with the largest abdomen variation amplitude is used as a target column, and the signal column of the non-target column is used as a characteristic learning column;
extracting a time column to make a time tag containing four dimensions of hours, minutes, seconds and milliseconds, and encoding the time tag;
dividing a data column of a non-time column into a training set, a verification set and a test set, and respectively entering an encoder;
dividing an input sequence into a period term and a trend term in an encoder architecture;
the period item is transmitted into a frequency domain enhancement module, an input sequence on an original time domain is mapped to a frequency domain, and then random sampling is carried out on the frequency domain;
in the feature learning stage, the sequence is fed into a fully connected layer with learnable parameters; finally, the sequence is complemented in the frequency domain, projected back to the time domain, and then enters a mixed expert decomposition module;
repeatedly using a mixed expert decomposition module to decompose the sequence into a period item and a trend item, giving the period item to a subsequent layer for learning, and finally transmitting the period item to a decoder;
in the decoder, the input of the decoder is decomposed into a period term and a trend term through a mixed expert decomposition module, the period term is transmitted to a subsequent layer for learning, and the frequency domain correlation learning is carried out on the period term of the encoder and the decoder through a frequency domain attention module;
and accumulating the trend items through the full connection layer and returning the trend items to the period items to obtain an output sequence.
Further, the preprocessing of the physiological signal of the human body comprises the following steps:
processing the chest respiratory strain signal and the electrocardiosignal by adopting a Butterworth filter, removing random noise in the signals and solving the problem of baseline drift;
removing abnormal values from outlier points and sharp peaks in the abdominal respiratory signals by the quartile method: all data are arranged in ascending order, the quartiles are calculated, the upper and lower quartiles are found, and the outliers are expressed as formula (1):
D_outliers = { X | X < Q1 − μ(Q3 − Q1) or X > Q3 + μ(Q3 − Q1) }   (1),
wherein Q3 is the upper quartile, Q1 is the lower quartile, μ is the anomaly coefficient, X is the data in the abdominal respiration signal, and D_outliers is the set of outliers;
smoothing the electrocardiosignal, the chest respiratory strain signal and the abdomen respiratory signal with a Savitzky-Golay filter, the smoothed data being given by formula (2):
y_i = Σ_j c_j · x_{i+j}   (2),
wherein x is the data to be fitted, y is the fitted output data, and c_j are the convolution coefficients of the least-squares polynomial fit within the smoothing window;
resampling the electrocardiosignal, the chest respiratory strain signal and the abdomen respiratory signal to align the electrocardiosignal, the chest respiratory strain signal and the abdomen respiratory signal at the same time point;
and normalizing the electrocardiosignal, the chest respiratory strain signal and the abdomen respiratory signal.
A second object of the present invention is to provide respiratory signal acquisition equipment for assisting surgical robot puncture, which is used to acquire the human physiological signals for the above method and which comprises an intelligent electrocardio wearable device, an optical positioning system, a plurality of markers and a base; wherein,
the marker is positioned at the abdomen position of the intelligent electrocardio wearable device;
the base is used for being set as a reference coordinate system;
the optical positioning system collects the positions and three-dimensional angles of the plurality of markers relative to its own coordinate system and converts them, through a conversion matrix from the optical positioning system coordinate system to the base reference coordinate system, into the positions and three-dimensional angles of the marker points relative to the base coordinate system, thereby obtaining the abdominal respiration signal;
and the intelligent electrocardio wearable equipment acquires electrocardiosignals and chest respiration strain signals when the optical positioning system acquires data.
Further, a displacement sensor and an electrocardiograph monitor are arranged in the intelligent electrocardiograph wearable device, the displacement sensor collects signals of the up-and-down fluctuation of the chest of a human body during breathing to serve as chest breathing strain signals, and the electrocardiograph monitor collects electrocardiograph signals.
Further, the optical positioning system is arranged right above the abdomen position of the intelligent electrocardio wearable device.
A third object of the present invention is to provide a computer readable storage medium having stored thereon program instructions that when executed implement a respiratory motion prediction method that assists in surgical robot penetration.
Compared with the prior art, the invention has the beneficial effects that:
aiming at the problem that the respiratory motion prediction algorithm under the auxiliary puncture of the puncture operation robot has poor precision in long-time prediction, the invention builds a respiratory signal multisource information acquisition platform to acquire richer respiratory data, adopts the FEDformer model based on frequency domain enhancement for respiratory motion prediction of multisource information fusion, and improves the prediction performance of the model through a frequency domain enhancement module and a frequency domain attention mechanism. The FEDformer model has very small errors in multi-scale time delay prediction of different breathing modes, has better robustness, has better generalization capability for practical application scenes, and has practical application value for improving the real-time tracking technology and puncture accuracy of the puncture operation robot.
The foregoing description is only an overview of the present invention, and is intended to provide a better understanding of the present invention, as it is embodied in the following description, with reference to the preferred embodiments of the present invention and the accompanying drawings. Specific embodiments of the present invention are given in detail by the following examples and the accompanying drawings.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiments of the invention and together with the description serve to explain the invention and do not constitute a limitation on the invention. In the drawings:
FIG. 1 is a schematic diagram of a respiratory signal acquisition apparatus for assisting a surgical robot penetration of embodiment 1;
FIG. 2 is a schematic diagram of real-time tracking of three-dimensional coordinates of a marker by the optical positioning system of example 1;
fig. 3 is a schematic diagram of an intelligent electrocardiographic wearable device of embodiment 1 collecting electrocardiographic signals and chest respiratory strain signals;
FIG. 4 is a flowchart of a respiratory motion prediction method for assisting a surgical robot penetration of example 2;
FIG. 5 is a flowchart showing the whole respiratory motion prediction method for assisting the surgical robot puncture according to embodiment 2;
FIG. 6 is a flowchart for obtaining the optimal puncture timing according to embodiment 2;
FIG. 7 is a training flow chart of the FEDformer model of example 2;
FIG. 8 is a flowchart of preprocessing human physiological signals in embodiment 2;
FIG. 9 is a schematic diagram of FEDformer model structure of example 2;
FIG. 10 is a schematic diagram of respiratory prediction in FEDformer modeling of example 2;
FIG. 11 is an enlarged view of a portion of FIG. 10;
fig. 12 is a schematic diagram of a storage medium of embodiment 3.
In the figure: 1. an optical positioning system; 2. intelligent electrocardiographic wearable devices; 3. a marker; 4. a base.
Detailed Description
The present invention will be further described with reference to the accompanying drawings and detailed description, wherein it is to be understood that, on the premise of no conflict, the following embodiments or technical features may be arbitrarily combined to form new embodiments.
Example 1
The impact of the data set on the deep learning is crucial and the performance of the deep learning model depends largely on the data used for training.
The embodiment provides respiratory signal acquisition equipment for assisting the surgical robot in puncturing, which is used for realizing the acquisition of respiratory signal multi-source information and is used for manufacturing a respiratory motion data set with multi-source information fusion.
Respiratory motion causes changes in the chest and abdominal cavity that appear as periodic expansions and contractions, and it also interacts with the heart and lungs, so there is a close physiological relationship between the respiratory signal and the electrocardiosignal. The data collected by the multi-source respiratory-signal acquisition platform mainly comprise the chest respiratory strain signal, the abdominal respiratory signal and the electrocardiosignal, so that respiratory motion is tracked more comprehensively from these three aspects.
As shown in fig. 1, the device comprises an optical positioning system 1, an intelligent electrocardio wearable device 2, a plurality of markers 3 and a base 4; wherein,
the marker is positioned at the abdomen of the intelligent electrocardio wearable device;
the base is used for being set as a reference coordinate system;
the optical positioning system collects the positions and three-dimensional angles of a plurality of markers relative to a coordinate system of the optical positioning system, and converts the positions and three-dimensional angles of the markers relative to the coordinate system of the optical positioning system into the positions and three-dimensional angles of the marker points relative to the coordinate system of the base through a conversion matrix from the coordinate system of the optical positioning system to the reference coordinate system of the base, so that abdomen breathing signals, namely the coordinate change condition of a human body from the back to the chest under the world coordinate system vertical to the ground, are obtained. The three-dimensional coordinates of the markers tracked in real time by the optical positioning system are shown in fig. 2.
And the intelligent electrocardio wearable equipment acquires electrocardio signals and chest respiration strain signals when the optical positioning system acquires data.
In this embodiment, a displacement sensor and an electrocardiograph monitor are arranged in the intelligent electrocardiograph wearable device, the displacement sensor collects signals of up-and-down fluctuation of the chest of a human body during breathing as chest respiratory strain signals, and the electrocardiograph monitor collects electrocardiograph signals. The electrocardiosignal and chest respiration strain signal collected by the intelligent electrocardio wearable device are shown in figure 3.
In this embodiment, the intelligent electrocardio wearable device may be an intelligent electrocardiograph T-shirt. As shown in fig. 1, in order to simulate the puncture-surgery scenario as closely as possible, the volunteer wears the intelligent electrocardiograph T-shirt and lies supine on the experiment bed, and the optical positioning system is arranged directly above the abdomen, i.e. directly above the abdomen position of the intelligent electrocardio wearable device. The base is also placed on the experiment bed, and 6 markers are placed at the abdomen position of the intelligent electrocardiograph T-shirt; the optical positioning system uses a top-down view to obtain the best acquisition field of view and records the position coordinates of the 6 markers relative to the base reference coordinate system at a sampling frequency of 60 Hz. The intelligent electrocardiograph T-shirt can simultaneously record multiple human physiological signals such as electrocardiosignals, chest respiratory strain signals and electrodermal activity; in this embodiment it records the chest respiratory strain signal and the three-lead electrocardiosignal at sampling frequencies of 50 Hz and 250 Hz respectively.
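As an illustration of the conversion from the optical positioning system's coordinate frame to the base reference coordinate system described above, the following sketch (Python with NumPy) converts a marker position between the two frames; the 4x4 matrix T_base_cam is a placeholder name for the conversion matrix, assumed here to come from a prior calibration of the optical positioning system against the base:

import numpy as np

def camera_to_base(p_cam, T_base_cam):
    # p_cam: 3-D marker position measured in the optical positioning system frame
    # T_base_cam: 4x4 homogeneous transformation matrix (base frame <- camera frame)
    p_h = np.append(p_cam, 1.0)        # homogeneous coordinates
    return (T_base_cam @ p_h)[:3]      # marker position in the base reference frame

# Applied to every acquired frame of every marker, the vertical component of the
# returned position traces the abdominal respiration signal described above.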
In order to meet the requirements on system delay of the puncture surgical robot and improve the prediction accuracy of respiratory prediction over longer horizons, this embodiment establishes a multi-source respiratory-signal acquisition platform and subsequently adopts a frequency-domain-enhanced Transformer model, the FEDformer model, to design a respiratory motion prediction method with multi-source information fusion, which has practical value for improving the real-time tracking and puncture accuracy of the puncture surgical robot. For a detailed description of the method, reference may be made to the corresponding descriptions in the following method embodiment, which are not repeated here.
Example 2
A respiratory motion prediction method for assisted surgery robot puncture adopts respiratory signal multisource information acquired by respiratory signal acquisition equipment for assisted surgery robot puncture provided in embodiment 1 to manufacture a respiratory motion data set with multisource information fusion. As shown in fig. 4 and 5, the method comprises the following steps:
s1, acquiring an acquired physiological signal of a human body; wherein, the physiological signals of the human body comprise electrocardiosignals, chest breathing strain signals and abdomen breathing signals;
s2, preprocessing human physiological signals to manufacture a multi-source data set;
in order to reduce the influence of random noise in the acquisition environment and the sensor on the signals, data preprocessing is performed on the premise of keeping the periodicity and the shape of the original signals. The data preprocessing includes noise reduction, smoothing, resampling, and normalization. As shown in fig. 8, preprocessing the physiological signal of the human body includes the steps of:
s21, processing chest respiratory strain signals and three-lead electrocardiosignals by adopting a first-order Butterworth filter so as to effectively remove random noise in the chest respiratory strain signals and the three-lead electrocardiosignals and solve the problem of baseline drift.
S22, removing abnormal values of some abnormal value points and sharp peak values in abdominal respiratory signals by adopting a quartile method, arranging all data in an ascending order, calculating to obtain four quartiles, finding an upper quartile and a lower quartile, and expressing the abnormal values as formula (1):
(1),
wherein Q is 3 For the upper quartile, Q 1 For the lower quartile, μ is an outlier, which can be set to 1.5 according to empirical values, x is the data in the abdominal respiration signal, doutliers is an outlier;
s23, smoothing electrocardiosignals, chest respiratory strain signals and abdomen respiratory signals by adopting a Savitzky-Golay (SG) filter, wherein the filter is used for carrying out polynomial fitting by using a least square method in a given window, so that the signals can be effectively smoothed, the interference of high-frequency noise can be reduced to a certain extent, meanwhile, the trend and the characteristics of respiratory signals are reserved, and the smoothed data are as shown in a formula (2):
(2),
wherein x is the data to be fitted, y is the output data after fitting,is a convolution coefficient;
s24, resampling the electrocardiosignal, the chest respiratory strain signal and the abdomen respiratory signal to align the electrocardiosignal, the chest respiratory strain signal and the abdomen respiratory signal at the same time point; in this embodiment, the sampling frequencies of the three data are respectively 60Hz, 50Hz and 250Hz, and the rescale function of MATLAB is converted into the sampling frequency of 50Hz, that is, each time step is 20ms, so that the subsequent model training and analysis are convenient.
And S25, normalizing the electrocardiosignal, the chest respiratory strain signal and the abdomen respiratory signal. Normalization refers to converting data into the same dimension and a range of a specified size, such as between 0 and 1 or-1 and 1. In the neural network training process, the dimension influence among different indexes can be eliminated through normalization, the calculation burden and the training time are reduced, and the training model convergence is quickened. This example uses Z-Score normalization with a transformation function as shown in equation (3):
(3),
wherein mean is the mean of the original data, std is the standard deviation of the original data.
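A minimal preprocessing sketch of steps S21-S25 is given below (Python/SciPy); the Butterworth cut-off frequencies and the Savitzky-Golay window length and polynomial order are illustrative assumptions, since the embodiment does not specify them and in practice they would be chosen per signal type:

import numpy as np
from scipy.signal import butter, filtfilt, savgol_filter, resample

def butter_denoise(sig, fs, low=0.05, high=2.0, order=1):
    # S21: first-order band-pass Butterworth filter; the high-pass side removes baseline drift
    b, a = butter(order, [low, high], btype='bandpass', fs=fs)
    return filtfilt(b, a, sig)

def remove_outliers_iqr(sig, mu=1.5):
    # S22: quartile method of formula (1); removed points are filled by linear interpolation
    q1, q3 = np.percentile(sig, [25, 75])
    bad = (sig < q1 - mu * (q3 - q1)) | (sig > q3 + mu * (q3 - q1))
    idx = np.arange(sig.size)
    out = sig.copy()
    out[bad] = np.interp(idx[bad], idx[~bad], sig[~bad])
    return out

def preprocess(sig, fs_in, fs_out=50, is_abdominal=False):
    sig = remove_outliers_iqr(sig) if is_abdominal else butter_denoise(sig, fs_in)
    sig = savgol_filter(sig, window_length=11, polyorder=3)     # S23: SG smoothing
    sig = resample(sig, int(sig.size * fs_out / fs_in))         # S24: common 50 Hz grid
    return (sig - sig.mean()) / sig.std()                       # S25: Z-Score, formula (3)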
In order to predict output data of Noutput time steps from input data of n time steps, this embodiment divides the training-set data with a sliding window: a window of length n+Noutput slides along the time axis at an interval of 1 time step, and each slide yields one sample; the samples are combined into the training-set matrix used for model training, as sketched below. The sliding-window method helps establish the relationship between input and output sequences, so that the model can learn the features and patterns of the time-series data and predict the next Noutput time steps.
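A sketch of this sliding-window division (Python/NumPy; the assumption that the target column is column 0 of the preprocessed data matrix is illustrative):

import numpy as np

def sliding_windows(data, n, n_output, target_col=0):
    # data: (T, C) multi-source matrix; a window of length n + n_output slides along
    # the time axis with stride 1, and each slide yields one (input, label) pair.
    X, Y = [], []
    for start in range(data.shape[0] - (n + n_output) + 1):
        X.append(data[start:start + n])                              # n input steps, all channels
        Y.append(data[start + n:start + n + n_output, target_col])   # n_output label steps
    return np.stack(X), np.stack(Y)

# e.g. X_train, Y_train = sliding_windows(train_data, n=96, n_output=15)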
The Fourier transform and the inverse Fourier transform convert signals between the time domain and the frequency domain. Signals are generally sparse in the frequency domain, i.e. their important information is concentrated in a few frequency components; retaining these important frequency components extracts the key features, reduces the complexity of the data and improves the prediction accuracy of the model. As shown in fig. 9, this embodiment uses the frequency-domain-enhanced FEDformer model to predict respiratory motion under a multi-scale time-delay window, and the trained model and data are saved so that they can be invoked for real-time prediction to assist the subsequent puncture operation. What most distinguishes the FEDformer model is its frequency-domain representation learning, which achieves more accurate respiratory motion prediction through a frequency domain enhancement module and a frequency domain attention mechanism.
The main architecture of the FEDformer model is similar to that of the Transformer model and adopts an encoder-decoder architecture, mainly comprising a frequency domain enhancement module (Frequency Enhanced Block, FEB), a frequency domain attention module (Frequency Enhanced Attention, FEA), a mixed expert decomposition module (MOE Decomp) and a forward propagation module (Feed Forward).
In the long-horizon sequence prediction task, the input sequence of the encoder is first passed to the frequency domain enhancement module, which performs representation learning in the frequency domain. The input sequence in the original time domain is first mapped to the frequency domain, and the frequency components are then randomly sampled, which effectively reduces the length of the input vector and the computational complexity. In the feature learning stage, the sampled sequence is passed through a fully connected layer with learnable parameters; it is finally complemented in the frequency domain, projected back to the time domain, and enters the next mixed expert decomposition module. In order to reduce the distribution difference between input and output, the mixed expert decomposition module decomposes the sequence into a period (seasonal) term (S) and a trend term (T); the trend component is discarded, while the period component is handed to the following layers for learning and is finally passed to the decoder.
In the decoder, the decoder input likewise passes through three layers of mixed expert decomposition modules, each of which decomposes the signal into a period component and a trend component; the period component is passed to the following layers for learning, and a frequency domain attention module performs frequency-domain correlation learning between the period terms of the encoder and decoder, with a flow similar to that of the frequency domain enhancement module: frequency-domain projection, frequency-domain sampling, feature learning, frequency-domain complementation, and projection back to the time domain. The frequency domain attention module focuses on the correlation between the period terms of the encoder and decoder so as to better learn the inherent correlation of the two signals. Finally, the trend components are accumulated through the fully connected layer and added back to the period term to obtain the output sequence, and a loss function between the output sequence and the label values is computed; the loss function is the root mean square error.
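To make the frequency domain enhancement and the series decomposition concrete, the following simplified sketch (PyTorch) illustrates the two mechanisms; the number of retained frequency modes, the moving-average kernel size and the layer sizes are assumptions, and this is an illustration of the idea rather than the exact FEDformer implementation:

import torch
import torch.nn as nn

class SeriesDecomp(nn.Module):
    # Split a sequence into a trend term (moving average) and a period/seasonal term (residual).
    def __init__(self, kernel_size=25):
        super().__init__()
        self.avg = nn.AvgPool1d(kernel_size, stride=1, padding=kernel_size // 2,
                                count_include_pad=False)

    def forward(self, x):                      # x: (batch, length, channels)
        trend = self.avg(x.transpose(1, 2)).transpose(1, 2)
        return x - trend, trend                # (seasonal, trend)

class FrequencyEnhancedBlock(nn.Module):
    # Map to the frequency domain, randomly keep a subset of modes, apply learnable
    # weights to them, zero-complement the rest and project back to the time domain.
    def __init__(self, seq_len, d_model, n_modes=32):
        super().__init__()
        n_modes = min(n_modes, seq_len // 2 + 1)
        self.register_buffer('index', torch.randperm(seq_len // 2 + 1)[:n_modes])
        self.weight = nn.Parameter(0.02 * torch.randn(n_modes, d_model, d_model,
                                                      dtype=torch.cfloat))

    def forward(self, x):                      # x: (batch, length, d_model)
        xf = torch.fft.rfft(x, dim=1)                              # to the frequency domain
        out = torch.zeros_like(xf)                                 # frequency-domain complement
        sel = xf[:, self.index, :]                                 # random frequency sampling
        out[:, self.index, :] = torch.einsum('bfd,fde->bfe', sel, self.weight)
        return torch.fft.irfft(out, n=x.size(1), dim=1)            # back to the time domain

# usage sketch: seasonal, trend = SeriesDecomp()(x); y = FrequencyEnhancedBlock(x.size(1), x.size(2))(x)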
Specifically, as shown in fig. 7, the training of the FEDformer model includes the following steps:
S31, in order to realize respiratory motion prediction based on multi-source signal fusion with the FEDformer model, the collected human physiological signals are preprocessed to produce the multi-source data set.
S32, the position column of the marker point with the largest abdominal variation amplitude is taken as the target column, and the remaining signal columns are taken as feature learning columns;
S33, the data set is then divided: the time column is extracted to make a time tag containing four dimensions of hours, minutes, seconds and milliseconds, and the time tag is encoded;
S34, the remaining data columns are divided into a training set, a verification set and a test set, which respectively enter the encoder;
S35, in the encoder architecture, the input sequence is first divided into a period term and a trend term; the trend term represents the overall level of the data and is separated out first without feature learning.
S36, the period term is passed to the frequency domain enhancement module, which performs representation learning in the frequency domain: the input sequence in the original time domain is first mapped to the frequency domain, and the frequency components are then randomly sampled, which effectively reduces the length of the input vector and the computational complexity.
S37, in the feature learning stage, the sampled sequence is passed through a fully connected layer with learnable parameters; finally, the sequence is complemented in the frequency domain, projected back to the time domain, and enters the next mixed expert decomposition module;
S38, in order to reduce the distribution difference between input and output, the mixed expert decomposition module is used repeatedly to decompose the sequence into a period term (seasonal, S) and a trend term (trend, T); the trend component is discarded, and the period component is handed to the following layers for learning and is finally passed to the decoder.
S39, in the decoder, the decoder input is decomposed into a period term and a trend term by the mixed expert decomposition module; the period term is passed to the subsequent layers for learning, and the frequency domain attention module performs frequency-domain correlation learning between the period terms of the encoder and decoder;
S310, the trend terms are accumulated through the fully connected layer and added back to the period term to obtain the output sequence.
S3, carrying out respiratory motion prediction on the basis of an FEDformer model through the preprocessed human physiological signals to obtain a position change prediction result of the target; the respiratory motion prediction results are shown in fig. 10.
The respiratory signals obtained through the multi-source respiratory-signal acquisition platform can effectively track the quasi-periodic motion of the tumor, and external respiration is strongly correlated with tumor motion, so the FEDformer model can accurately and effectively predict the change in tumor position. This embodiment uses a closed-loop following control algorithm for the mechanical arm so that the end of the arm follows the change in tumor position.
That is, S4, the surgical robot arm is controlled to follow the position change of the target according to the tumor position change prediction result.
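One possible, purely illustrative shape of such a following loop is sketched below; robot.move_to, robot.is_tracking, buffer.latest_window and predict are hypothetical placeholders for the real robot interface and the trained model, and the patent does not prescribe this particular control law:

import time

def follow_target(robot, predict, buffer, delay_steps=15, dt=0.02):
    # Closed-loop following: command the position predicted `delay_steps` ahead, so the
    # arm end-effector arrives roughly when the target does (system-delay compensation).
    while robot.is_tracking():
        window = buffer.latest_window()          # most recent n preprocessed time steps
        pred = predict(window)                   # next Noutput predicted target positions
        robot.move_to(pred[delay_steps - 1])     # delay-compensated setpoint
        time.sleep(dt)                           # one 20 ms time step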
Meanwhile, in order to adapt to different patient populations, a physician-adjustable puncture threshold d can be set to control the choice of puncture timing; by evaluating the amplitude and frequency of the patient's waveform changes, different puncture thresholds d can be set to obtain the optimal puncture timing. As shown in fig. 6, the method further comprises the following steps:
s5, setting a corresponding puncture threshold d according to a target position change prediction result to obtain an optimal puncture time, wherein the optimal puncture time is shown in fig. 11.
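A simple sketch of one way such a threshold d could be applied is shown below (Python/NumPy); the criterion used here, that the predicted target displacement stays within d over a short horizon, is an illustrative assumption, since the embodiment leaves the exact rule to the physician's adjustment of d:

import numpy as np

def puncture_candidates(pred_pos, d, horizon=15):
    # pred_pos: predicted 1-D target positions; returns the indices at which the motion
    # over the next `horizon` steps stays within the physician-set threshold d.
    ok = []
    for t in range(len(pred_pos) - horizon):
        seg = pred_pos[t:t + horizon]
        if seg.max() - seg.min() <= d:
            ok.append(t)
    return np.array(ok, dtype=int)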
In order to realize an accurate puncture under real-time imaging guidance, the invention provides a respiratory motion prediction method for assisting surgical robot puncture that predicts respiratory motion over long horizons so as to anticipate the change in tumor position; the mechanical arm is adjusted in time according to the predicted data to compensate for the robot system delay, thereby improving the accuracy of the puncture procedure.
In order to meet the requirements on system delay of the puncture surgical robot and improve the prediction accuracy of respiratory prediction over longer horizons, the invention builds a multi-source respiratory-signal acquisition platform and adopts a frequency-domain-enhanced Transformer model, the FEDformer model, to design a respiratory motion prediction method with multi-source information fusion, which has practical value for improving the real-time tracking and puncture accuracy of the puncture surgical robot.
Example 3
A computer readable storage medium having stored thereon program instructions that when executed implement a respiratory motion prediction method for assisting a surgical robot in puncturing, as shown in fig. 12. For detailed description of the method, reference may be made to corresponding descriptions in the above method embodiments, and details are not repeated here.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article or apparatus that comprises an element.
In this specification, each embodiment is described in a progressive manner, and identical and similar parts of each embodiment are all referred to each other, and each embodiment mainly describes differences from other embodiments.
The foregoing is illustrative of the embodiments of the present disclosure and is not to be construed as limiting the scope of the one or more embodiments of the present disclosure. Various modifications and alterations to one or more embodiments of this description will be apparent to those skilled in the art. Any modifications, equivalent substitutions, improvements, or the like, which are within the spirit and principles of one or more embodiments of the present disclosure, are intended to be included within the scope of the claims of one or more embodiments of the present disclosure.

Claims (8)

1. The respiratory motion prediction method for assisting the puncture of the surgical robot is characterized by comprising the following steps of:
acquiring an acquired physiological signal of a human body; wherein the human physiological signals comprise electrocardiosignals, chest respiratory strain signals and abdomen respiratory signals;
preprocessing the human physiological signals to prepare a multi-source data set;
carrying out respiratory motion prediction through the preprocessed human physiological signals based on the FEDformer model to obtain a position change prediction result of the target;
according to the position change prediction result of the target, controlling the mechanical arm of the surgical robot to follow the position change of the target;
the training of the FEDformer model comprises the following steps:
preprocessing the collected human physiological signals to prepare a multi-source data set;
the position column of the mark point with the largest abdomen variation amplitude is used as a target column, and the signal column of the non-target column is used as a characteristic learning column;
extracting a time column to make a time tag containing four dimensions of hours, minutes, seconds and milliseconds, and encoding the time tag;
dividing a data column of a non-time column into a training set, a verification set and a test set, and respectively entering an encoder;
dividing an input sequence into a period term and a trend term in an encoder architecture;
the period item is transmitted into a frequency domain enhancement module, an input sequence on an original time domain is mapped to a frequency domain, and then random sampling is carried out on the frequency domain;
in the characteristic learning stage, the sequence is transmitted into a full-connection layer as a learnable parameter, and finally, the full-connection layer is subjected to frequency domain complementation and projected back to a time domain, and then enters a mixed expert decomposition module;
repeatedly using a mixed expert decomposition module to decompose the sequence into a period item and a trend item, giving the period item to a subsequent layer for learning, and finally transmitting the period item to a decoder;
in the decoder, the input of the decoder is decomposed into a period term and a trend term through a mixed expert decomposition module, the period term is transmitted to a subsequent layer for learning, and the frequency domain correlation learning is carried out on the period term of the encoder and the decoder through a frequency domain attention module;
accumulating the trend items through the full connection layer and returning the trend items to the period items to obtain an output sequence;
the preprocessing of the human physiological signal comprises the following steps:
processing the chest respiratory strain signal and the electrocardiosignal by adopting a Butterworth filter, removing random noise in the signals and solving the problem of baseline drift;
removing abnormal values from outlier points and sharp peaks in the abdominal respiratory signals by the quartile method: all data are arranged in ascending order, the quartiles are calculated, the upper and lower quartiles are found, and the outliers are expressed as the following formula:
D_outliers = { X | X < Q1 − μ(Q3 − Q1) or X > Q3 + μ(Q3 − Q1) },
wherein Q3 is the upper quartile, Q1 is the lower quartile, μ is the anomaly coefficient, X is the data in the abdominal respiration signal, and D_outliers is the set of outliers;
smoothing the electrocardiosignal, the chest respiratory strain signal and the abdomen respiratory signal with a Savitzky-Golay filter, the smoothed data being expressed as the following formula:
y_i = Σ_j c_j · x_{i+j},
wherein x is the data to be fitted, y is the fitted output data, and c_j are the convolution coefficients of the least-squares polynomial fit within the smoothing window;
resampling the electrocardiosignal, the chest respiratory strain signal and the abdomen respiratory signal to align the electrocardiosignal, the chest respiratory strain signal and the abdomen respiratory signal at the same time point;
and normalizing the electrocardiosignal, the chest respiratory strain signal and the abdomen respiratory signal.
2. A method of respiratory motion prediction for assisting a surgical robotic penetration as recited in claim 1, further comprising the steps of:
and setting a corresponding puncture threshold value according to the position change prediction result of the target to obtain the optimal puncture time.
3. A method of respiratory motion prediction for assisting surgical robotic penetration as recited in claim 1, wherein: the training of the FEDformer model includes partitioning data of a training set with sliding windows.
4. A method of respiratory motion prediction for assisting a surgical robotic penetration as recited in claim 3, wherein: the data for dividing the training set by adopting the sliding window comprises sliding the window with the length of n+Noutput at intervals of 1 time step along the direction of a time axis, and acquiring one piece of data by sliding each time, and combining the data to obtain a training set matrix for model training; where n is the time step of the input data and Noutput is the time step of the output data.
5. A respiratory signal acquisition device for assisting surgical robot penetration, for performing human physiological signal acquisition according to the method of claim 1, characterized in that: comprises intelligent electrocardio wearable equipment, an optical positioning system, a plurality of markers and a base; wherein,
the marker is positioned at the abdomen position of the intelligent electrocardio wearable device;
the base is used for being set as a reference coordinate system;
the method comprises the steps that an optical positioning system collects positions and three-dimensional angles of a plurality of markers relative to a coordinate system of the optical positioning system, and the positions and the three-dimensional angles of the markers relative to the coordinate system of the optical positioning system are converted into positions and three-dimensional angles of marker points relative to the coordinate system of the base through a conversion matrix from the coordinate system of the optical positioning system to a reference coordinate system of the base, so that abdomen respiration signals are obtained;
and the intelligent electrocardio wearable equipment acquires electrocardiosignals and chest respiration strain signals when the optical positioning system acquires data.
6. A respiratory signal acquisition device for assisting a surgical robotic penetration as recited in claim 5, wherein: the intelligent electrocardio wearable device is internally provided with a displacement sensor and an electrocardio monitor, wherein the displacement sensor is used for collecting signals of the up-and-down fluctuation of the chest of a human body when the chest breathes to be used as chest respiration strain signals, and the electrocardio monitor is used for collecting electrocardio signals.
7. A respiratory signal acquisition device for assisting a surgical robotic penetration as recited in claim 6, wherein: the optical positioning system is arranged right above the abdomen position of the intelligent electrocardio wearable device.
8. A computer readable storage medium, having stored thereon program instructions which, when executed, implement the method of claim 1.
CN202410002323.8A 2024-01-02 2024-01-02 Respiratory motion prediction method, equipment and medium for assisting surgical robot puncture Active CN117494058B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410002323.8A CN117494058B (en) 2024-01-02 2024-01-02 Respiratory motion prediction method, equipment and medium for assisting surgical robot puncture

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410002323.8A CN117494058B (en) 2024-01-02 2024-01-02 Respiratory motion prediction method, equipment and medium for assisting surgical robot puncture

Publications (2)

Publication Number Publication Date
CN117494058A CN117494058A (en) 2024-02-02
CN117494058B true CN117494058B (en) 2024-04-09

Family

ID=89683310

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410002323.8A Active CN117494058B (en) 2024-01-02 2024-01-02 Respiratory motion prediction method, equipment and medium for assisting surgical robot puncture

Country Status (1)

Country Link
CN (1) CN117494058B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101843955A (en) * 2010-03-30 2010-09-29 江苏瑞尔医疗科技有限公司 Hybrid forecasting method for position signal of breath synchronous tracking system and forecaster
CN109727672A (en) * 2018-12-28 2019-05-07 江苏瑞尔医疗科技有限公司 Patient's thorax and abdomen malignant respiratory movement predicting tracing method
CN111067622A (en) * 2019-12-09 2020-04-28 天津大学 Respiratory motion compensation method for percutaneous lung puncture
CN115670675A (en) * 2022-10-12 2023-02-03 武汉大学 Double-arm puncture robot system integrating ultrasonic information and tactile information

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101843955A (en) * 2010-03-30 2010-09-29 江苏瑞尔医疗科技有限公司 Hybrid forecasting method for position signal of breath synchronous tracking system and forecaster
CN109727672A (en) * 2018-12-28 2019-05-07 江苏瑞尔医疗科技有限公司 Patient's thorax and abdomen malignant respiratory movement predicting tracing method
CN111067622A (en) * 2019-12-09 2020-04-28 天津大学 Respiratory motion compensation method for percutaneous lung puncture
CN115670675A (en) * 2022-10-12 2023-02-03 武汉大学 Double-arm puncture robot system integrating ultrasonic information and tactile information

Also Published As

Publication number Publication date
CN117494058A (en) 2024-02-02

Similar Documents

Publication Publication Date Title
CN107378944B (en) Multidimensional surface electromyographic signal artificial hand control method based on principal component analysis method
Bronzino Medical devices and systems
Zhang et al. Surgical tools detection based on modulated anchoring network in laparoscopic videos
US20210241908A1 (en) Multi-sensor based hmi/ai-based system for diagnosis and therapeutic treatment of patients with neurological disease
CN113940856B (en) Hand rehabilitation training device and method based on myoelectricity-inertia information
US20040162495A1 (en) Device for analysis of a signal, in particular a physiological signal such as a ECG signal
CN112022619A (en) Multi-mode information fusion sensing system of upper limb rehabilitation robot
Zhou et al. Analysis of interventionalists’ natural behaviors for recognizing motion patterns of endovascular tools during percutaneous coronary interventions
US20070050046A1 (en) Methods for generating a signal indicative of an intended movement
Banerjee et al. Deep neural network based missing data prediction of electrocardiogram signal using multiagent reinforcement learning
AU2022335276A1 (en) Recognition, autonomous positioning and scanning method for visual image and medical image fusion
CN115177273A (en) Movement intention identification method and system based on multi-head re-attention mechanism
CN117494058B (en) Respiratory motion prediction method, equipment and medium for assisting surgical robot puncture
Tosin et al. SEMG-based upper limb movement classifier: Current scenario and upcoming challenges
CN116313029B (en) Method, system and device for dynamic control optimization of digital acupuncture
Jain et al. Premovnet: Premovement eeg-based hand kinematics estimation for grasp-and-lift task
CN115813409A (en) Ultra-low-delay moving image electroencephalogram decoding method
Rodríguez et al. Hilbert transform and neural networks for identification and modeling of ECG complex
CN114847959A (en) Myocardial infarction positioning system and method in remote cardiac intervention operation
CN116685284A (en) Generating a mapping function for tracking the position of an electrode
Sikder et al. Heterogeneous hand guise classification based on surface electromyographic signals using multichannel convolutional neural network
JP2560651B2 (en) Body condition estimation device
Kæseler et al. Brain patterns generated while using a tongue control interface: a preliminary study with two individuals with ALS
Omisore et al. On Task-specific Autonomy in Robotic Interventions: A Multimodal Learning-based Approach for Multi-level Skill Assessment during Cyborg Catheterization
CN113520413B (en) Lower limb multi-joint angle estimation method based on surface electromyogram signals

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant