CN116152610B - Intelligent heart ultrasonic probe pose estimation model training method and pose estimation method - Google Patents
- Publication number
- CN116152610B CN116152610B CN202310349908.2A CN202310349908A CN116152610B CN 116152610 B CN116152610 B CN 116152610B CN 202310349908 A CN202310349908 A CN 202310349908A CN 116152610 B CN116152610 B CN 116152610B
- Authority
- CN
- China
- Prior art keywords
- loss function
- pose
- target
- probe
- heart ultrasonic
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/774—Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/03—Recognition of patterns in medical or anatomical images
Abstract
The invention discloses a training method for an intelligent heart ultrasonic probe pose estimation model and a pose estimation method, belonging to the technical field of intelligent cardiac ultrasound data processing. The training method comprises the following steps: acquiring training data; encoding the target cardiac ultrasound image and the acquired cardiac ultrasound image respectively; decoding the cardiac ultrasound encoding vector and the target pose encoding vector to obtain a decoded cardiac ultrasound image; calculating a first loss function, a second loss function, a third loss function and a total loss function; and optimizing parameters of the pose estimation model by reducing the total loss function to obtain the trained pose estimation model. Using the trained pose estimation model, the pose parameters of the probe can be estimated more accurately from the currently acquired ultrasound image, and the moving direction of the ultrasound probe can be guided, so that a clearer and more accurate echocardiographic section is acquired. The invention solves the problem of insufficient accuracy of artificial-intelligence-assisted ultrasound scanning in existing methods.
Description
Technical Field
The invention relates to the technical field of intelligent heart ultrasonic data processing, in particular to an intelligent heart ultrasonic probe pose estimation model training method and a pose estimation method.
Background
Ultrasound imaging of the heart (clinically known as echocardiography) uses ultrasound to display images of the heart, the great vessels and blood flow in real time. It is the most common imaging technique for patients with cardiovascular disease, and can only be performed by professionally trained sonographers in specialized examination rooms. The sonographer needs to acquire dynamic images of different sections for real-time diagnosis, and both the acquisition and the diagnosis of ultrasound images depend heavily on the doctor's experience. Therefore, only by lowering the threshold for ultrasound use can ultrasound become a truly universal and portable diagnostic tool.
Artificial intelligence models have come into the spotlight in fields such as image classification, target recognition and automatic navigation, and breakthrough progress has also been made in the medical field in assisting tasks such as ultrasound diagnosis.
At present, a common artificial-intelligence-assisted ultrasound scanning system takes the ultrasound image obtained by scanning with the ultrasound probe as input, judges imaging quality through a deep convolutional network, generates pose parameters including position and direction, and prompts the operator to select the moving direction of the probe according to these parameters. Such a system enables a novice operator without ultrasound examination experience to acquire multi-view standard sections of the transthoracic echocardiogram with the assistance of the deep learning system, thereby obtaining accurate ultrasound images of cardiac anatomy and evaluating key cardiac parameters. However, the success rate of acquiring a certain common cardiac section with this method is only 58%, far lower than that of a professional doctor. In addition, the method does not disclose its technical details, so its results cannot be reproduced for side-by-side comparison.
Disclosure of Invention
In order to solve the problems in the prior art, the invention provides the following technical scheme.
The first aspect of the invention provides a training method of an intelligent heart ultrasonic probe pose estimation model, which comprises the following steps:
acquiring training data, including a target heart ultrasonic image and corresponding probe pose information thereof, and an acquired heart ultrasonic image and corresponding probe pose information thereof;
encoding the target heart ultrasonic image into a target heart ultrasonic encoding vector and a target pose encoding vector, and encoding the acquired heart ultrasonic image into a heart ultrasonic encoding vector and a pose encoding vector;
decoding the heart ultrasonic coding vector and the target pose coding vector to obtain a decoded heart ultrasonic image;
calculating a first loss function by using probe pose information corresponding to the target heart ultrasonic image, probe pose information corresponding to the acquired heart ultrasonic image and pose coding vectors; calculating a second loss function by using the target cardiac ultrasound encoding vector and the cardiac ultrasound encoding vector; calculating a third loss function using the target cardiac ultrasound image and the decoded cardiac ultrasound image; calculating a total loss function using the first loss function, the second loss function, and the third loss function;
and optimizing parameters of the pose estimation model by reducing the total loss function to obtain the trained pose estimation model.
Preferably, the calculating the first loss function by using the probe pose information corresponding to the target cardiac ultrasound image, the probe pose information corresponding to the acquired cardiac ultrasound image and the pose encoding vector comprises:
L1 = ||z_p - (p - p_t)||;
wherein L1 is the first loss function; p is the probe pose information corresponding to the acquired cardiac ultrasound image; p_t is the probe pose information corresponding to the target cardiac ultrasound image; and z_p is the pose encoding vector.
Preferably, the calculating the second loss function using the target cardiac ultrasound encoding vector and the cardiac ultrasound encoding vector comprises:
L2 = ||e_t - e||;
wherein L2 is the second loss function; e_t is the target cardiac ultrasound encoding vector; and e is the cardiac ultrasound encoding vector.
Preferably, the calculating a third loss function using the target cardiac ultrasound image and the decoded cardiac ultrasound image includes:
L3 = ||I_t - Î||;
wherein L3 is the third loss function; I_t is the target cardiac ultrasound image; and Î is the decoded cardiac ultrasound image.
Preferably, the calculating the total loss function using the first loss function, the second loss function, and the third loss function includes:
L = L1 + L2 + L3;
wherein L is the total loss function; L1 is the first loss function; L2 is the second loss function; and L3 is the third loss function.
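As an illustration of the three loss terms and their combination defined above, the following sketch expresses each as a mean-squared error; the function names, the choice of mean-squared error and the unweighted sum are assumptions of this sketch, not specifics disclosed by the patent:

```python
import numpy as np

def first_loss(pose_vec, pose_acquired, pose_target):
    # The pose encoding vector should match the pose difference p - p_t
    # between the acquired image's probe pose and the target probe pose.
    return float(np.mean((pose_vec - (pose_acquired - pose_target)) ** 2))

def second_loss(target_cardiac_vec, cardiac_vec):
    # The cardiac content encodings of the acquired and target images should agree.
    return float(np.mean((target_cardiac_vec - cardiac_vec) ** 2))

def third_loss(target_image, decoded_image):
    # Reconstruction error of the decoded image against the target image.
    return float(np.mean((target_image - decoded_image) ** 2))

def total_loss(l1, l2, l3):
    # The patent combines the three terms; a plain sum is assumed here.
    return l1 + l2 + l3
```

A weighted sum could replace the plain sum if one term needs to dominate training.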
Preferably, the encoding and decoding are performed using a deep convolutional network.
The second aspect of the invention provides a method for estimating the pose of an intelligent heart ultrasonic probe, which comprises the following steps:
inputting the currently acquired heart ultrasonic image into a trained pose estimation model to obtain an estimated pose of the intelligent heart ultrasonic probe;
the trained pose estimation model is obtained by training in advance by adopting the training method of the intelligent heart ultrasonic probe pose estimation model according to the first aspect.
Preferably, the intelligent heart ultrasonic probe pose estimation method further comprises: iteratively reducing the magnitude of the estimated pose using the reinforcement learning model to obtain positive feedback, so that the estimated pose iteratively approaches the target pose.
The third aspect of the invention provides a training device for an intelligent heart ultrasonic probe pose estimation model, which comprises:
the training data acquisition module is used for acquiring training data, and comprises a target heart ultrasonic image and corresponding probe pose information thereof, and an acquired heart ultrasonic image and corresponding probe pose information thereof;
the encoding module is used for encoding the target heart ultrasonic image into a target heart ultrasonic encoding vector and a target pose encoding vector, and encoding the acquired heart ultrasonic image into a heart ultrasonic encoding vector and a pose encoding vector;
the decoding module is used for decoding the heart ultrasonic coding vector and the target pose coding vector to obtain a decoded heart ultrasonic image;
the loss function calculation module is used for calculating a first loss function by using probe pose information corresponding to the target heart ultrasonic image, probe pose information corresponding to the acquired heart ultrasonic image and pose coding vectors; calculating a second loss function by using the target cardiac ultrasound encoding vector and the cardiac ultrasound encoding vector; calculating a third loss function using the target cardiac ultrasound image and the decoded cardiac ultrasound image; calculating a total loss function using the first loss function, the second loss function, and the third loss function;
and the optimization module is used for optimizing parameters of the pose estimation model by reducing the total loss function to obtain the trained pose estimation model.
A fourth aspect of the present invention provides a memory storing a plurality of instructions for implementing the training method of the estimation model according to the first aspect and the pose estimation method according to the second aspect.
A fifth aspect of the present invention provides an electronic device, which is characterized by comprising a processor and a memory connected to the processor, wherein the memory stores a plurality of instructions, and the instructions can be loaded and executed by the processor, so that the processor can execute the training method of the estimation model according to the first aspect and the pose estimation method according to the second aspect.
The beneficial effects of the invention are as follows: according to the technical scheme provided by the invention, the pose estimation model of the intelligent heart ultrasonic probe can be obtained through training, and the pose parameters of the probe can be estimated more accurately by utilizing the model according to the currently acquired ultrasonic image, so that the moving direction of the ultrasonic probe is guided, and a clearer and more accurate echocardiographic section is acquired. Solves the problem of insufficient accuracy in the artificial intelligence auxiliary ultrasonic scanning of the existing method.
Drawings
FIG. 1 is a schematic flow chart of a training method of an intelligent heart ultrasonic probe pose estimation model;
FIG. 2 is a schematic flow chart of an example training process according to the present invention;
fig. 3 is a functional structure schematic diagram of a training device of the intelligent heart ultrasonic probe pose estimation model.
Detailed Description
In order to better understand the above technical solutions, the following detailed description will be given with reference to the accompanying drawings and specific embodiments.
The method provided by the invention can be implemented in a terminal environment, and the terminal can comprise one or more of the following components: processor, memory and display screen. Wherein the memory stores at least one instruction that is loaded and executed by the processor to implement the method described in the embodiments below.
The processor may include one or more processing cores. The processor connects various parts within the overall terminal using various interfaces and lines, performs various functions of the terminal and processes data by executing or executing instructions, programs, code sets, or instruction sets stored in the memory, and invoking data stored in the memory.
The Memory may include random access Memory (Random Access Memory, RAM) or Read-Only Memory (ROM). The memory may be used to store instructions, programs, code, sets of codes, or instructions.
The display screen is used for displaying a user interface of each application program.
In addition, it will be appreciated by those skilled in the art that the structure of the terminal described above is not limiting and that the terminal may include more or fewer components, or may combine certain components, or a different arrangement of components. For example, the terminal further includes components such as a radio frequency circuit, an input unit, a sensor, an audio circuit, a power supply, and the like, which are not described herein.
Example 1
As shown in fig. 1, an embodiment of the present invention provides a training method for an intelligent cardiac ultrasound probe pose estimation model, including:
s101, acquiring training data, wherein the training data comprises a target heart ultrasonic image and corresponding probe pose information thereof, and an acquired heart ultrasonic image and corresponding probe pose information thereof;
s102, encoding the target heart ultrasonic image into a target heart ultrasonic encoding vector and a target pose encoding vector, and encoding the acquired heart ultrasonic image into a heart ultrasonic encoding vector and a pose encoding vector;
s103, decoding the heart ultrasonic coding vector and the target pose coding vector to obtain a decoded heart ultrasonic image;
s104, calculating a first loss function by using probe pose information corresponding to the target heart ultrasonic image, probe pose information corresponding to the acquired heart ultrasonic image and pose coding vectors; calculating a second loss function by using the target cardiac ultrasound encoding vector and the cardiac ultrasound encoding vector; calculating a third loss function using the target cardiac ultrasound image and the decoded cardiac ultrasound image; calculating a total loss function using the first loss function, the second loss function, and the third loss function;
s105, optimizing parameters of the pose estimation model by reducing the total loss function, and obtaining the trained pose estimation model.
In a preferred embodiment of the present invention, the calculating the first loss function using the probe pose information corresponding to the target cardiac ultrasound image, the probe pose information corresponding to the acquired cardiac ultrasound image, and the pose encoding vector includes:
L1 = ||z_p - (p - p_t)||;
wherein L1 is the first loss function; p is the probe pose information corresponding to the acquired cardiac ultrasound image; p_t is the probe pose information corresponding to the target cardiac ultrasound image; and z_p is the pose encoding vector.
Calculating a second loss function using the target cardiac ultrasound encoding vector and the cardiac ultrasound encoding vector comprises:
L2 = ||e_t - e||;
wherein L2 is the second loss function; e_t is the target cardiac ultrasound encoding vector; and e is the cardiac ultrasound encoding vector.
Calculating a third loss function using the target cardiac ultrasound image and the decoded cardiac ultrasound image includes:
L3 = ||I_t - Î||;
wherein L3 is the third loss function; I_t is the target cardiac ultrasound image; and Î is the decoded cardiac ultrasound image.
In another preferred embodiment of the present invention, the calculating the total loss function using the first loss function, the second loss function, and the third loss function includes:
L = L1 + L2 + L3;
wherein L is the total loss function; L1 is the first loss function; L2 is the second loss function; and L3 is the third loss function.
Wherein, the invention can adopt a depth convolution network for encoding and decoding.
The main purpose of the invention is to establish, during training, the mapping relationship between the ultrasound probe pose and the ultrasound image obtained by scanning at the current pose; in use, the mapping model obtained by training converts the ultrasound image into a prediction of the ultrasound probe pose parameters. The better the training effect, the higher the model's prediction accuracy, and the more accurately the moving direction of the ultrasound probe can be guided.
To achieve the above purpose, the invention designs a training method for the model, drawing on the following approach during the design process. The latest point cloud completion methods rely on deep convolutional networks and adopt techniques such as information decoupling and shape restoration, achieving good results. In such a method, after the complete point cloud pattern is partially occluded, an encoder (a deep convolutional network) encodes it into two outputs: a complete shape code and an occlusion code, where the occlusion code has the same dimensionality as the complete point cloud pattern and each dimension takes a value between 0 and 1 representing the degree of occlusion of that dimension. Given inputs with different degrees of occlusion, the complete shape code of the output is kept unchanged, so that the changes caused by occluding the point cloud pattern are converted into changes of the occlusion code content, completing the decoupling of the information. Then the complete shape code is combined with the code for the unoccluded case (every dimension of the occlusion code equal to 1) to restore an unoccluded point cloud pattern; through this double constraint, the correspondence between the occlusion code and the occlusion of the point cloud shape is strengthened, and the correlation between the decoupled pieces of information is reduced.
Unlike the above point cloud completion task, the present invention aims to predict the pose information of the probe from the currently acquired cardiac ultrasound image. The ultrasound images acquired by the probe at different poses cannot be modeled in the form of an occlusion percentage or similar, so the present invention adopts a design flow different from the above point cloud completion method:
1. information decoupling
A deep convolutional network, named the encoder, is designed whose input is an ultrasound image (of a specific size) and whose output is two vectors. The encoder encodes the ultrasound image into a cardiac ultrasound encoding vector and a pose encoding vector, where the pose encoding vector comprises attributes such as direction, angle and distance. The acquired data is divided into two types: target ultrasound images with their corresponding target pose encoding vectors (serving as references), and arbitrary ultrasound images with their corresponding pose encoding vectors. During training, in order to strengthen the correspondence between the ultrasound image and the pose encoding vector, the cardiac ultrasound encoding vectors generated from arbitrary ultrasound images are kept highly similar to each other, and also highly similar to the cardiac ultrasound encoding vector generated from the target ultrasound image; meanwhile, the pose encoding vector generated from any ultrasound image is kept similar to the pose encoding vector corresponding to that ultrasound image recorded during data acquisition.
2. Restoring a target image
A deep convolutional network, named the decoder, is designed whose inputs are two vectors, the target pose encoding vector and the cardiac ultrasound encoding vector, and whose output is an ultrasound image (of a specific size). It can be understood as the inverse of the encoder. The significance of introducing the decoding module is to ensure that the cardiac ultrasound encoding vector generated from any ultrasound image contains enough information to restore the target cardiac ultrasound image when combined with the target pose encoding vector, thereby further reducing feature loss during model training and improving model accuracy.
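The encoder and decoder described above can be sketched, at the level of data flow only, as follows; real implementations would be deep convolutional networks, and the linear maps, image size and vector dimensions here are placeholder assumptions chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stand-ins for the encoder/decoder pair: the encoder maps an
# ultrasound image to a cardiac content vector and a pose encoding vector;
# the decoder maps (content vector, target pose vector) back to an image of
# the original size. Linear maps replace convolutions to keep the sketch short.
IMG_SIDE = 64                  # image is IMG_SIDE x IMG_SIDE (assumed)
IMG = IMG_SIDE * IMG_SIDE      # flattened image size
CONTENT_DIM = 128              # cardiac ultrasound encoding vector size (assumed)
POSE_DIM = 6                   # e.g. 3 position + 3 orientation attributes (assumed)

W_enc = rng.standard_normal((IMG, CONTENT_DIM + POSE_DIM)) * 0.01
W_dec = rng.standard_normal((CONTENT_DIM + POSE_DIM, IMG)) * 0.01

def encode(image):
    # Split one projection into the two output vectors of the encoder.
    out = image.reshape(-1) @ W_enc
    return out[:CONTENT_DIM], out[CONTENT_DIM:]

def decode(cardiac_vec, target_pose_vec):
    # The decoder consumes both vectors and restores an image-shaped array.
    return (np.concatenate([cardiac_vec, target_pose_vec]) @ W_dec).reshape(IMG_SIDE, IMG_SIDE)
```

The shapes alone show the decoupling: image content and probe pose travel through separate vectors and are recombined only in the decoder.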
3. Reinforcement learning
A reinforcement learning module is designed whose input is the pose encoding vector; reducing the magnitude of this vector yields positive feedback, and otherwise negative feedback is obtained. Its significance is that, through this mechanism, the probe is guided to move toward poses closer to the target, finally reaching the target area so that an ideal cardiac ultrasound image can be acquired.
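The guidance loop implied by this mechanism can be sketched as follows; `estimate_pose` stands in for the trained model and `acquire_image` for the scanner, both hypothetical interfaces, and shrinking the estimated pose vector plays the role of positive feedback:

```python
import numpy as np

def guide_probe(estimate_pose, acquire_image, start_pose,
                step=0.5, tol=1e-2, max_iter=100):
    """Iteratively move the probe so the estimated pose deviation shrinks.

    estimate_pose: callable mapping an image to the pose encoding vector
                   (assumed to vanish at the target pose).
    acquire_image: callable mapping a probe pose to the scanned image.
    Both callables and the update rule are assumptions of this sketch.
    """
    pose = np.asarray(start_pose, dtype=float)
    for _ in range(max_iter):
        deviation = estimate_pose(acquire_image(pose))
        if np.linalg.norm(deviation) < tol:
            break  # positive feedback: deviation is small enough
        pose = pose - step * deviation  # move toward the target pose
    return pose
```

In practice the update would be a discrete movement suggestion to the operator rather than a direct pose assignment.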
By utilizing the design thought, the training method of the intelligent heart ultrasonic probe pose estimation model is obtained. The specific training process may be as shown in fig. 2:
each of the cardiac surfaces has an ultrasound image of the target heartThe corresponding data is marked as probe pose information. DataDuring acquisition, the ultrasound image of the heart acquired by the probe is called +.>,/>The corresponding probe pose information is +.>。
Encoder with a plurality of sensorsThe input of (2) is +.>The output is the ultrasonic coding vector of heart +.>Pose coding vector +.>。/>Corresponding real flag value +.>The calculation method of (2) is->And->Difference of->。/>The attribute value of (2) increases as the difference between the acquired heart ultrasound image and the target heart ultrasound image increases in the attribute; when->For encoder->The output of which is the target cardiac ultrasound encoding vector +.>Target pose coding vector +.>Hope->(i.e. constraint 1:>)。
will then、/>Input decoder->The output is decoded picture +.>. Then will->And->For comparison, expect->=/>(i.e. constraint 2:>=/>). In actual calculation, the formula +.>-/>Calculating to obtain a third loss function->Using the formula/>Calculating to obtain a second loss function->And utilize the formula ∈ ->Calculating to obtain a first loss function->. Finally searching for function for effectively reducing total loss through random gradient algorithmTo obtain an ideal model.
Example two
The embodiment of the invention provides an intelligent heart ultrasonic probe pose estimation method, which comprises the following steps:
inputting the currently acquired heart ultrasonic image into a trained pose estimation model to obtain an estimated pose of the intelligent heart ultrasonic probe;
the trained pose estimation model is obtained by training in advance by adopting the training method of the intelligent heart ultrasonic probe pose estimation model according to the first embodiment.
The method for estimating the pose of the intelligent heart ultrasonic probe provided by the invention can further comprise the following steps: and iteratively reducing the estimated pose by using the reinforcement learning model to obtain positive feedback, so that the estimated pose is iteratively close to the target position.
Example III
As shown in fig. 3, an embodiment of the present invention provides a training device for an intelligent cardiac ultrasound probe pose estimation model, including:
the training data acquisition module 301 is configured to acquire training data, including a target cardiac ultrasound image and corresponding probe pose information thereof, and an acquired cardiac ultrasound image and corresponding probe pose information thereof;
the encoding module 302 is configured to encode the target cardiac ultrasound image into a target cardiac ultrasound encoding vector and a target pose encoding vector, and encode the acquired cardiac ultrasound image into a cardiac ultrasound encoding vector and a pose encoding vector;
the decoding module 303 is configured to decode the cardiac ultrasound encoding vector and the target pose encoding vector to obtain a decoded cardiac ultrasound image;
the loss function calculation module 304 is configured to calculate a first loss function using probe pose information corresponding to the target cardiac ultrasound image, probe pose information corresponding to the acquired cardiac ultrasound image, and pose coding vector; calculating a second loss function by using the target cardiac ultrasound encoding vector and the cardiac ultrasound encoding vector; calculating a third loss function using the target cardiac ultrasound image and the decoded cardiac ultrasound image; calculating a total loss function using the first loss function, the second loss function, and the third loss function;
and the optimizing module 305 is configured to optimize parameters of the pose estimation model by reducing the total loss function, so as to obtain a trained pose estimation model.
Further, in the loss function calculation module, the calculating the first loss function by using probe pose information corresponding to the target cardiac ultrasound image, probe pose information corresponding to the acquired cardiac ultrasound image and the pose encoding vector includes:
L1 = ||z_p - (p - p_t)||;
wherein L1 is the first loss function; p is the probe pose information corresponding to the acquired cardiac ultrasound image; p_t is the probe pose information corresponding to the target cardiac ultrasound image; and z_p is the pose encoding vector.
Further, in the loss function calculation module, the calculating a second loss function using the target cardiac ultrasound encoding vector and the cardiac ultrasound encoding vector includes:
L2 = ||e_t - e||;
wherein L2 is the second loss function; e_t is the target cardiac ultrasound encoding vector; and e is the cardiac ultrasound encoding vector.
Further, in the loss function calculation module, the calculating a third loss function using the target cardiac ultrasound image and the decoded cardiac ultrasound image includes:
L3 = ||I_t - Î||;
wherein L3 is the third loss function; I_t is the target cardiac ultrasound image; and Î is the decoded cardiac ultrasound image.
Further, in the optimization module, the calculating the total loss function using the first loss function, the second loss function, and the third loss function includes:
L = L1 + L2 + L3;
wherein L is the total loss function; L1 is the first loss function; L2 is the second loss function; and L3 is the third loss function.
The training device of the intelligent heart ultrasonic probe pose estimation model provided by the invention can adopt a depth convolution network to carry out encoding and decoding.
The embodiment of the invention also provides a memory, which stores a plurality of instructions for realizing the training method of the estimation model according to the first embodiment and the pose estimation method according to the second embodiment.
The embodiment of the invention also provides an electronic device, which comprises a processor and a memory connected with the processor, wherein the memory stores a plurality of instructions which can be loaded and executed by the processor so that the processor can execute the training method of the estimation model as in the first embodiment and the pose estimation method as in the second embodiment.
While preferred embodiments of the present invention have been described, additional variations and modifications of those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It will likewise be apparent to those skilled in the art that various modifications and variations can be made to the present invention without departing from its spirit or scope. The appended claims are therefore intended to cover the preferred embodiments and all such alterations and modifications as fall within the scope of the claims or their equivalents.
Claims (8)
1. A training method of an intelligent heart ultrasonic probe pose estimation model is characterized by comprising the following steps:
acquiring training data, including a target heart ultrasonic image and corresponding probe pose information thereof, and an acquired heart ultrasonic image and corresponding probe pose information thereof;
encoding the target heart ultrasonic image into a target heart ultrasonic encoding vector and a target pose encoding vector, and encoding the acquired heart ultrasonic image into a heart ultrasonic encoding vector and a pose encoding vector;
decoding the heart ultrasonic coding vector and the target pose coding vector to obtain a decoded heart ultrasonic image;
calculating a first loss function by using probe pose information corresponding to the target heart ultrasonic image, probe pose information corresponding to the acquired heart ultrasonic image and pose coding vectors; calculating a second loss function by using the target cardiac ultrasound encoding vector and the cardiac ultrasound encoding vector; calculating a third loss function using the target cardiac ultrasound image and the decoded cardiac ultrasound image; calculating a total loss function using the first loss function, the second loss function, and the third loss function;
optimizing parameters of the pose estimation model by reducing the total loss function to obtain a trained pose estimation model;
the calculating a first loss function by using the probe pose information corresponding to the target heart ultrasonic image, the probe pose information corresponding to the acquired heart ultrasonic image and the pose encoding vector comprises: v* = p_t − p; L1 = ‖v − v*‖;
wherein L1 is the first loss function; p is the probe pose information corresponding to the acquired heart ultrasonic image; p_t is the probe pose information corresponding to the target heart ultrasonic image; v is the pose encoding vector; v* is the true mark value corresponding to the pose encoding vector v;
the calculating a second loss function using the target cardiac ultrasound encoding vector and the cardiac ultrasound encoding vector comprises: L2 = ‖z_t − z‖;
wherein L2 is the second loss function; z_t is the target heart ultrasound encoding vector; z is the cardiac ultrasound encoding vector;
the calculating a third loss function using the target cardiac ultrasound image and the decoded cardiac ultrasound image comprises: L3 = ‖I_t − I_d‖; wherein L3 is the third loss function; I_t is the target heart ultrasound image; I_d is the decoded cardiac ultrasound image.
2. The training method of the intelligent heart ultrasonic probe pose estimation model according to claim 1, wherein the calculating a total loss function using the first loss function, the second loss function, and the third loss function comprises: L = L1 + L2 + L3.
3. The training method of the intelligent heart ultrasonic probe pose estimation model according to claim 1, wherein the encoding and decoding are performed using a deep convolutional network.
4. An intelligent heart ultrasonic probe pose estimation method is characterized by comprising the following steps:
inputting the currently acquired heart ultrasonic image into a trained pose estimation model to obtain an estimated pose of the intelligent heart ultrasonic probe;
the trained pose estimation model is obtained by training in advance by adopting the training method of the intelligent heart ultrasonic probe pose estimation model according to any one of claims 1-3.
5. The intelligent cardiac ultrasound probe pose estimation method of claim 4, further comprising: iteratively adjusting the estimated pose using a reinforcement learning model that obtains positive feedback, so that the estimated pose iteratively approaches the target pose.
6. A training device for an intelligent heart ultrasonic probe pose estimation model, characterized by comprising:
the training data acquisition module is used for acquiring training data, and comprises a target heart ultrasonic image and corresponding probe pose information thereof, and an acquired heart ultrasonic image and corresponding probe pose information thereof;
the encoding module is used for encoding the target heart ultrasonic image into a target heart ultrasonic encoding vector and a target pose encoding vector, and encoding the acquired heart ultrasonic image into a heart ultrasonic encoding vector and a pose encoding vector;
the decoding module is used for decoding the heart ultrasonic coding vector and the target pose coding vector to obtain a decoded heart ultrasonic image;
the loss function calculation module is used for calculating a first loss function by using probe pose information corresponding to the target heart ultrasonic image, probe pose information corresponding to the acquired heart ultrasonic image and pose coding vectors; calculating a second loss function by using the target cardiac ultrasound encoding vector and the cardiac ultrasound encoding vector; calculating a third loss function using the target cardiac ultrasound image and the decoded cardiac ultrasound image; calculating a total loss function using the first loss function, the second loss function, and the third loss function;
the calculating the first loss function by using the probe pose information corresponding to the target heart ultrasonic image, the probe pose information corresponding to the acquired heart ultrasonic image and the pose encoding vector comprises: v* = p_t − p; L1 = ‖v − v*‖;
wherein L1 is the first loss function; p is the probe pose information corresponding to the acquired heart ultrasonic image; p_t is the probe pose information corresponding to the target heart ultrasonic image; v is the pose encoding vector; v* is the true mark value corresponding to the pose encoding vector v;
the calculating a second loss function using the target cardiac ultrasound encoding vector and the cardiac ultrasound encoding vector comprises: L2 = ‖z_t − z‖;
wherein L2 is the second loss function; z_t is the target heart ultrasound encoding vector; z is the cardiac ultrasound encoding vector;
the calculating a third loss function using the target cardiac ultrasound image and the decoded cardiac ultrasound image comprises: L3 = ‖I_t − I_d‖;
wherein L3 is the third loss function; I_t is the target heart ultrasound image; I_d is the decoded cardiac ultrasound image;
and the optimization module is used for optimizing parameters of the pose estimation model by reducing the total loss function to obtain the trained pose estimation model.
7. A memory, wherein a plurality of instructions are stored for implementing the training method of the estimation model according to any one of claims 1 to 3 or the pose estimation method according to any one of claims 4 to 5.
8. An electronic device comprising a processor and a memory coupled to the processor, the memory storing a plurality of instructions loadable and executable by the processor to enable the processor to perform the training method of the estimation model according to any one of claims 1 to 3 or the pose estimation method according to any one of claims 4 to 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310349908.2A CN116152610B (en) | 2023-04-04 | 2023-04-04 | Intelligent heart ultrasonic probe pose estimation model training method and pose estimation method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN116152610A CN116152610A (en) | 2023-05-23 |
CN116152610B true CN116152610B (en) | 2023-06-23 |
Family
ID=86340957
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112399828A (en) * | 2018-05-15 | 2021-02-23 | 纽约大学 | System and method for capture orientation of ultrasound images |
CN113160265A (en) * | 2021-05-13 | 2021-07-23 | 四川大学华西医院 | Construction method of prediction image for brain corpus callosum segmentation for corpus callosum state evaluation |
CN115511703A (en) * | 2022-10-31 | 2022-12-23 | 北京安德医智科技有限公司 | Method, device, equipment and medium for generating two-dimensional heart ultrasonic sectional image |
CN115546287A (en) * | 2022-09-28 | 2022-12-30 | 香港中文大学深圳研究院 | Method, system, terminal device and medium for processing transesophageal echocardiogram |
WO2023282743A1 (en) * | 2021-07-06 | 2023-01-12 | Corbotics B.V. | Robotized imaging system |
CN115615427A (en) * | 2022-09-09 | 2023-01-17 | 北京百度网讯科技有限公司 | Ultrasonic probe navigation method, device, equipment and medium |
CN115633216A (en) * | 2022-09-05 | 2023-01-20 | 北京智源人工智能研究院 | Training method of time domain motion consistency video generation model and video generation method |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111012377B (en) | Echocardiogram heart parameter calculation and myocardial strain measurement method and device | |
CN110009669B (en) | 3D/2D medical image registration method based on deep reinforcement learning | |
CN109697741B (en) | PET image reconstruction method, device, equipment and medium | |
US20220327701A1 (en) | Systems and methods for medical acquisition processing and machine learning for anatomical assessment | |
WO2020134769A1 (en) | Image processing method and apparatus, electronic device, and computer readable storage medium | |
CN114119549B (en) | Multi-mode medical image three-dimensional point cloud registration optimization method | |
CN104584074B (en) | Coupled segmentation in 3D conventional and contrast-enhanced ultrasound images | |
CN108701354A (en) | Identify the method and system of area-of-interest profile in ultrasonoscopy | |
CN103914823B (en) | The method of the quick exact non-linear registration solid medical image based on rarefaction representation | |
CN117078692B (en) | Medical ultrasonic image segmentation method and system based on self-adaptive feature fusion | |
CN112151169A (en) | Ultrasonic robot autonomous scanning method and system based on human-simulated operation | |
CN116958217B (en) | MRI and CT multi-mode 3D automatic registration method and device | |
CN112270993A (en) | Ultrasonic robot online decision-making method and system with diagnosis result as feedback | |
CN108171737B (en) | Medical image registration method and system with incompressible organ | |
CN115615427A (en) | Ultrasonic probe navigation method, device, equipment and medium | |
CN110197472A (en) | A kind of method and system for ultrasonic contrast image stabilization quantitative analysis | |
Guo et al. | Automatic segmentation of a fetal echocardiogram using modified active appearance models and sparse representation | |
CN116152610B (en) | Intelligent heart ultrasonic probe pose estimation model training method and pose estimation method | |
CN111383236B (en) | Method, apparatus and computer-readable storage medium for labeling regions of interest | |
CN114787867A (en) | Organ deformation compensation for medical image registration | |
CN115969414A (en) | Method and system for using analytical aids during ultrasound imaging | |
CN114010227B (en) | Right ventricle characteristic information identification method and device | |
CN114332271A (en) | Dynamic parameter image synthesis method and system based on static PET image | |
CN110189369B (en) | Ultrasonic and magnetic resonance image fusion registration method and terminal equipment | |
CN111932443A (en) | Method for improving registration accuracy of ultrasound and magnetic resonance by combining multi-scale expression with contrast agent |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||