CN116152610B - Intelligent heart ultrasonic probe pose estimation model training method and pose estimation method - Google Patents


Info

Publication number: CN116152610B
Authority
CN
China
Prior art keywords: loss function, pose, target, probe, heart ultrasonic
Legal status: Active
Application number: CN202310349908.2A
Other languages: Chinese (zh)
Other versions: CN116152610A (en)
Inventors: 贾宁 (Jia Ning), 杜超群 (Du Chaoqun), 黄高 (Huang Gao)
Current and original assignee: Beijing Zhiyuan Artificial Intelligence Research Institute
Application filed by Beijing Zhiyuan Artificial Intelligence Research Institute
Priority to CN202310349908.2A
Publication of CN116152610A
Application granted
Publication of CN116152610B

Classifications

    • G06V 10/774 — Generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
    • G06N 3/08 — Learning methods for neural networks
    • G06V 10/82 — Image or video recognition or understanding using neural networks
    • G06V 2201/03 — Recognition of patterns in medical or anatomical images
    • Y02T 10/40 — Engine management systems


Abstract

The invention discloses a training method for an intelligent cardiac ultrasound probe pose estimation model and a pose estimation method, belonging to the technical field of intelligent cardiac ultrasound data processing. The training method comprises the following steps: acquiring training data; encoding the target cardiac ultrasound image and the acquired cardiac ultrasound image separately; decoding the cardiac ultrasound encoding vector and the target pose encoding vector to obtain a decoded cardiac ultrasound image; calculating a first loss function, a second loss function, a third loss function, and a total loss function; and optimizing the parameters of the pose estimation model by reducing the total loss function to obtain a trained pose estimation model. With the trained pose estimation model, the pose parameters of the probe can be estimated more accurately from the currently acquired ultrasound image, and the moving direction of the ultrasound probe can be guided, so that a clearer and more accurate echocardiographic section is acquired. The invention solves the problem of insufficient accuracy of existing artificial-intelligence-assisted ultrasound scanning methods.

Description

Intelligent heart ultrasonic probe pose estimation model training method and pose estimation method
Technical Field
The invention relates to the technical field of intelligent heart ultrasonic data processing, in particular to an intelligent heart ultrasonic probe pose estimation model training method and a pose estimation method.
Background
Ultrasound imaging of the heart (clinically known as echocardiography) uses ultrasound to display real-time images of the heart, the great vessels, and blood flow. It is the most common imaging technique for patients with cardiovascular disease, yet it can only be performed by professionally trained sonographers in specialized examination rooms. The sonographer must acquire dynamic images of different sections for real-time diagnosis, and both the acquisition and the diagnosis of ultrasound images depend heavily on the physician's experience. Only by lowering the threshold for using ultrasound can it become a truly universal and portable diagnostic tool.
Artificial intelligence models have come into the spotlight in fields such as image classification, target recognition, and automatic navigation, and have also made breakthrough progress in the medical field in assisting tasks such as ultrasound diagnosis.
At present, a typical artificial-intelligence-assisted ultrasound scanning system takes the ultrasound image obtained by the probe as input, judges imaging quality through a deep convolutional network, generates pose parameters including position and direction, and prompts the operator to choose the probe's moving direction according to those parameters. Such a system enables a novice operator with no ultrasound examination experience to acquire multi-view standard sections of a transthoracic echocardiogram with the assistance of the deep learning system, thereby obtaining accurate ultrasound images of the cardiac anatomy and evaluating key cardiac parameters. However, the success rate of acquiring a certain common cardiac section with this method is only 58%, far below that of a professional physician. In addition, the method does not disclose its technical details, so its results cannot be reproduced for side-by-side comparison.
Disclosure of Invention
In order to solve the problems in the prior art, the invention provides the following technical scheme.
The first aspect of the invention provides a training method of an intelligent heart ultrasonic probe pose estimation model, which comprises the following steps:
acquiring training data, including a target heart ultrasonic image and corresponding probe pose information thereof, and an acquired heart ultrasonic image and corresponding probe pose information thereof;
encoding the target heart ultrasonic image into a target heart ultrasonic encoding vector and a target pose encoding vector, and encoding the acquired heart ultrasonic image into a heart ultrasonic encoding vector and a pose encoding vector;
decoding the heart ultrasonic coding vector and the target pose coding vector to obtain a decoded heart ultrasonic image;
calculating a first loss function by using probe pose information corresponding to the target heart ultrasonic image, probe pose information corresponding to the acquired heart ultrasonic image and pose coding vectors; calculating a second loss function by using the target cardiac ultrasound encoding vector and the cardiac ultrasound encoding vector; calculating a third loss function using the target cardiac ultrasound image and the decoded cardiac ultrasound image; calculating a total loss function using the first loss function, the second loss function, and the third loss function;
and optimizing parameters of the pose estimation model by reducing the total loss function to obtain the trained pose estimation model.
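The five steps above can be sketched end to end in a few lines. The sketch below is a minimal pure-Python illustration under stated assumptions: `enc`, `dec`, and `sq_err` are hand-made stand-ins for the patent's deep convolutional encoder/decoder, and squared-error norms are assumed for the three losses (the patent renders the exact formulas as images).

```python
# Toy sketch of one training iteration of the pose estimation model.
# enc() and dec() stand in for the deep convolutional encoder/decoder;
# all concrete forms here are illustrative assumptions.

def enc(image):
    # Encoder: image -> (cardiac ultrasound encoding vector, pose encoding vector).
    content = [sum(image) / len(image)]   # stub "content" code
    pose = [image[0] - image[-1]]         # stub "pose" code
    return content, pose

def dec(content, pose):
    # Decoder: (content code, target pose code) -> reconstructed image.
    return [content[0] + pose[0], content[0], content[0] - pose[0]]

def sq_err(a, b):
    # Assumed squared-error norm for all three losses.
    return sum((x - y) ** 2 for x, y in zip(a, b))

# Training sample: target image/pose and an acquired image/pose.
I_t, p_t = [1.0, 1.0, 1.0], [0.0]
I,   p   = [1.5, 1.0, 0.5], [0.8]

z_t, v_t = enc(I_t)      # target content code / target pose code
z,   v   = enc(I)        # acquired content code / pose code
I_dec    = dec(z, v_t)   # decode acquired content + target pose code

L1 = sq_err(v, [pi - pti for pi, pti in zip(p, p_t)])  # pose code vs. (p - p_t)
L2 = sq_err(z, z_t)                                    # content codes should match
L3 = sq_err(I_t, I_dec)                                # reconstruction error
total = L1 + L2 + L3
print(round(total, 2))
```

In an actual training loop, `total` would be reduced over the dataset by gradient-based optimization of the encoder and decoder parameters.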
Preferably, the calculating the first loss function by using the probe pose information corresponding to the target cardiac ultrasound image, the probe pose information corresponding to the acquired cardiac ultrasound image and the pose coding vector comprises:
L1 = || v - (p - p_t) ||,
wherein L1 is the first loss function; p is the probe pose information corresponding to the acquired cardiac ultrasound image; p_t is the probe pose information corresponding to the target cardiac ultrasound image; and v is the pose encoding vector.
Preferably, the calculating the second loss function using the target cardiac ultrasound encoding vector and the cardiac ultrasound encoding vector comprises:
L2 = || z - z_t ||,
wherein L2 is the second loss function; z_t is the target cardiac ultrasound encoding vector; and z is the cardiac ultrasound encoding vector.
Preferably, the calculating a third loss function using the target cardiac ultrasound image and the decoded cardiac ultrasound image includes:
L3 = || I_t - I_dec ||,
wherein L3 is the third loss function; I_t is the target cardiac ultrasound image; and I_dec is the decoded cardiac ultrasound image.
Preferably, the calculating the total loss function using the first loss function, the second loss function, and the third loss function includes:
L = L1 + L2 + L3,
wherein L is the total loss function; L1 is the first loss function; L2 is the second loss function; and L3 is the third loss function.
Preferably, the encoding and decoding are performed using a deep convolutional network.
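The patent does not disclose the network architecture, but the basic building block such convolutional encoders and decoders stack is the 2-D convolution. The following is a minimal pure-Python sketch of one "valid" convolution (cross-correlation) step; the function name, kernel, and image are illustrative.

```python
# One "valid" 2-D convolution (cross-correlation) step, the building
# block a deep convolutional encoder/decoder stacks many times.

def conv2d_valid(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            acc = 0.0
            for a in range(kh):
                for b in range(kw):
                    acc += image[i + a][j + b] * kernel[a][b]
            row.append(acc)
        out.append(row)
    return out

img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
k = [[1, 0],
     [0, -1]]  # simple diagonal-difference kernel

print(conv2d_valid(img, k))
```

A real encoder would interleave many such layers with nonlinearities and downsampling before producing the two encoding vectors.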
The second aspect of the invention provides a method for estimating the pose of an intelligent heart ultrasonic probe, which comprises the following steps:
inputting the currently acquired heart ultrasonic image into a trained pose estimation model to obtain an estimated pose of the intelligent heart ultrasonic probe;
the trained pose estimation model is obtained by training in advance by adopting the training method of the intelligent heart ultrasonic probe pose estimation model according to the first aspect.
Preferably, the intelligent cardiac ultrasound probe pose estimation method further comprises: using a reinforcement learning model, iteratively reducing the estimated pose to obtain positive feedback, so that the estimated pose iteratively approaches the target position.
The third aspect of the invention provides a training device for an intelligent heart ultrasonic probe pose estimation model, which comprises:
the training data acquisition module is used for acquiring training data, and comprises a target heart ultrasonic image and corresponding probe pose information thereof, and an acquired heart ultrasonic image and corresponding probe pose information thereof;
the encoding module is used for encoding the target heart ultrasonic image into a target heart ultrasonic encoding vector and a target pose encoding vector, and encoding the acquired heart ultrasonic image into a heart ultrasonic encoding vector and a pose encoding vector;
the decoding module is used for decoding the heart ultrasonic coding vector and the target pose coding vector to obtain a decoded heart ultrasonic image;
the loss function calculation module is used for calculating a first loss function by using probe pose information corresponding to the target heart ultrasonic image, probe pose information corresponding to the acquired heart ultrasonic image and pose coding vectors; calculating a second loss function by using the target cardiac ultrasound encoding vector and the cardiac ultrasound encoding vector; calculating a third loss function using the target cardiac ultrasound image and the decoded cardiac ultrasound image; calculating a total loss function using the first loss function, the second loss function, and the third loss function;
and the optimization module is used for optimizing parameters of the pose estimation model by reducing the total loss function to obtain the trained pose estimation model.
A fourth aspect of the present invention provides a memory storing a plurality of instructions for implementing the training method of the estimation model according to the first aspect and the pose estimation method according to the second aspect.
A fifth aspect of the present invention provides an electronic device, which is characterized by comprising a processor and a memory connected to the processor, wherein the memory stores a plurality of instructions, and the instructions can be loaded and executed by the processor, so that the processor can execute the training method of the estimation model according to the first aspect and the pose estimation method according to the second aspect.
The beneficial effects of the invention are as follows: with the technical scheme provided by the invention, a pose estimation model of the intelligent cardiac ultrasound probe can be obtained through training. Using this model, the pose parameters of the probe can be estimated more accurately from the currently acquired ultrasound image, guiding the moving direction of the ultrasound probe so that a clearer and more accurate echocardiographic section is acquired. This solves the problem of insufficient accuracy of existing artificial-intelligence-assisted ultrasound scanning methods.
Drawings
FIG. 1 is a schematic flow chart of a training method of an intelligent heart ultrasonic probe pose estimation model;
FIG. 2 is a schematic flow chart of an example training process according to the present invention;
fig. 3 is a functional structure schematic diagram of a training device of the intelligent heart ultrasonic probe pose estimation model.
Detailed Description
In order to better understand the above technical solutions, the following detailed description will be given with reference to the accompanying drawings and specific embodiments.
The method provided by the invention can be implemented in a terminal environment, and the terminal can comprise one or more of the following components: processor, memory and display screen. Wherein the memory stores at least one instruction that is loaded and executed by the processor to implement the method described in the embodiments below.
The processor may include one or more processing cores. The processor connects the various parts of the terminal using various interfaces and lines, and performs the terminal's functions and processes data by running or executing the instructions, programs, code sets, or instruction sets stored in the memory and by invoking data stored in the memory.
The memory may include random access memory (RAM) or read-only memory (ROM). The memory may be used to store instructions, programs, code, code sets, or instruction sets.
The display screen is used for displaying a user interface of each application program.
In addition, it will be appreciated by those skilled in the art that the structure of the terminal described above is not limiting and that the terminal may include more or fewer components, or may combine certain components, or a different arrangement of components. For example, the terminal further includes components such as a radio frequency circuit, an input unit, a sensor, an audio circuit, a power supply, and the like, which are not described herein.
Example 1
As shown in fig. 1, an embodiment of the present invention provides a training method for an intelligent cardiac ultrasound probe pose estimation model, including:
s101, acquiring training data, wherein the training data comprises a target heart ultrasonic image and corresponding probe pose information thereof, and an acquired heart ultrasonic image and corresponding probe pose information thereof;
s102, encoding the target heart ultrasonic image into a target heart ultrasonic encoding vector and a target pose encoding vector, and encoding the acquired heart ultrasonic image into a heart ultrasonic encoding vector and a pose encoding vector;
s103, decoding the heart ultrasonic coding vector and the target pose coding vector to obtain a decoded heart ultrasonic image;
s104, calculating a first loss function by using probe pose information corresponding to the target heart ultrasonic image, probe pose information corresponding to the acquired heart ultrasonic image and pose coding vectors; calculating a second loss function by using the target cardiac ultrasound encoding vector and the cardiac ultrasound encoding vector; calculating a third loss function using the target cardiac ultrasound image and the decoded cardiac ultrasound image; calculating a total loss function using the first loss function, the second loss function, and the third loss function;
s105, optimizing parameters of the pose estimation model by reducing the total loss function, and obtaining the trained pose estimation model.
In a preferred embodiment of the present invention, the calculating the first loss function using the probe pose information corresponding to the target cardiac ultrasound image, the probe pose information corresponding to the acquired cardiac ultrasound image, and the pose coding vector includes:
L1 = || v - (p - p_t) ||,
wherein L1 is the first loss function; p is the probe pose information corresponding to the acquired cardiac ultrasound image; p_t is the probe pose information corresponding to the target cardiac ultrasound image; and v is the pose encoding vector.
Calculating a second loss function using the target cardiac ultrasound encoding vector and the cardiac ultrasound encoding vector comprises:
L2 = || z - z_t ||,
wherein L2 is the second loss function; z_t is the target cardiac ultrasound encoding vector; and z is the cardiac ultrasound encoding vector.
Calculating a third loss function using the target cardiac ultrasound image and the decoded cardiac ultrasound image includes:
L3 = || I_t - I_dec ||,
wherein L3 is the third loss function; I_t is the target cardiac ultrasound image; and I_dec is the decoded cardiac ultrasound image.
In another preferred embodiment of the present invention, the calculating the total loss function using the first loss function, the second loss function, and the third loss function includes:
L = L1 + L2 + L3,
wherein L is the total loss function; L1 is the first loss function; L2 is the second loss function; and L3 is the third loss function.
The invention can adopt a deep convolutional network for both encoding and decoding.
The main purpose of the invention is to establish, during training, the mapping between the ultrasound probe pose and the ultrasound image obtained by scanning at that pose; in use, the trained mapping model converts an ultrasound image into a prediction of the probe's pose parameters. The better the training, the higher the model's prediction accuracy, and the more accurately the moving direction of the ultrasound probe can be guided.
To achieve the above purpose, the invention designs a model training method, drawing on the following line of work during the design process: recent point cloud completion methods achieve good results by means of deep convolutional networks, using techniques such as information decoupling and shape restoration. In these methods, a complete point cloud is partially occluded and then encoded by an encoder (a deep convolutional network) into two outputs: a complete-shape code and an occlusion code. The occlusion code has the same dimensionality as the complete point cloud, and the value of each of its dimensions lies between 0 and 1, representing the degree to which that dimension is occluded. Given inputs with different degrees of occlusion, the complete-shape code in the output is kept unchanged, so that the changes caused by occluding the point cloud are converted into changes of the occlusion code, completing the decoupling of information. The complete-shape code is then combined with the code for the unoccluded case (every dimension of the occlusion code equal to 1) and restored to generate an unoccluded point cloud; this double constraint strengthens the correspondence between the occlusion code and the occlusion of the point cloud shape, and reduces the correlation between the decoupled pieces of information.
Unlike the point cloud completion task above, the present invention aims to predict the pose information of the probe from the currently acquired cardiac ultrasound image, and ultrasound images acquired by the probe at different poses cannot be modeled as an occlusion percentage. The invention therefore adopts a design flow different from the point cloud completion method above:
1. information decoupling
A deep convolutional network is designed whose input is an ultrasound image (of a specific size) and whose output is two vectors; it is named the encoder. The encoder encodes an ultrasound image into a cardiac ultrasound encoding vector and a pose encoding vector, where the pose encoding vector covers attributes such as direction, angle, and distance. The acquired data is divided into two types: target ultrasound images with their corresponding target pose encoding vectors (serving as references), and arbitrary ultrasound images with their corresponding pose encoding vectors. During training, to strengthen the correspondence between ultrasound images and pose encoding vectors, the cardiac ultrasound encoding vector generated from any ultrasound image is kept highly similar to the cardiac ultrasound encoding vector generated from the target ultrasound image, while the pose encoding vector generated from any ultrasound image is kept similar to the pose encoding vector recorded for that image during data acquisition.
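The decoupling goal described above — a content code that stays stable across poses while the pose code tracks the pose — can be illustrated with a hand-made stub encoder. Every concrete choice below (the stub `enc`, the example "images") is an assumption for illustration, not the patent's network.

```python
# Illustration of the decoupling goal: for images of the same heart taken
# at different probe poses, the "content" code should stay (nearly) the
# same while the pose code varies. enc() is a hand-made stand-in.

def enc(image):
    # Stub encoder: mean brightness as "content", left-right tilt as "pose".
    content = sum(image) / len(image)
    pose = image[0] - image[-1]
    return content, pose

# Two acquisitions of the same target at different poses: same mean
# brightness, different tilt.
img_a = [1.2, 1.0, 0.8]
img_b = [0.9, 1.0, 1.1]

c_a, v_a = enc(img_a)
c_b, v_b = enc(img_b)

print(abs(c_a - c_b) < 1e-9)  # content codes agree
print(v_a != v_b)             # pose codes differ
```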
2. Restoring a target image
A deep convolutional network is designed whose inputs are two vectors — the target pose encoding vector and the cardiac ultrasound encoding vector — and whose output is an ultrasound image (of a specific size); it is named the decoder and can be understood as the inverse of the encoder. The purpose of introducing the decoding module is to ensure that the cardiac ultrasound encoding vector generated from any ultrasound image contains enough information to restore the target cardiac ultrasound image when combined with the target pose encoding vector, thereby further reducing the feature loss during model training and improving model accuracy.
3. Reinforcement learning
A reinforcement learning module is designed whose input is the pose encoding vector; positive feedback is obtained by reducing the value of this vector, and negative feedback otherwise. Through this mechanism, the probe is guided to move toward poses close to the target, finally reaching the target area and acquiring an ideal cardiac ultrasound image.
Using the above design ideas, the training method of the intelligent cardiac ultrasound probe pose estimation model is obtained. The specific training process may be as shown in fig. 2:
each of the cardiac surfaces has an ultrasound image of the target heart
Figure SMS_39
The corresponding data is marked as probe pose information
Figure SMS_40
. DataDuring acquisition, the ultrasound image of the heart acquired by the probe is called +.>
Figure SMS_41
,/>
Figure SMS_42
The corresponding probe pose information is +.>
Figure SMS_43
Encoder with a plurality of sensors
Figure SMS_57
The input of (2) is +.>
Figure SMS_44
The output is the ultrasonic coding vector of heart +.>
Figure SMS_55
Pose coding vector +.>
Figure SMS_47
。/>
Figure SMS_56
Corresponding real flag value +.>
Figure SMS_50
The calculation method of (2) is->
Figure SMS_54
And->
Figure SMS_46
Difference of->
Figure SMS_53
。/>
Figure SMS_45
The attribute value of (2) increases as the difference between the acquired heart ultrasound image and the target heart ultrasound image increases in the attribute; when->
Figure SMS_52
For encoder->
Figure SMS_51
The output of which is the target cardiac ultrasound encoding vector +.>
Figure SMS_58
Target pose coding vector +.>
Figure SMS_49
Hope->
Figure SMS_59
(i.e. constraint 1:>
Figure SMS_48
)。
will then
Figure SMS_64
、/>
Figure SMS_61
Input decoder->
Figure SMS_74
The output is decoded picture +.>
Figure SMS_62
. Then will->
Figure SMS_75
And->
Figure SMS_65
For comparison, expect->
Figure SMS_72
=/>
Figure SMS_73
(i.e. constraint 2:>
Figure SMS_77
=/>
Figure SMS_60
). In actual calculation, the formula +.>
Figure SMS_70
-/>
Figure SMS_63
Calculating to obtain a third loss function->
Figure SMS_68
Using the formula/>
Figure SMS_66
Calculating to obtain a second loss function->
Figure SMS_69
And utilize the formula ∈ ->
Figure SMS_67
Calculating to obtain a first loss function->
Figure SMS_71
. Finally searching for function for effectively reducing total loss through random gradient algorithm
Figure SMS_76
To obtain an ideal model.
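The final step — searching, by a stochastic gradient algorithm, for parameters that effectively reduce the total loss — can be sketched on a toy one-parameter model. The linear "encoder", the data, and keeping only the first loss term are all illustrative assumptions made for brevity.

```python
# Gradient descent on a toy total loss, standing in for the stochastic
# gradient optimization of the encoder/decoder parameters. The single
# parameter w scales a 1-D "pose code"; names are illustrative.

def total_loss(w, samples):
    # Each sample: (feature x, ground-truth pose offset p - p_t).
    # Pose code v = w * x; L2 and L3 are held at zero here for brevity,
    # so L = L1 = sum of (v - (p - p_t))^2 over the samples.
    return sum((w * x - d) ** 2 for x, d in samples)

def grad(w, samples):
    # Analytic gradient of total_loss with respect to w.
    return sum(2 * x * (w * x - d) for x, d in samples)

samples = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # consistent with w = 2
w, lr = 0.0, 0.02
for _ in range(200):
    w -= lr * grad(w, samples)

print(round(w, 3))
```

The parameter converges to the value that drives the toy loss to zero, mirroring how the real model's parameters are tuned until the total loss stops decreasing.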
Example two
The embodiment of the invention provides an intelligent heart ultrasonic probe pose estimation method, which comprises the following steps:
inputting the currently acquired heart ultrasonic image into a trained pose estimation model to obtain an estimated pose of the intelligent heart ultrasonic probe;
the trained pose estimation model is obtained by training in advance by adopting the training method of the intelligent heart ultrasonic probe pose estimation model according to the first embodiment.
The method for estimating the pose of the intelligent heart ultrasonic probe provided by the invention can further comprise the following steps: and iteratively reducing the estimated pose by using the reinforcement learning model to obtain positive feedback, so that the estimated pose is iteratively close to the target position.
Example III
As shown in fig. 3, an embodiment of the present invention provides a training device for an intelligent cardiac ultrasound probe pose estimation model, including:
the training data acquisition module 301 is configured to acquire training data, including a target cardiac ultrasound image and corresponding probe pose information thereof, and an acquired cardiac ultrasound image and corresponding probe pose information thereof;
the encoding module 302 is configured to encode the target cardiac ultrasound image into a target cardiac ultrasound encoding vector and a target pose encoding vector, and encode the acquired cardiac ultrasound image into a cardiac ultrasound encoding vector and a pose encoding vector;
the decoding module 303 is configured to decode the cardiac ultrasound encoding vector and the target pose encoding vector to obtain a decoded cardiac ultrasound image;
the loss function calculation module 304 is configured to calculate a first loss function using probe pose information corresponding to the target cardiac ultrasound image, probe pose information corresponding to the acquired cardiac ultrasound image, and pose coding vector; calculating a second loss function by using the target cardiac ultrasound encoding vector and the cardiac ultrasound encoding vector; calculating a third loss function using the target cardiac ultrasound image and the decoded cardiac ultrasound image; calculating a total loss function using the first loss function, the second loss function, and the third loss function;
and the optimizing module 305 is configured to optimize parameters of the pose estimation model by reducing the total loss function, so as to obtain a trained pose estimation model.
Further, in the loss function calculation module, the calculating the first loss function by using probe pose information corresponding to the target cardiac ultrasound image, probe pose information corresponding to the acquired cardiac ultrasound image and pose coding vector includes:
L1 = || v - (p - p_t) ||,
wherein L1 is the first loss function; p is the probe pose information corresponding to the acquired cardiac ultrasound image; p_t is the probe pose information corresponding to the target cardiac ultrasound image; and v is the pose encoding vector.
Further, in the loss function calculation module, the calculating a second loss function using the target cardiac ultrasound encoding vector and the cardiac ultrasound encoding vector includes:
L2 = || z - z_t ||,
wherein L2 is the second loss function; z_t is the target cardiac ultrasound encoding vector; and z is the cardiac ultrasound encoding vector.
Further, in the loss function calculation module, the calculating a third loss function using the target cardiac ultrasound image and the decoded cardiac ultrasound image includes:
L3 = || I_t - I_dec ||,
wherein L3 is the third loss function; I_t is the target cardiac ultrasound image; and I_dec is the decoded cardiac ultrasound image.
In the optimization module, calculating the total loss function using the first loss function, the second loss function, and the third loss function includes:

[Equation rendered as an image in the original: Figure SMS_92]

where Figure SMS_93 denotes the total loss function; Figure SMS_94 denotes the first loss function; Figure SMS_95 denotes the second loss function; and Figure SMS_96 denotes the third loss function.
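The exact combination rule is shown only as an image in the source; a common choice, sketched below as an assumption, is a (possibly weighted) sum of the three components. The toy gradient-descent loop afterwards illustrates the optimization module's role of reducing the total loss step by step:

```python
import numpy as np

def total_loss(l1, l2, l3, w=(1.0, 1.0, 1.0)):
    """Hypothetical total loss: a (possibly weighted) sum of the three
    component losses; the patent's exact formula is an image."""
    return w[0] * l1 + w[1] * l2 + w[2] * l3

# The optimization module reduces the total loss; a one-parameter
# gradient-descent toy shows the loss decreasing over iterations.
theta = 5.0
losses = []
for _ in range(10):
    loss = theta ** 2          # stand-in differentiable total loss
    grad = 2 * theta           # its gradient w.r.t. theta
    theta -= 0.1 * grad        # gradient step
    losses.append(loss)
print(losses[0] > losses[-1])  # → True
```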
In the training device for the intelligent heart ultrasonic probe pose estimation model provided by the invention, a deep convolutional network may be adopted to perform the encoding and the decoding.
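The patent does not disclose the network architecture beyond naming a deep convolutional network. The sketch below is a minimal NumPy illustration (random weights, illustrative shapes, all names hypothetical) of how stacked stride-2 convolutions can encode an ultrasound frame into a compact vector:

```python
import numpy as np

def conv2d(x, w, stride=2):
    """Naive valid 2-D convolution with stride (single channel)."""
    k = w.shape[0]
    out_h = (x.shape[0] - k) // stride + 1
    out_w = (x.shape[1] - k) // stride + 1
    out = np.empty((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            patch = x[i * stride:i * stride + k, j * stride:j * stride + k]
            out[i, j] = np.sum(patch * w)
    return out

def encode(image, kernels):
    """Stack stride-2 convolutions with ReLU, then flatten to a vector."""
    x = image
    for w in kernels:
        x = np.maximum(conv2d(x, w), 0.0)  # ReLU non-linearity
    return x.ravel()

rng = np.random.default_rng(0)
image = rng.random((64, 64))                # stand-in ultrasound frame
kernels = [rng.standard_normal((3, 3)) * 0.1 for _ in range(4)]
code = encode(image, kernels)
print(code.shape)  # spatial size 64 -> 31 -> 15 -> 7 -> 3, flattened
```

A matching decoder would mirror this with transposed (up-sampling) convolutions to reconstruct the image from the encoding vector.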
The embodiment of the invention also provides a memory storing a plurality of instructions for implementing the training method of the pose estimation model according to the first embodiment and the pose estimation method according to the second embodiment.
The embodiment of the invention also provides an electronic device comprising a processor and a memory connected with the processor, wherein the memory stores a plurality of instructions that can be loaded and executed by the processor, so that the processor can perform the training method of the pose estimation model according to the first embodiment and the pose estimation method according to the second embodiment.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following claims be interpreted as including the preferred embodiments and all such alterations and modifications as fall within the scope of the invention. It will be apparent to those skilled in the art that various modifications and variations can be made to the present invention without departing from the spirit or scope of the invention. Thus, it is intended that the present invention also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.

Claims (8)

1. A training method of an intelligent heart ultrasonic probe pose estimation model is characterized by comprising the following steps:
acquiring training data, including a target heart ultrasonic image and corresponding probe pose information thereof, and an acquired heart ultrasonic image and corresponding probe pose information thereof;
encoding the target heart ultrasonic image into a target heart ultrasonic encoding vector and a target pose encoding vector, and encoding the acquired heart ultrasonic image into a heart ultrasonic encoding vector and a pose encoding vector;
decoding the heart ultrasonic coding vector and the target pose coding vector to obtain a decoded heart ultrasonic image;
calculating a first loss function by using probe pose information corresponding to the target heart ultrasonic image, probe pose information corresponding to the acquired heart ultrasonic image and pose coding vectors; calculating a second loss function by using the target cardiac ultrasound encoding vector and the cardiac ultrasound encoding vector; calculating a third loss function using the target cardiac ultrasound image and the decoded cardiac ultrasound image; calculating a total loss function using the first loss function, the second loss function, and the third loss function;
optimizing parameters of the pose estimation model by reducing the total loss function to obtain a trained pose estimation model;
the first loss function is calculated by using probe pose information corresponding to the target heart ultrasonic image, probe pose information corresponding to the acquired heart ultrasonic image and pose coding vectorsComprising the following steps:
Figure QLYQS_1
;/>
Figure QLYQS_2
wherein,,
Figure QLYQS_3
as a first loss function; />
Figure QLYQS_4
Probe pose information corresponding to the acquired heart ultrasonic image; />
Figure QLYQS_5
Probe pose information corresponding to the target heart ultrasonic image; />
Figure QLYQS_6
Encoding a vector for the pose; />
Figure QLYQS_7
Is->
Figure QLYQS_8
A corresponding true mark value;
the calculating a second loss function using the target cardiac ultrasound encoding vector and the cardiac ultrasound encoding vector comprises:
Figure QLYQS_9
wherein,,
Figure QLYQS_10
is a second loss function; />
Figure QLYQS_11
Encoding vectors for the target heart ultrasound; />
Figure QLYQS_12
Encoding vectors for cardiac ultrasound;
the calculating a third loss function using the target cardiac ultrasound image and the decoded cardiac ultrasound image comprises:
Figure QLYQS_13
-/>
Figure QLYQS_14
wherein,,
Figure QLYQS_15
is a third loss function; />
Figure QLYQS_16
An ultrasound image of the target heart; />
Figure QLYQS_17
To decode the cardiac ultrasound image.
2. The training method of the intelligent heart ultrasonic probe pose estimation model according to claim 1, wherein calculating the total loss function using the first loss function, the second loss function, and the third loss function comprises:

[Equation rendered as an image in the original: Figure QLYQS_18]

where Figure QLYQS_19 denotes the total loss function; Figure QLYQS_20 denotes the first loss function; Figure QLYQS_21 denotes the second loss function; and Figure QLYQS_22 denotes the third loss function.
3. The training method of the intelligent heart ultrasonic probe pose estimation model according to claim 1, wherein the encoding and the decoding are performed using a deep convolutional network.
4. An intelligent heart ultrasonic probe pose estimation method is characterized by comprising the following steps:
inputting the currently acquired heart ultrasonic image into a trained pose estimation model to obtain an estimated pose of the intelligent heart ultrasonic probe;
the trained pose estimation model is obtained by training in advance by adopting the training method of the intelligent heart ultrasonic probe pose estimation model according to any one of claims 1-3.
5. The intelligent heart ultrasonic probe pose estimation method of claim 4, further comprising: iteratively adjusting the estimated pose by using a reinforcement learning model to obtain positive feedback, so that the estimated pose iteratively approaches the target position.
6. A training device for an intelligent heart ultrasonic probe pose estimation model, characterized by comprising:
the training data acquisition module is used for acquiring training data, and comprises a target heart ultrasonic image and corresponding probe pose information thereof, and an acquired heart ultrasonic image and corresponding probe pose information thereof;
the encoding module is used for encoding the target heart ultrasonic image into a target heart ultrasonic encoding vector and a target pose encoding vector, and encoding the acquired heart ultrasonic image into a heart ultrasonic encoding vector and a pose encoding vector;
the decoding module is used for decoding the heart ultrasonic coding vector and the target pose coding vector to obtain a decoded heart ultrasonic image;
the loss function calculation module is used for calculating a first loss function by using probe pose information corresponding to the target heart ultrasonic image, probe pose information corresponding to the acquired heart ultrasonic image and pose coding vectors; calculating a second loss function by using the target cardiac ultrasound encoding vector and the cardiac ultrasound encoding vector; calculating a third loss function using the target cardiac ultrasound image and the decoded cardiac ultrasound image; calculating a total loss function using the first loss function, the second loss function, and the third loss function;
the calculating the first loss function by using probe pose information corresponding to the target heart ultrasonic image, probe pose information corresponding to the acquired heart ultrasonic image and pose coding vectors comprises the following steps:
Figure QLYQS_23
;/>
Figure QLYQS_24
wherein,,
Figure QLYQS_25
as a first loss function; />
Figure QLYQS_26
Probe pose information corresponding to the acquired heart ultrasonic image; />
Figure QLYQS_27
Probe pose information corresponding to the target heart ultrasonic image; />
Figure QLYQS_28
Encoding a vector for the pose; />
Figure QLYQS_29
Is->
Figure QLYQS_30
A corresponding true mark value;
the calculating a second loss function using the target cardiac ultrasound encoding vector and the cardiac ultrasound encoding vector comprises:
Figure QLYQS_31
wherein,,
Figure QLYQS_32
is a second loss function; />
Figure QLYQS_33
Encoding vectors for the target heart ultrasound; />
Figure QLYQS_34
Encoding vectors for cardiac ultrasound;
the calculating a third loss function using the target cardiac ultrasound image and the decoded cardiac ultrasound image comprises:
Figure QLYQS_35
-/>
Figure QLYQS_36
wherein,,
Figure QLYQS_37
is a third loss function; />
Figure QLYQS_38
An ultrasound image of the target heart; />
Figure QLYQS_39
To decode the cardiac ultrasound image;
and the optimization module is used for optimizing parameters of the pose estimation model by reducing the total loss function to obtain the trained pose estimation model.
7. A memory, wherein a plurality of instructions are stored for implementing the training method of the pose estimation model according to any one of claims 1-3 or the pose estimation method according to any one of claims 4-5.
8. An electronic device, comprising a processor and a memory coupled to the processor, the memory storing a plurality of instructions that are loadable and executable by the processor to enable the processor to perform the training method of the pose estimation model according to any one of claims 1-3 or the pose estimation method according to any one of claims 4-5.
CN202310349908.2A 2023-04-04 2023-04-04 Intelligent heart ultrasonic probe pose estimation model training method and pose estimation method Active CN116152610B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310349908.2A CN116152610B (en) 2023-04-04 2023-04-04 Intelligent heart ultrasonic probe pose estimation model training method and pose estimation method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310349908.2A CN116152610B (en) 2023-04-04 2023-04-04 Intelligent heart ultrasonic probe pose estimation model training method and pose estimation method

Publications (2)

Publication Number Publication Date
CN116152610A CN116152610A (en) 2023-05-23
CN116152610B (en) 2023-06-23

Family

ID=86340957

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310349908.2A Active CN116152610B (en) 2023-04-04 2023-04-04 Intelligent heart ultrasonic probe pose estimation model training method and pose estimation method

Country Status (1)

Country Link
CN (1) CN116152610B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112399828A (en) * 2018-05-15 2021-02-23 纽约大学 System and method for capture orientation of ultrasound images
CN113160265A (en) * 2021-05-13 2021-07-23 四川大学华西医院 Construction method of prediction image for brain corpus callosum segmentation for corpus callosum state evaluation
CN115511703A (en) * 2022-10-31 2022-12-23 北京安德医智科技有限公司 Method, device, equipment and medium for generating two-dimensional heart ultrasonic sectional image
CN115546287A (en) * 2022-09-28 2022-12-30 香港中文大学深圳研究院 Method, system, terminal device and medium for processing transesophageal echocardiogram
WO2023282743A1 (en) * 2021-07-06 2023-01-12 Corbotics B.V. Robotized imaging system
CN115615427A (en) * 2022-09-09 2023-01-17 北京百度网讯科技有限公司 Ultrasonic probe navigation method, device, equipment and medium
CN115633216A (en) * 2022-09-05 2023-01-20 北京智源人工智能研究院 Training method of time domain motion consistency video generation model and video generation method

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112399828A (en) * 2018-05-15 2021-02-23 纽约大学 System and method for capture orientation of ultrasound images
CN113160265A (en) * 2021-05-13 2021-07-23 四川大学华西医院 Construction method of prediction image for brain corpus callosum segmentation for corpus callosum state evaluation
WO2023282743A1 (en) * 2021-07-06 2023-01-12 Corbotics B.V. Robotized imaging system
CN115633216A (en) * 2022-09-05 2023-01-20 北京智源人工智能研究院 Training method of time domain motion consistency video generation model and video generation method
CN115615427A (en) * 2022-09-09 2023-01-17 北京百度网讯科技有限公司 Ultrasonic probe navigation method, device, equipment and medium
CN115546287A (en) * 2022-09-28 2022-12-30 香港中文大学深圳研究院 Method, system, terminal device and medium for processing transesophageal echocardiogram
CN115511703A (en) * 2022-10-31 2022-12-23 北京安德医智科技有限公司 Method, device, equipment and medium for generating two-dimensional heart ultrasonic sectional image

Also Published As

Publication number Publication date
CN116152610A (en) 2023-05-23

Similar Documents

Publication Publication Date Title
CN111012377B (en) Echocardiogram heart parameter calculation and myocardial strain measurement method and device
CN110009669B (en) 3D/2D medical image registration method based on deep reinforcement learning
CN109697741B (en) PET image reconstruction method, device, equipment and medium
US20220327701A1 (en) Systems and methods for medical acquisition processing and machine learning for anatomical assessment
WO2020134769A1 (en) Image processing method and apparatus, electronic device, and computer readable storage medium
CN114119549B (en) Multi-mode medical image three-dimensional point cloud registration optimization method
CN104584074B (en) Coupled segmentation in 3D conventional and contrast-enhanced ultrasound images
CN108701354A (en) Identify the method and system of area-of-interest profile in ultrasonoscopy
CN103914823B (en) The method of the quick exact non-linear registration solid medical image based on rarefaction representation
CN117078692B (en) Medical ultrasonic image segmentation method and system based on self-adaptive feature fusion
CN112151169A (en) Ultrasonic robot autonomous scanning method and system based on human-simulated operation
CN116958217B (en) MRI and CT multi-mode 3D automatic registration method and device
CN112270993A (en) Ultrasonic robot online decision-making method and system with diagnosis result as feedback
CN108171737B (en) Medical image registration method and system with incompressible organ
CN115615427A (en) Ultrasonic probe navigation method, device, equipment and medium
CN110197472A (en) A kind of method and system for ultrasonic contrast image stabilization quantitative analysis
Guo et al. Automatic segmentation of a fetal echocardiogram using modified active appearance models and sparse representation
CN116152610B (en) Intelligent heart ultrasonic probe pose estimation model training method and pose estimation method
CN111383236B (en) Method, apparatus and computer-readable storage medium for labeling regions of interest
CN114787867A (en) Organ deformation compensation for medical image registration
CN115969414A (en) Method and system for using analytical aids during ultrasound imaging
CN114010227B (en) Right ventricle characteristic information identification method and device
CN114332271A (en) Dynamic parameter image synthesis method and system based on static PET image
CN110189369B (en) Ultrasonic and magnetic resonance image fusion registration method and terminal equipment
CN111932443A (en) Method for improving registration accuracy of ultrasound and magnetic resonance by combining multi-scale expression with contrast agent

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant