CN112370161A - Operation navigation method and medium based on ultrasonic image characteristic plane detection

Operation navigation method and medium based on ultrasonic image characteristic plane detection

Info

Publication number
CN112370161A
Authority
CN
China
Prior art keywords
dimensional
preoperative
agent
image
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011085198.XA
Other languages
Chinese (zh)
Other versions
CN112370161B (en)
Inventor
滕皋军
陆建
温铁祥
王澄
朱海东
张毅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhuhai Hengle Medical Technology Co ltd
Original Assignee
Zhuhai Hengle Medical Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhuhai Hengle Medical Technology Co Ltd filed Critical Zhuhai Hengle Medical Technology Co Ltd
Priority to CN202011085198.XA priority Critical patent/CN112370161B/en
Publication of CN112370161A publication Critical patent/CN112370161A/en
Application granted granted Critical
Publication of CN112370161B publication Critical patent/CN112370161B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/20 Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/20 Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • A61B2034/2046 Tracking techniques
    • A61B2034/2065 Tracking using image or pattern recognition
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/20 Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • A61B2034/2068 Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis using pointers, e.g. pointers having reference marks for determining coordinates of body points
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10072 Tomographic images
    • G06T2207/10081 Computed x-ray tomography [CT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10132 Ultrasound image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30056 Liver; Hepatic

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Evolutionary Computation (AREA)
  • Surgery (AREA)
  • Radiology & Medical Imaging (AREA)
  • Robotics (AREA)
  • Quality & Reliability (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Ultrasonic Diagnosis Equipment (AREA)

Abstract

The invention relates to a surgical navigation method and medium based on ultrasonic image feature plane detection, comprising the following steps: creating a deep convolutional neural network, preprocessing preoperative enhanced three-dimensional CT image data, and performing multi-point localization on the preprocessed data through the deep convolutional neural network to obtain a multi-point localization model; constructing a detection model based on a DQN framework and the multi-point localization model, processing any preoperative three-dimensional CT image and intraoperative two-dimensional ultrasonic image through the detection model to obtain a three-dimensional spatial transformation matrix for aligning the preoperative three-dimensional CT image with the intraoperative two-dimensional ultrasonic image, and performing surgical navigation through the transformation matrix. The invention has the beneficial effects that it realizes intelligent localization of the ultrasound feature standard plane in preoperative three-dimensional CT data and avoids the manual, time-consuming and labor-intensive localization of key points during the operation.

Description

Operation navigation method and medium based on ultrasonic image characteristic plane detection
Technical Field
The invention relates to the field of computer technology, and in particular to a surgical navigation method and medium based on ultrasonic image feature plane detection.
Background
Ultrasound or CT images are commonly used clinically to guide the path of puncture surgery. Navigation systems for interventional therapy generally adopt four kinds of image guidance:
1. CT guidance: enhanced CT images clearly display the course of blood vessels, the size and position of the tumor region, and the surrounding tissue structure. However, the position of the surgical instrument inside the patient cannot be tracked in real time, so the whole interventional operation depends on the experience of the interventional physician; multiple CT scans are required during the procedure, and the position of the puncture needle must be adjusted repeatedly. The operation generally takes a long time, and complications such as bleeding occur easily;
2. MRI guidance: the tumor is displayed clearly, but the method is unsuitable for patients carrying magnetic materials, requires electromagnetically compatible surgical instruments, and is expensive;
3. Ultrasound guidance: the position of the surgical instrument can be tracked in real time and the cost is low, but the imaging quality is poor and imaging is generally two-dimensional;
4. Multi-modal image fusion guidance: it combines the advantages of images of different modalities, achieves high accuracy, displays images at higher resolution, and guides more accurately. Clinical practice shows that intraoperative two-dimensional ultrasound and preoperative three-dimensional CT complement each other's imaging advantages in interventional therapy fields such as tumor ablation and thoraco-abdominal puncture biopsy.
At present, the most accurate and widely used alignment method in navigation systems is marker-based surgical registration.
The prior art has the following defects: the method relies heavily on the experience of the interventional physician, and the process is labor-intensive and time-consuming, seriously hindering the implementation and popularization of interventional techniques; markers slip easily during the operation, reducing registration accuracy; and if a marker moves, it must be re-attached and the image scanning process repeated, which is cumbersome.
Disclosure of Invention
The invention aims to solve at least one of the technical problems in the prior art by providing a surgical navigation method and medium based on ultrasonic image feature plane detection, realizing intelligent localization of the ultrasound feature standard plane in preoperative three-dimensional CT data and avoiding the manual, time-consuming and labor-intensive localization of key points during the operation.
The technical scheme of the invention comprises a surgical navigation method based on ultrasonic image feature plane detection, characterized by comprising the following steps: training on preoperative images: establishing a deep convolutional neural network, preprocessing preoperative enhanced three-dimensional CT image data, and performing multi-point localization on the preprocessed data through the deep convolutional neural network to obtain a multi-point localization model; and aligning the ultrasonic images: constructing a detection model based on a DQN framework and the multi-point localization model, processing any input preoperative three-dimensional CT image and intraoperative two-dimensional ultrasonic image through the detection model to obtain a three-dimensional spatial transformation matrix for aligning the preoperative three-dimensional CT image with the intraoperative two-dimensional ultrasonic image, and performing surgical navigation through the transformation matrix.
The surgical navigation method based on ultrasonic image feature plane detection, wherein the preprocessing comprises: deleting enhanced three-dimensional CT image data that does not include the surgical object, and resampling the several preoperative enhanced three-dimensional CT data sets to a specified resolution and labeling them manually.
According to the surgical navigation method based on ultrasonic image feature plane detection, the ultrasonic image alignment comprises: constructing a multi-point joint detection network model, using 3 agents to jointly learn the features of 3 key points, and training each agent to obtain a model for detecting its corresponding key point; constructing a learning and exploration environment for the agents, with each input three-dimensional CT image serving as the activity environment of the agents; and configuring the action space of each agent, the action space comprising six actions: up, down, left, right, front and back, wherein 4 convolutional layers and 3 max-pooling layers of the network model are shared by the 3 agents, and 3 parallel 3-layer fully connected branches serve as the part specific to each agent.
According to the surgical navigation method based on ultrasonic image feature plane detection, the ultrasonic image alignment further comprises: a policy of the agent, whereby actions are selected according to the policy from a random starting point in the three-dimensional CT image data; a reward function, whereby the agent starts from its current position P_t, samples image data centered on P_t as its current state S_t, selects an action a_t according to the policy π, and enters the next position P_{t+1}, the state of the agent at this position being S_{t+1}; the environment feeds back the reward
R_{t+1} = |P_t - P_g|² - |P_{t+1} - P_g|²
to the corresponding agent, where P_g is the target position; and a termination state, which the agent enters when the distance between its current position and the detection target is judged to be less than 1, or when the agent has taken more than 1500 steps in the current environment.
The surgical navigation method based on ultrasonic image feature plane detection, wherein the method further comprises: if the same position occurs 4 or more times among the agent's past 20 historical positions, the agent enters the next state; when all scales have entered the termination state, a new input data set is sampled.
According to the surgical navigation method based on ultrasonic image feature plane detection, the policy is configured as a decaying ε-greedy policy.
The surgical navigation method based on ultrasonic image feature plane detection, wherein the method further comprises: establishing an experience pool, placing the states, actions, rewards and termination flags recorded before each agent stops exploring in its environment into the experience pool, and, at the start of training, randomly selecting a group of these tuples from the experience pool for training, the network being trained until the cumulative reward is maximized.
The present invention also includes a computer-readable storage medium storing a computer program which, when executed by a processor, implements any of the above method steps.
The invention has the beneficial effects that it realizes intelligent localization of the ultrasound feature standard plane in preoperative three-dimensional CT data and avoids the manual, time-consuming and labor-intensive localization of key points during the operation.
Drawings
The invention is further described below with reference to the accompanying drawings and embodiments.
FIG. 1 is a flow chart of a prior art process;
FIG. 2 is a flow chart according to an embodiment of the present invention;
FIG. 3 is a schematic view of locating the feature plane of intraoperative two-dimensional ultrasound in preoperative three-dimensional CT data by a reinforcement learning method according to an embodiment of the present invention;
FIG. 4 is a flow chart of locating the feature plane of intraoperative two-dimensional ultrasound in preoperative three-dimensional CT data by the reinforcement learning method according to an embodiment of the present invention;
FIG. 5 shows a diagram of an apparatus and media according to an embodiment of the invention.
Detailed Description
Reference will now be made in detail to the preferred embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to like elements throughout.
In the description of the present invention, 'several' means one or more and 'a plurality of' means two or more; terms such as 'greater than', 'less than' and 'exceeding' are understood to exclude the stated number, while terms such as 'above', 'below' and 'within' are understood to include the stated number.
In the description of the present invention, the consecutive numbering of the method steps is for convenience of reference and understanding only; in view of the overall technical solution and the logical relationships between the steps, their order of implementation may be adjusted without affecting the technical effect achieved by the technical solution of the present invention.
In the description of the present invention, unless otherwise explicitly defined, terms such as 'set' should be construed broadly, and those skilled in the art can reasonably determine the specific meanings of such terms in combination with the detailed contents of the technical solution.
FIG. 1 is a flowchart illustrating a prior art process. In a liver interventional procedure, the image space refers to the space of the preoperative three-dimensional CT data, and the patient space refers to the spatial coordinate system in which the intraoperative two-dimensional ultrasonic image is obtained through the intraoperative positioning device.
Fig. 2 is a flow chart according to an embodiment of the present invention, comprising: training on preoperative images: a deep convolutional neural network is established, preoperative enhanced three-dimensional CT image data are preprocessed, and multi-point localization is performed on the preprocessed data through the deep convolutional neural network to obtain a multi-point localization model; and ultrasonic image alignment: a detection model is constructed based on the DQN framework and the multi-point localization model, any input preoperative three-dimensional CT image and intraoperative two-dimensional ultrasonic image are processed through the detection model to obtain a three-dimensional spatial transformation matrix for aligning them, and surgical navigation is performed through the transformation matrix.
Fig. 3 is a schematic view of locating the feature plane of intraoperative two-dimensional ultrasound in preoperative three-dimensional CT data by the reinforcement learning method of an embodiment of the present invention, comprising a training part: preoperative enhanced three-dimensional CT data are preprocessed, resampled to a specified resolution and manually labeled, and all labeled three-dimensional CT data are fed into a deep convolutional neural network to train the multi-point localization model; and a test part: for any preoperative three-dimensional CT data set and intraoperative two-dimensional ultrasonic image, the DQN framework uses the trained neural network model to output a three-dimensional spatial transformation matrix, realizing alignment of the preoperative three-dimensional CT data with the intraoperative two-dimensional ultrasonic image. A preprocessing sketch follows.
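As an illustration of the resampling step, the following is a minimal Python sketch assuming the SimpleITK library; the 1 mm isotropic target spacing and the input file name are assumptions, since the text only specifies 'a specified resolution'.
```python
# Sketch of the preprocessing step: resampling an enhanced 3D CT volume to a
# fixed voxel spacing with SimpleITK. The 1 mm isotropic spacing is an
# assumption; the patent only says "a specified resolution".
import SimpleITK as sitk

def resample_ct(image: sitk.Image, spacing=(1.0, 1.0, 1.0)) -> sitk.Image:
    """Resample a 3D CT volume to the given voxel spacing (mm)."""
    old_spacing = image.GetSpacing()
    old_size = image.GetSize()
    # Choose the new size so the physical extent of the volume is unchanged.
    new_size = [int(round(osz * osp / nsp))
                for osz, osp, nsp in zip(old_size, old_spacing, spacing)]
    return sitk.Resample(
        image,
        new_size,
        sitk.Transform(),      # identity transform
        sitk.sitkLinear,       # linear interpolation of intensities
        image.GetOrigin(),
        spacing,
        image.GetDirection(),
        0,                     # default value for voxels outside the input
        image.GetPixelID(),
    )

ct = sitk.ReadImage("preop_ct.nii.gz")  # hypothetical file name
ct_iso = resample_ct(ct)
```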
Fig. 4 is a flow chart of locating the feature plane of intraoperative two-dimensional ultrasound in preoperative three-dimensional CT data by the reinforcement learning method according to an embodiment of the present invention; with reference to Fig. 3, the training mechanism based on the DQN algorithm comprises:
1. Building the multi-point joint detection network model: 3 agents jointly learn the features of 3 key points, and each agent is trained to obtain a model for detecting its corresponding key point. Learning with the joint features of the 3 key points yields a model superior to detecting each key point location in isolation, so the first half of the network, 4 convolutional layers and 3 max-pooling layers, is shared by the 3 agents, while 3 parallel branches of 3 fully connected layers each form the part specific to each agent. This design reduces the number of parameters to be trained and establishes implicit connections among the agents; in essence, the CNN is used to learn the input image features. A sketch of such a network follows;
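The following PyTorch sketch shows one plausible realization of this shared-backbone architecture; the channel widths, kernel sizes and hidden-layer widths are assumptions, since the text only fixes the layer counts (4 convolutional, 3 max-pooling, 3 parallel 3-layer fully connected branches), the 3 agents and, further below, the 6 actions and the (45, 45, 45) state patch.
```python
# Sketch of the multi-agent detection network: a convolutional backbone
# (4 conv layers, 3 max-pooling layers) shared by 3 agents, plus one private
# 3-layer fully connected head per agent producing Q-values for 6 actions.
import torch
import torch.nn as nn

class MultiAgentQNet(nn.Module):
    def __init__(self, n_agents=3, n_actions=6):
        super().__init__()
        # Shared feature extractor (assumed widths): 4 conv + 3 max-pool.
        self.shared = nn.Sequential(
            nn.Conv3d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(64, 64, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(64, 64, 3, padding=1), nn.ReLU(),
        )
        feat = 64 * 5 * 5 * 5  # flattened size for a 45x45x45 input patch
        # One private 3-layer fully connected head per agent.
        self.heads = nn.ModuleList([
            nn.Sequential(
                nn.Linear(feat, 512), nn.ReLU(),
                nn.Linear(512, 128), nn.ReLU(),
                nn.Linear(128, n_actions),
            ) for _ in range(n_agents)
        ])

    def forward(self, x):
        # x: (batch, n_agents, 45, 45, 45), one patch per agent.
        q = []
        for i, head in enumerate(self.heads):
            f = self.shared(x[:, i:i + 1])  # same shared weights for all agents
            q.append(head(f.flatten(1)))    # agent-specific Q-values
        return torch.stack(q, dim=1)        # (batch, n_agents, n_actions)
```
Sharing the backbone is what couples the agents: gradients from all three heads update the same convolutional filters.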
2. Constructing the environment for agent learning and exploration: each input three-dimensional CT data set is the current environment in which the 3 agents act;
3. Action space of the agent: the actions an agent may select in the three-dimensional CT data comprise six actions: up, down, left, right, front and back, as sketched below;
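A minimal sketch of this action space follows; the assignment of the six directions to voxel axes and the one-voxel step size are assumptions.
```python
# Sketch of the six-action space: each action moves the agent by one voxel
# along one axis of the CT volume (the axis assignment is an assumption).
ACTIONS = {
    0: (0, 0, 1),    # up
    1: (0, 0, -1),   # down
    2: (0, -1, 0),   # left
    3: (0, 1, 0),    # right
    4: (1, 0, 0),    # front
    5: (-1, 0, 0),   # back
}

def apply_action(pos, action_id):
    """Move the agent's (z, y, x) position by the chosen action."""
    dz, dy, dx = ACTIONS[action_id]
    z, y, x = pos
    return (z + dz, y + dy, x + dx)
```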
4. Policy of the agent: starting from a random point in the three-dimensional CT data, actions are selected according to a decaying ε-greedy policy. The algorithm maintains two policies simultaneously: one is used to select the current action, while the other, the target policy, is used to maximize the expected cumulative reward. A sketch of the decaying ε-greedy selection follows;
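The sketch below illustrates decaying ε-greedy action selection; the linear decay schedule and its endpoints are assumptions, as the text names the policy but not its parameters.
```python
# Sketch of a decaying epsilon-greedy policy: explore with probability
# epsilon (random action), otherwise exploit (argmax of Q-values); epsilon
# decays linearly over training. Schedule endpoints are assumptions.
import random
import torch

class DecayingEpsilonGreedy:
    def __init__(self, n_actions=6, eps_start=1.0, eps_end=0.1,
                 decay_steps=100_000):
        self.n_actions = n_actions
        self.eps_start, self.eps_end = eps_start, eps_end
        self.decay_steps = decay_steps
        self.step = 0

    def epsilon(self) -> float:
        frac = min(self.step / self.decay_steps, 1.0)
        return self.eps_start + frac * (self.eps_end - self.eps_start)

    def select(self, q_values: torch.Tensor) -> int:
        """q_values: 1-D tensor of Q-values for one agent's current state."""
        self.step += 1
        if random.random() < self.epsilon():
            return random.randrange(self.n_actions)  # explore
        return int(q_values.argmax())                # exploit
```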
5. Reward function: the agent starts from its current position P_t, sampling image data of size (45, 45, 45) centered on this position as its current state S_t; it selects an action a_t according to the policy π and enters the next position P_{t+1} (as above, the state of the agent at this position is denoted S_{t+1}); for this step the environment feeds back the reward
R_{t+1} = |P_t - P_g|² - |P_{t+1} - P_g|²
to the agent, where P_g is the target position, as sketched below;
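The following sketch shows one environment step with this reward; treating the volume as a NumPy array and omitting padding near the volume boundary are simplifying assumptions.
```python
# Sketch of one environment step for a single agent: move, compute the
# reward R_{t+1} = |P_t - P_g|^2 - |P_{t+1} - P_g|^2, and extract the next
# state as the (45, 45, 45) patch centered on the new position.
def step(volume, pos, action_delta, target, patch=45):
    """volume: 3-D NumPy array; pos/target: (z, y, x) voxel coordinates."""
    next_pos = tuple(p + d for p, d in zip(pos, action_delta))
    # Reward is the decrease in squared distance to the target keypoint,
    # i.e. positive when the move brings the agent closer to P_g.
    d_old = sum((p - g) ** 2 for p, g in zip(pos, target))
    d_new = sum((p - g) ** 2 for p, g in zip(next_pos, target))
    reward = d_old - d_new
    # Next state: patch centered on the new position (no boundary padding).
    half = patch // 2
    lo = [p - half for p in next_pos]
    hi = [p + half + 1 for p in next_pos]
    state = volume[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]
    return next_pos, state, reward
```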
6. Termination state: the agent is judged to enter the termination state when the distance between its current position and the target position is less than 1, or when it has taken more than 1500 steps in the current environment. In addition, if the agent 'oscillates', that is, the same position occurs 4 or more times among its past 20 historical positions, the agent is considered to have found the target position at the current scale and enters the next state. When all scales have entered the termination state, a new input data set is sampled. These three tests are sketched below.
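A sketch of these three termination tests follows, assuming positions are integer voxel coordinates kept in a rolling 20-element window.
```python
# Sketch of the termination tests: target reached (distance < 1), step
# budget exhausted (> 1500 steps), or oscillation (the same position occurs
# 4 or more times among the last 20 positions).
from collections import Counter, deque

def is_terminal(pos, target, n_steps, history) -> bool:
    dist = sum((p - g) ** 2 for p, g in zip(pos, target)) ** 0.5
    if dist < 1 or n_steps > 1500:
        return True
    # Oscillation: the agent is taken to have found the target at this scale.
    return bool(history) and Counter(history).most_common(1)[0][1] >= 4

history = deque(maxlen=20)  # rolling window of the agent's last 20 positions
```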
7. Building an experience pool: the tuples (S_t, a_t, R_{t+1}, isOver) recorded before the agent stops exploring in this environment are placed into the pool, and at the start of training a group of (S_t, a_t, R_{t+1}, isOver) tuples is randomly selected from the experience pool for training, as sketched below;
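A minimal sketch of such an experience pool follows; the capacity and batch size are assumptions.
```python
# Sketch of the experience pool: transitions (S_t, a_t, R_{t+1}, isOver) are
# stored as the agents explore, and random minibatches are drawn for training.
import random
from collections import deque

class ExperiencePool:
    def __init__(self, capacity=100_000):
        self.buffer = deque(maxlen=capacity)  # oldest transitions drop out

    def push(self, state, action, reward, is_over):
        self.buffer.append((state, action, reward, is_over))

    def sample(self, batch_size=32):
        # Uniform random sampling breaks the temporal correlation between
        # consecutive transitions, which stabilizes DQN training.
        return random.sample(self.buffer, batch_size)
```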
8. The network is trained so that the cumulative reward tends to its maximum; a sketch of the corresponding update follows.
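To make the objective concrete, the sketch below shows a standard DQN update in which the online network is regressed toward bootstrapped targets from the target network, matching the two-policy description in item 4; the discount factor, the Huber loss and the presence of next states in the batch (beyond the (S_t, a_t, R_{t+1}, isOver) tuple named above) are assumptions.
```python
# Sketch of one DQN update for a single agent's Q-network: regress
# Q(s, a) toward r + gamma * max_a' Q_target(s', a'), which drives the
# policy toward maximizing the cumulative reward.
import torch
import torch.nn.functional as F

def dqn_update(online, target, optimizer, batch, gamma=0.9):
    states, actions, rewards, next_states, is_over = batch  # tensors
    q = online(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        q_next = target(next_states).max(dim=1).values
        # Bootstrapped target; no future term on terminal transitions.
        y = rewards + gamma * q_next * (1.0 - is_over.float())
    loss = F.smooth_l1_loss(q, y)  # Huber loss, standard for DQN
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```
Periodically copying the online weights into the target network (e.g. target.load_state_dict(online.state_dict())) keeps the bootstrapped targets stable.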
FIG. 5 shows a schematic diagram of an apparatus and medium according to an embodiment of the invention. The apparatus comprises a memory 100 and a processor 200, wherein the processor 200 runs a computer program for performing: creating a deep convolutional neural network, preprocessing preoperative enhanced three-dimensional CT image data, and performing multi-point localization on the preprocessed data through the deep convolutional neural network to obtain a multi-point localization model; and constructing a detection model based on a DQN framework and the multi-point localization model, processing any preoperative three-dimensional CT image and intraoperative two-dimensional ultrasonic image through the detection model to obtain a three-dimensional spatial transformation matrix for aligning them, and performing surgical navigation through the transformation matrix. The memory 100 is used for storing data.
The embodiments of the present invention have been described in detail with reference to the accompanying drawings, but the present invention is not limited to the above embodiments, and various changes can be made within the knowledge of those skilled in the art without departing from the gist of the present invention.

Claims (8)

1. A surgical navigation method based on ultrasonic image feature plane detection is characterized by comprising the following steps:
training on preoperative images: establishing a deep convolutional neural network, preprocessing preoperative enhanced three-dimensional CT image data, and performing multi-point localization on the preprocessed data through the deep convolutional neural network to obtain a multi-point localization model;
and aligning the ultrasonic images: constructing a detection model based on the DQN framework and the multi-point localization model, processing any input preoperative three-dimensional CT image and intraoperative two-dimensional ultrasonic image through the detection model to obtain a three-dimensional spatial transformation matrix for aligning the preoperative three-dimensional CT image with the intraoperative two-dimensional ultrasonic image, and performing surgical navigation through the three-dimensional spatial transformation matrix.
2. The surgical navigation method based on ultrasonic image feature plane detection according to claim 1, wherein the preprocessing comprises: deleting enhanced three-dimensional CT image data that does not include the surgical object, and resampling the several preoperative enhanced three-dimensional CT data sets to a specified resolution and labeling them manually.
3. The surgical navigation method based on ultrasonic image feature plane detection according to claim 1, wherein the ultrasonic image alignment comprises:
constructing a multi-point joint detection network model, using 3 agents to jointly learn the features of 3 key points, and training each agent to obtain a model for detecting its corresponding key point; constructing a learning and exploration environment for the agents, with each input three-dimensional CT image serving as the activity environment of the agents; and configuring the action space of each agent, the action space comprising six actions: up, down, left, right, front and back, wherein 4 convolutional layers and 3 max-pooling layers of the network model are shared by the 3 agents, and 3 parallel 3-layer fully connected branches serve as the part specific to each agent.
4. The surgical navigation method based on ultrasonic image feature plane detection according to claim 3, wherein the ultrasonic image alignment further comprises:
a policy of the agent, whereby actions are selected according to the policy from a random starting point in the three-dimensional CT image data;
a reward function, whereby the agent starts from its current position P_t, samples image data centered on P_t as its current state S_t, selects an action a_t according to the policy π, and enters the next position P_{t+1}, the state of the agent at this position being S_{t+1}; the environment feeds back the reward
R_{t+1} = |P_t - P_g|² - |P_{t+1} - P_g|²
to the corresponding agent, where P_g is the target position;
and a termination state, which the agent enters when the distance between its current position and the detection target is judged to be less than 1, or when the agent has taken more than 1500 steps in the current environment.
5. The surgical navigation method based on ultrasonic image feature plane detection as claimed in claim 4, further comprising: if the same position occurs 4 or more times among the agent's past 20 historical positions, the agent enters the next state; when all scales have entered the termination state, a new input data set is sampled.
6. The surgical navigation method based on ultrasonic image feature plane detection according to claim 4, wherein the policy is configured as a decaying ε-greedy policy.
7. The surgical navigation method based on ultrasonic image feature plane detection as claimed in claim 4, further comprising:
establishing an experience pool, placing the states, actions, rewards and termination flags recorded before each agent stops exploring in its environment into the experience pool, and, at the start of training, randomly selecting a group of these tuples from the experience pool for training, the network being trained until the cumulative reward is maximized.
8. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the method steps of any one of claims 1 to 7.
CN202011085198.XA 2020-10-12 2020-10-12 Operation navigation method and medium based on ultrasonic image characteristic plane detection Active CN112370161B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011085198.XA CN112370161B (en) 2020-10-12 2020-10-12 Operation navigation method and medium based on ultrasonic image characteristic plane detection

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011085198.XA CN112370161B (en) 2020-10-12 2020-10-12 Operation navigation method and medium based on ultrasonic image characteristic plane detection

Publications (2)

Publication Number Publication Date
CN112370161A true CN112370161A (en) 2021-02-19
CN112370161B CN112370161B (en) 2022-07-26

Family

ID=74581255

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011085198.XA Active CN112370161B (en) 2020-10-12 2020-10-12 Operation navigation method and medium based on ultrasonic image characteristic plane detection

Country Status (1)

Country Link
CN (1) CN112370161B (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114052795A (en) * 2021-10-28 2022-02-18 南京航空航天大学 Focus imaging and anti-false-ligation treatment system combined with ultrasonic autonomous scanning
CN114159166A (en) * 2021-12-21 2022-03-11 广州市微眸医疗器械有限公司 Robot-assisted trocar automatic docking method and device
CN114663432A (en) * 2022-05-24 2022-06-24 武汉泰乐奇信息科技有限公司 Skeleton model correction method and device
CN115375854A (en) * 2022-10-25 2022-11-22 天津市肿瘤医院(天津医科大学肿瘤医院) Ultrasonic imaging equipment image processing method fused with liquid crystal device and related device
CN117137450A (en) * 2023-08-30 2023-12-01 哈尔滨海鸿基业科技发展有限公司 Flap implantation imaging method and system based on flap blood transport assessment

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102270305A (en) * 2011-08-11 2011-12-07 西北工业大学 Multi-agent cooperative target identification method based on MSBN (Multiple Sectioned Bayesian Network)
CN102999902A (en) * 2012-11-13 2013-03-27 上海交通大学医学院附属瑞金医院 Optical navigation positioning system based on CT (computed tomography) registration results and navigation method thereby
US9569736B1 (en) * 2015-09-16 2017-02-14 Siemens Healthcare Gmbh Intelligent medical image landmark detection
CN107403446A (en) * 2016-05-18 2017-11-28 西门子保健有限责任公司 Method and system for the image registration using intelligent human agents
US20180005083A1 (en) * 2015-09-16 2018-01-04 Siemens Healthcare Gmbh Intelligent multi-scale medical image landmark detection
CN108420529A (en) * 2018-03-26 2018-08-21 上海交通大学 The surgical navigational emulation mode guided based on image in magnetic tracking and art
CN110009669A (en) * 2019-03-22 2019-07-12 电子科技大学 A kind of 3D/2D medical image registration method based on deeply study
US20190378291A1 (en) * 2018-06-07 2019-12-12 Siemens Healthcare Gmbh Adaptive nonlinear optimization of shape parameters for object localization in 3d medical images
CN111462146A (en) * 2020-04-16 2020-07-28 成都信息工程大学 Medical image multi-mode registration method based on space-time intelligent agent

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102270305A (en) * 2011-08-11 2011-12-07 西北工业大学 Multi-agent cooperative target identification method based on MSBN (Multiple Sectioned Bayesian Network)
CN102999902A (en) * 2012-11-13 2013-03-27 上海交通大学医学院附属瑞金医院 Optical navigation positioning system based on CT (computed tomography) registration results and navigation method thereby
US9569736B1 (en) * 2015-09-16 2017-02-14 Siemens Healthcare Gmbh Intelligent medical image landmark detection
US20170103532A1 (en) * 2015-09-16 2017-04-13 Siemens Healthcare Gmbh Intelligent Medical Image Landmark Detection
US20180005083A1 (en) * 2015-09-16 2018-01-04 Siemens Healthcare Gmbh Intelligent multi-scale medical image landmark detection
CN107403446A (en) * 2016-05-18 2017-11-28 西门子保健有限责任公司 Method and system for the image registration using intelligent human agents
CN108420529A (en) * 2018-03-26 2018-08-21 上海交通大学 The surgical navigational emulation mode guided based on image in magnetic tracking and art
US20190378291A1 (en) * 2018-06-07 2019-12-12 Siemens Healthcare Gmbh Adaptive nonlinear optimization of shape parameters for object localization in 3d medical images
CN110009669A (en) * 2019-03-22 2019-07-12 电子科技大学 A kind of 3D/2D medical image registration method based on deeply study
CN111462146A (en) * 2020-04-16 2020-07-28 成都信息工程大学 Medical image multi-mode registration method based on space-time intelligent agent

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114052795A (en) * 2021-10-28 2022-02-18 南京航空航天大学 Focus imaging and anti-false-ligation treatment system combined with ultrasonic autonomous scanning
CN114052795B (en) * 2021-10-28 2023-11-07 南京航空航天大学 Focus imaging and anti-false-prick therapeutic system combined with ultrasonic autonomous scanning
CN114159166A (en) * 2021-12-21 2022-03-11 广州市微眸医疗器械有限公司 Robot-assisted trocar automatic docking method and device
CN114159166B (en) * 2021-12-21 2024-02-27 广州市微眸医疗器械有限公司 Robot-assisted automatic trocar docking method and device
CN114663432A (en) * 2022-05-24 2022-06-24 武汉泰乐奇信息科技有限公司 Skeleton model correction method and device
CN114663432B (en) * 2022-05-24 2022-08-16 武汉泰乐奇信息科技有限公司 Skeleton model correction method and device
CN115375854A (en) * 2022-10-25 2022-11-22 天津市肿瘤医院(天津医科大学肿瘤医院) Ultrasonic imaging equipment image processing method fused with liquid crystal device and related device
CN115375854B (en) * 2022-10-25 2022-12-20 天津市肿瘤医院(天津医科大学肿瘤医院) Ultrasonic image equipment image processing method fused with liquid crystal device and related device
CN117137450A (en) * 2023-08-30 2023-12-01 哈尔滨海鸿基业科技发展有限公司 Flap implantation imaging method and system based on flap blood transport assessment
CN117137450B (en) * 2023-08-30 2024-05-10 哈尔滨海鸿基业科技发展有限公司 Flap implantation imaging method and system based on flap blood transport assessment

Also Published As

Publication number Publication date
CN112370161B (en) 2022-07-26

Similar Documents

Publication Publication Date Title
CN112370161B (en) Operation navigation method and medium based on ultrasonic image characteristic plane detection
CN110464459B (en) Interventional plan navigation system based on CT-MRI fusion and navigation method thereof
US20120087557A1 (en) Biopsy planning and display apparatus
CN103356284B (en) Operation piloting method and system
US6669635B2 (en) Navigation information overlay onto ultrasound imagery
US9597054B2 (en) Ultrasonic guidance of a needle path during biopsy
CN101066210B (en) Method for displaying information in an ultrasound system
EP3813676A1 (en) Biopsy prediction and guidance with ultrasound imaging and associated devices, systems, and methods
US10849694B2 (en) Method and system for displaying the position and orientation of a linear instrument navigated with respect to a 3D medical image
US20080221446A1 (en) Method and apparatus for tracking points in an ultrasound image
US20100286518A1 (en) Ultrasound system and method to deliver therapy based on user defined treatment spaces
US11224405B2 (en) Medical navigation system employing optical position sensing and method of operation thereof
CN101918855A (en) MRI surgical systems for real-time visualizations using MRI image data and predefined data of surgical tools
US20220160434A1 (en) Ultrasound System with Target and Medical Instrument Awareness
CN103037761A (en) Insertion guidance system for needles and medical components
Chen et al. Automatic and accurate needle detection in 2D ultrasound during robot-assisted needle insertion process
US20230107629A1 (en) Non-Uniform Ultrasound Image Modification of Targeted Sub-Regions
CN219323439U (en) Ultrasound imaging system and ultrasound probe apparatus
CN116236280A (en) Interventional therapy guiding method and system based on multi-mode image fusion
US20230147164A1 (en) Systems and Methods for Artificial Intelligence Enabled Ultrasound Correlation
CN115770108A (en) Double-mechanical-arm ultrasonic-guided automatic puncture surgical robot and method
US20210196387A1 (en) System and method for interventional procedure using medical images
US20220241024A1 (en) Ultrasound object point tracking
EP3709889B1 (en) Ultrasound tracking and visualization
CN114173676A (en) Ultrasound object zoom tracking

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: Room 101, building 1, No. 36, Doukou Road, Guangdong Macao cooperative traditional Chinese medicine science and Technology Industrial Park, Hengqin New District, Zhuhai City, Guangdong Province 519000

Patentee after: Zhuhai Hengle Medical Technology Co.,Ltd.

Address before: Room 101, building 1, No. 36, Doukou Road, Guangdong Macao cooperative traditional Chinese medicine science and Technology Industrial Park, Hengqin New District, Zhuhai City, Guangdong Province 519000

Patentee before: Zhuhai Hengle Medical Technology Co.,Ltd.